The central fixation bias in scene viewing: Selecting an optimal viewing position independently of motor biases and image feature distributions

Benjamin W. Tatler

Journal of Vision, November 2007, Vol. 7(14):4. doi:10.1167/7.14.4
Abstract

Observers show a marked tendency to fixate the center of the screen when viewing scenes on computer monitors. This is often assumed to arise because image features tend to be biased toward the center of natural images and fixations are correlated with image features. A common alternative explanation is that experiments typically use a central pre-trial fixation marker, and observers tend to make small amplitude saccades. In the present study, the central bias was explored by dividing images post hoc according to biases in their image feature distributions. Central biases could not be explained by motor biases for making small saccades and were found irrespective of the distribution of image features. When the scene appeared, the initial response was to orient to the center of the screen. Following this, fixation distributions did not vary with image feature distributions when freely viewing scenes. When searching the scenes, fixation distributions shifted slightly toward the distribution of features in the image, primarily during the first few fixations after the initial orienting response. The endurance of the central fixation bias irrespective of the distribution of image features, or the observer's task, implies one of three possible explanations: First, the center of the screen may be an optimal location for early information processing of the scene. Second, it may simply be that the center of the screen is a convenient location from which to start oculomotor exploration of the scene. Third, it may be that the central bias reflects a tendency to re-center the eye in its orbit.

Introduction
When observers view scenes presented on computer monitors, they tend to look more frequently to the middle of the screen than to the outer edges. This central bias in fixation distributions was noted in the earliest studies of eye movements when viewing complex scenes (Buswell, 1935), is well documented in the scene viewing literature, and is found across a wide range of experimental paradigms (e.g., Mannan, Ruddock, & Wooding, 1995, 1996, 1997; Parkhurst, Law, & Niebur, 2002; Parkhurst & Niebur 2003; Tatler, Baddeley, & Gilchrist, 2005). Despite the fact that this central tendency is well documented, the reasons for this bias are as yet uncertain. Given the popular assumption that the correlation between image features and selecting locations for fixation underlies the observed central fixation tendencies, the central bias offers an interesting opportunity to explore not only the underlying mechanisms responsible for this tendency but also the extent to which fixations are determined by the image features present in scenes. Two possible explanations have been suggested in the literature for the tendency that human observers display to look at the center of scenes more than the periphery; these will now be discussed in turn. 
First, the central bias may result from motor biases in the saccadic system that favor small amplitude saccades over large amplitude saccades. It is clear from a wide range of eye movement studies that there is a tendency to make small amplitude saccades. Saccade magnitudes show a positively skewed, long-tailed distribution in most experimental settings in which complex scenes are viewed (e.g., Bahill, Adler, & Stark, 1975; Gajewski, Pearson, Mack, Bartlett, & Henderson, 2005; Pelz & Canosa, 2001; Tatler, Baddeley, & Vincent, 2006) and this form of distribution is common across tasks. Given this tendency to make small saccades, the fact that scene viewing experiments typically use a centrally located pre-trial fixation marker means that a bias to make small amplitude saccades will result in the observed central bias in fixation distributions (e.g., Parkhurst & Niebur, 2003). The influence of any motor bias toward small amplitude saccades would be particularly prominent if presentation times for individual scenes were brief, because fewer saccades would be made during the presentation and there would be little opportunity to move far from the central starting point of the trial (many experiments use presentation times of only a few seconds, allowing only a small number of saccades to be executed). 
Second, it is often assumed that the bias arises from selecting image features for fixation, which are often centrally biased in the scenes (e.g., Parkhurst et al., 2002; Reinagel & Zador, 1999; Tatler et al., 2005). A range of recent studies have shown that the locations selected for fixation by human observers tend to correlate with low-level image features in the scene (Itti & Koch, 2000; Kadir & Brady, 2001; Parkhurst et al., 2002; Parkhurst & Niebur, 2003; Reinagel & Zador, 1999; Renninger, Coughlan, & Verghese, 2005; Tatler et al., 2005, 2006); in particular, fixated locations tend to have higher-than-average contrast and edge information (Baddeley & Tatler, 2006; Parkhurst & Niebur, 2003; Reinagel & Zador, 1999). In natural scenes, the distribution of features and objects tends not to be uniform. There tends to be a reliable bias toward having more features and objects in the center of natural scenes (e.g., Parkhurst & Niebur, 2003; Reinagel & Zador, 1999; Tatler et al., 2005). This bias is in part because photographers tend to place objects of interest at the center of the viewfinder. Thus, if fixations and features correlate, a centrally biased distribution of features in scenes would result in the observed central biases in human fixation distributions. 
The present study explores the central fixation tendency displayed by observers when viewing natural scenes. In order to consider the possible role of motor biases in producing the central fixation bias, the location of the pre-trial fixation marker was varied randomly to minimize the possible influence of any motor biases. Furthermore, human observer data were compared to random walk simulations constructed using saccade characteristics matched to the human observers. In order to consider the possible role of the distribution of image features in producing the central fixation bias, the image set was divided post hoc according to the distribution of features in the scenes. Fixation distributions on sets of images with different biases in the distribution of their features were then compared. If the central tendency in fixation behavior derives from central tendencies in image features in scenes, then scenes that do not have central biases in their feature distributions ought to show either no central fixation bias or a much reduced central fixation tendency. Parkhurst and Niebur (2003) found that the distribution of fixations was centrally biased on a range of scene types, but that this bias tended to shift in the direction expected given the contents of the scene: toward the bottom for urban scenes (presumably to objects on the ground plane) and toward the top for interiors (possibly at objects on walls). 
In the present study, observers viewed a range of images of natural scenes under one of two task conditions: Observers were either asked to view the scenes freely or to search for a small luminance target embedded in the scene. This task manipulation was introduced because of the well-documented influence that behavioral task has upon where observers fixate (e.g., Hayhoe, Shrivastava, Mruczek, & Pelz, 2003; Land & Hayhoe, 2001; Nelson, Cottrell, Movellan, & Sereno, 2004; Yarbus, 1967). A current challenge in understanding the link between image features and fixation is to account for such task-related differences. Here I consider (1) whether task influences the central fixation tendency and (2) whether a task that explicitly requires search for a target defined only in terms of a low-level feature (its brightness) promotes a greater association between the distribution of image features and the distribution of fixations. 
Considering the link between image feature distributions and fixation locations offers an opportunity not only to explore the basis of the central fixation bias in human observers, but also to consider the nature of the association between features and fixation. Previous reports of the close association between image features and locations fixated have often been interpreted as suggesting that the presence of these low-level image features is a causal factor in determining where observers fixate (e.g., Itti & Koch, 2000; Parkhurst & Niebur, 2004; Parkhurst et al., 2002; Underwood & Foulsham, 2006; Underwood, Foulsham, van Loon, Humphreys, & Bloyce, 2006). However, the correlation between features and fixation locations should not necessarily be interpreted as causal (e.g., Carmi & Itti, 2006; Einhäuser & König, 2003; Henderson, 2003). By considering whether the tendency to look at the center of scenes persists when the distribution of image features is not centrally biased, we can dissociate the extent to which this tendency arises from the organization of the scene or from higher level strategic factors. 
Method
Participants
Twenty-two participants took part in the free viewing condition. Thirty participants carried out the search task. All had normal or corrected-to-normal vision, were naïve to the purposes of the study, and took part in the experiment either for course credit or monetary reward. 
Stimuli and procedure
Participants each viewed 120 photographic images of real-world scenes. Scenes were indoor (40 images), outdoor with man-made structures present (e.g., urban scenes; 40 images), and outdoor with no man-made structures present (40 images). Images were taken using a Nikon D2 digital SLR using the highest resolution (4 megapixels). In taking the photographs, care was taken to avoid always placing objects of interest in the center of the composition. In this way, it was hoped that the set of scenes would not always have image features biased toward the center. 
The images were presented in 1600 × 1200 pixel format on a 21-in. SVGA color monitor with a refresh rate of 100 Hz and a maximum luminance of 55 cd/m². The monitor was positioned at a viewing distance of 60 cm; consequently, the images presented subtended 40° horizontally and 30° vertically. In the search task, a small (SD = 0.3°) Gaussian brightness target was added to a randomly selected location in half of the scenes (the scenes to which the brightness target was added were randomly selected for each participant). 
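As an illustration of how such a target can be constructed, the sketch below adds a Gaussian brightness increment to an image at a random location. It assumes the 40 pixels per degree implied by the display geometry (1600 px over 40°); the target amplitude is not reported in the paper, so the value used here, along with the function and parameter names, is purely illustrative.

```python
import numpy as np

def add_luminance_target(image, pixels_per_degree=40.0, sigma_deg=0.3,
                         amplitude=20.0, rng=None):
    """Add a small Gaussian brightness increment at a random image location.

    `pixels_per_degree` follows from the reported display geometry
    (1600 px over 40 deg); `amplitude` (in grey levels) is illustrative,
    as the paper does not report the target contrast.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    cy, cx = rng.integers(0, h), rng.integers(0, w)   # random target centre
    sigma_px = sigma_deg * pixels_per_degree

    y, x = np.mgrid[0:h, 0:w]
    bump = amplitude * np.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                              / (2 * sigma_px ** 2))

    out = image.astype(float)
    if out.ndim == 3:                 # add the same increment to each channel
        out += bump[..., None]
    else:
        out += bump
    return np.clip(out, 0, 255).astype(np.uint8), (cx, cy)
```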
The search task was not chosen to be naturalistic and has no obvious real-world analogue. However, this task was useful for the question posed in this study in two ways. First, the target was not a natural object and was placed entirely randomly in the scene. As such there was no contextual cue as to where the target might occur (cf. Torralba, Oliva, Castelhano, & Henderson, 2006), and thus the scenes had to be searched in their entirety to locate the target. Second, the target was identified purely in terms of a low-level image feature (luminance) and as such, any association between image features and fixation selection should be maximally apparent when explicitly searching on the basis of such a feature. This therefore made it possible to consider whether the distribution of image features influences search when contextual (high level) cues are effectively removed. An example of a scene with a Gaussian luminance target embedded is shown in Figure 1.
Figure 1
 
An example of a scene with the Gaussian luminance target embedded. The luminance target is on the wall at the left of the scene, indicated by the arrow. For clarity, the inset in the lower left of the figure shows an enlargement of the region containing the luminance target.
Before each image was presented, participants were required to fixate a small marker positioned randomly within a circle of radius 10° from the center of the screen (thus the fixation marker was between 0° and 10° from the screen center). Images were presented for 5 seconds and were followed by a white noise mask. 
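For completeness, the following sketch shows one way such a randomly placed pre-trial marker could be generated. The paper states only that the marker fell within 10° of the screen center, so the choice of a uniform distribution over the disc, and the function name, are assumptions.

```python
import numpy as np

def random_marker_position(max_ecc_deg=10.0, rng=None):
    """Sample a pre-trial fixation marker position within 10 deg of screen centre.

    Uniform sampling over the disc area is an assumption; the paper only
    states that the marker lay within a 10 deg circle around the centre.
    """
    rng = np.random.default_rng() if rng is None else rng
    r = max_ecc_deg * np.sqrt(rng.uniform())      # sqrt gives uniform density over the disc
    theta = rng.uniform(0, 2 * np.pi)
    return r * np.cos(theta), r * np.sin(theta)   # (x, y) offset from centre, in degrees
```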
Eye movement recording
Eye movements were recorded using an SR Research Ltd. EyeLink II eye tracker, sampling pupil position at 500 Hz. A 9-point target grid was used to calibrate eye position and the spatial accuracy of this calibration was assessed using a further 9-point grid. If the second 9-point grid revealed a spatial accuracy worse than ±0.5°, the eye tracker was re-calibrated. Eye position data were collected for the eye that produced the better spatial accuracy as determined using the calibration. Saccades and fixations were defined using the saccade detection algorithm supplied by SR Research: Saccades were identified by deflections in eye position in excess of 0.1°, with a minimum velocity of 30°/s and a minimum acceleration of 8000°/s², maintained for at least 4 ms. A minimum fixation duration of 50 ms was used. The first fixation in each trial was defined as the first fixation that began after the onset of the scene image. Thus, the fixation on the pre-trial fixation marker was not included in the analyses. 
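The sketch below illustrates a simplified velocity/acceleration parser of this kind applied to a 500 Hz gaze trace in degrees. It is not the proprietary SR Research algorithm: the thresholds mirror those reported above, but the 0.1° deflection criterion is omitted, and the run-length cleanup and the function name are illustrative assumptions.

```python
import numpy as np

def classify_samples(x_deg, y_deg, fs=500, vel_thresh=30.0, acc_thresh=8000.0,
                     min_sacc_ms=4, min_fix_ms=50):
    """Return a boolean array marking saccade samples in a gaze trace.

    Simplified velocity/acceleration parser using the thresholds reported
    in the text; not the SR Research algorithm.
    """
    dt = 1.0 / fs
    vx = np.gradient(np.asarray(x_deg, float), dt)
    vy = np.gradient(np.asarray(y_deg, float), dt)
    speed = np.hypot(vx, vy)                     # deg/s
    accel = np.abs(np.gradient(speed, dt))       # deg/s^2
    labels = (speed > vel_thresh) | (accel > acc_thresh)

    # Relabel runs that are too brief: saccades shorter than 4 ms and
    # fixations shorter than 50 ms are merged into their neighbours.
    min_sacc = int(round(min_sacc_ms * fs / 1000))
    min_fix = int(round(min_fix_ms * fs / 1000))
    start = 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            run_len = i - start
            too_short = (labels[start] and run_len < min_sacc) or \
                        (not labels[start] and run_len < min_fix)
            if too_short:
                labels[start:i] = not labels[start]
            start = i
    return labels
```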
For the search task, data were only analyzed for eye movements made when viewing images in which the search target was absent: In this way, the stimuli viewed in the two experimental conditions were identical for the eye movements analyzed in the present study. 
While qualitatively participants did not move their heads much during recording sessions, no quantitative measure of head position was available: No head restraint was used and no separate record of head position was made (the EyeLink II corrects gaze for head movements). 
Quantifying distributions of image features
A convenient way to quantify the contents of natural scenes is to describe the image in terms of the visual features present in it. Four different regularities in the images were quantified: brightness, chromaticity, contrast, and edge-content. The same procedure was used for quantifying these regularities as detailed in Tatler et al. (2005), with the exception that each regularity was quantified over a wider range of spatial scales. The filters had standard deviations between 0.625 and 20 cpd (this refers to the standard deviation of the center Gaussian in the contrast feature maps, and the standard deviation of the Gaussian carrier for the edge-content feature maps). 
In order to quantify the distribution of features in the images, the feature maps were combined across all spatial scales and across the four features by summing the individual maps. This provides a convenient description of the distributions of image features in the images. 
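A rough sketch of this kind of computation is given below. The actual maps follow the filters of Tatler et al. (2005); here, purely for illustration, brightness is approximated by a local mean, contrast by a Gaussian-weighted local standard deviation, and edge content by smoothed gradient magnitude (chromaticity is omitted for a greyscale input). The scales, the per-map normalization, and the function name are assumptions, not the published procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def combined_feature_map(gray, sigmas_px=(2, 4, 8, 16, 32)):
    """Very simplified stand-in for the multi-scale combined feature map."""
    gray = gray.astype(float)
    total = np.zeros_like(gray)
    for s in sigmas_px:
        local_mean = gaussian_filter(gray, s)                    # brightness proxy
        local_var = gaussian_filter(gray ** 2, s) - local_mean ** 2
        contrast = np.sqrt(np.clip(local_var, 0, None))          # local contrast proxy
        edges = gaussian_filter(np.hypot(sobel(gray, 0), sobel(gray, 1)), s)
        # Normalise each map before summing so no single feature dominates.
        for fmap in (local_mean, contrast, edges):
            total += fmap / (fmap.sum() + 1e-12)
    return total
```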
For the purposes of considering whether the distribution of features in scenes influences the central fixation bias, images were divided post hoc on the basis of the combined feature maps. First, the images were divided according to whether they had a bias in the feature distribution toward the center or periphery of the image. Images were categorized as having a bias toward the center if the sum of all pixels in the overall feature map that were within 7.5 degrees of the center of the scene exceeded the sum of all pixels in the overall feature map that were further than 7.5 degrees from the center. Using this approach, 63 images were categorized as having a feature bias toward the center of the image, and 57 images were categorized as having a bias toward the periphery. Figures 4A and 4B show the distribution of image features in each of these two categories. The modulation in image features across these distributions was considerable. The central/peripheral biases in these two categories of images are clear in both the horizontal (Figure 4C) and vertical (Figure 4D) directions. Examples of images from these categories are shown in Figures 2A and 2B.
Figure 2
 
Examples of images for each of the post hoc categorizations based on the distribution of image features in the scenes.
The images were also divided into those with biases to the left of the image (where the sum of all pixels to the left of the vertical midline in the overall feature map was greater than the sum of all pixels to the right of the vertical midline, N = 57 images) and those with a bias in features toward the right of the image (N = 63 images). Figures 5A and 5B show the image feature distributions for these two categories of images. There was considerable modulation in image features in the horizontal direction in these two image categories (Figure 5C), but little difference in the vertical direction (Figure 5D). These images therefore differ in their horizontal distribution of image features but not in their vertical distribution of image features. Examples of images from these categories are shown in Figures 2C and 2D.
Finally, images were divided into those with a bias in their feature distribution toward the top of the image (where the sum of all pixels above the horizontal midline in the overall feature map was greater than the sum of all pixels below the horizontal midline, N = 66 images) and those images with a feature distribution biased toward the bottom of the image (N = 54 images). Figures 6A and 6B show the distribution of features in images in each of these two categories. There was little variation in image features in the horizontal direction in these images (Figure 6C), but considerable modulation in image features in the vertical direction in these images (Figure 6D). Thus, these images differ in their vertical distribution of image features but not in their horizontal distribution of features. Examples of images from these categories are shown in Figures 2E and 2F.
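The three post hoc splits can be summarised in a few lines of code. The sketch below operates on the combined feature map of a single image; the conversion of 0.025° per pixel follows from the display geometry reported above, while the function name and the exact handling of the midline pixels are assumptions.

```python
import numpy as np

def categorise_image(feature_map, degs_per_px=0.025, central_radius_deg=7.5):
    """Post hoc categorisation of an image from its combined feature map.

    Returns three labels: centre/periphery (feature mass within vs beyond
    7.5 deg of the image centre), left/right and top/bottom (mass on either
    side of the vertical and horizontal midlines).
    """
    h, w = feature_map.shape
    y, x = np.mgrid[0:h, 0:w]
    ecc_deg = np.hypot(x - w / 2, y - h / 2) * degs_per_px

    central = feature_map[ecc_deg <= central_radius_deg].sum()
    peripheral = feature_map[ecc_deg > central_radius_deg].sum()

    left, right = feature_map[:, : w // 2].sum(), feature_map[:, w // 2:].sum()
    top, bottom = feature_map[: h // 2, :].sum(), feature_map[h // 2:, :].sum()

    return ("central" if central > peripheral else "peripheral",
            "left" if left > right else "right",
            "top" if top > bottom else "bottom")
```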
Quantifying the distributions of fixations
Fixation distributions on the scenes were characterized using kernel density estimation (Matlab toolbox, available at http://ttic.uchicago.edu/~ihler/code/kde.php) with Gaussian kernels. A full description of kernel density estimation can be found in Bishop (1996), but it essentially “smoothes” over the data with the kernel, the width of which affects the degree of smoothing. Kernel widths were set at 1 degree to reflect estimates of foveal size. 
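As a stand-in for the Matlab toolbox cited above, the sketch below evaluates a Gaussian kernel density estimate of fixation positions on a regular grid. The 1° bandwidth mirrors the value reported; the grid extent and resolution are arbitrary choices, and the function name is an assumption.

```python
import numpy as np

def fixation_density(fix_x_deg, fix_y_deg, bandwidth_deg=1.0,
                     extent=(-20, 20, -15, 15), grid_step=0.25):
    """Gaussian kernel density estimate of fixation locations on a grid.

    Coordinates are in degrees relative to screen centre; the 1 deg
    bandwidth mirrors the value reported in the text.
    """
    xs = np.arange(extent[0], extent[1] + grid_step, grid_step)
    ys = np.arange(extent[2], extent[3] + grid_step, grid_step)
    gx, gy = np.meshgrid(xs, ys)

    density = np.zeros_like(gx)
    for fx, fy in zip(fix_x_deg, fix_y_deg):
        density += np.exp(-((gx - fx) ** 2 + (gy - fy) ** 2)
                          / (2 * bandwidth_deg ** 2))
    density /= 2 * np.pi * bandwidth_deg ** 2 * len(fix_x_deg)
    return xs, ys, density
```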
Results and discussion
Motor bias and the central fixation bias
If central biases in fixation behavior could be explained by motor biases in the oculomotor system, then a random walk model of fixation selection should produce similar central tendencies, provided that the steps in the random walk were matched to the characteristics of human saccades. To test this hypothesis, sequences of fixations were simulated, sampling randomly from distributions of saccade amplitudes and directions constructed from the observed human oculomotor behavior. As such, any motor biases toward small amplitude saccades in the human observers would also be present in the simulations. In the simulations, the same number of sequences as provided by the human observers was generated (one sequence for each image for each observer). Furthermore, the number of simulated fixation locations in each sequence was matched to the human observer data. In this way, the simulations and human behavioral data provided distributions of fixations comprising the same number of sequences and fixations. As explained in the Method section, the location of the pre-trial fixation marker varied randomly on each trial in order to minimize possible influences of motor biases on promoting central fixation tendencies. Consequently, the simulated fixation sequences each used the same starting locations as the human observers did. In this way, it was possible to assess whether, given the randomized starting position, random walk processes with biases toward small amplitude saccades could account for the observed central fixation tendencies. It is clear from Figure 3 that the strong central tendency in fixation behavior when freely viewing scenes or searching the scenes for a luminance target (Figure 3A) was not matched by a random walk simulation even when the starting locations and saccade characteristics were matched to the human observers (Figure 3B). Figure 3D plots horizontal cross sections through the vertical midline of these distributions to show the differences between the observed data and the simulations more clearly: Here it is clear that the simulations did not produce the strong central bias exhibited by the human observers. Therefore, by simulating sequences of fixations based upon random walk processes with the amplitudes and directions sampled from human observer data, it was demonstrated that the central bias in fixation behavior could not be accounted for purely in terms of the natural tendency that observers show to make small amplitude saccades. 
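A minimal sketch of one such simulated sequence is given below. Amplitudes and directions are resampled with replacement from the pooled human saccade data, and the starting position and sequence length are matched to a human trial, as described above. How steps that would leave the screen were handled is not stated in the paper, so the clipping used here, and the function name, are assumptions.

```python
import numpy as np

def simulate_random_walk(start_xy_deg, n_fixations, observed_amplitudes_deg,
                         observed_directions_rad, screen=(40.0, 30.0), rng=None):
    """Generate one simulated fixation sequence for the random walk model.

    Saccade amplitudes and directions are drawn with replacement from the
    observed human data; the start position and sequence length are matched
    to a human trial. Clipping to the screen edge is an assumption.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, y = start_xy_deg
    fixations = []
    for _ in range(n_fixations):
        amp = rng.choice(observed_amplitudes_deg)
        direction = rng.choice(observed_directions_rad)
        x = np.clip(x + amp * np.cos(direction), -screen[0] / 2, screen[0] / 2)
        y = np.clip(y + amp * np.sin(direction), -screen[1] / 2, screen[1] / 2)
        fixations.append((x, y))
    return fixations
```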
Figure 3
 
(A) Kernel density estimates of the observed fixation distributions in the free viewing and search tasks. (B) Kernel density estimates of model “fixation” locations generated from a random walk model with amplitude and direction characteristics matched to the human observers. The starting position of each simulated sequence was matched to the human observers. (C) Kernel density estimates of model fixations generated using a random walk simulation in which each simulated sequence started in the center of the screen. Simulated saccades shared the same amplitude and direction characteristics as the human observers. (D) Horizontal cross sections through the vertical midline of the distributions plotted in panels A–C. The central tendency in the human data (red line) is not present in the simulations with starting positions matched to the human observers (blue line) or in the simulations with central starting positions (green line). The shaded regions indicate 95% confidence limits for the kernel density estimates, generated by creating estimates of 200 bootstrapped samples (Efron & Tibshirani, 1993).
Given that most scene viewing experiments use a central fixation marker before each trial, it is interesting to consider whether central fixation biases in such experiments might arise at least in part from the central starting location. Therefore, a second set of random walk simulations was run in which the steps in the walk were matched to the human observers in terms of amplitudes and directions and in which the same number and lengths of simulated fixation sequences were generated, but in which the sequences always started in the center of the screen (Figure 3C). As expected, in these simulations there was a more pronounced central bias than in the simulations in which starting locations were not always central but were matched to the human observers. However, the central tendencies in these simulations were still far less pronounced than those displayed by the human observers (Figure 3D). Hence, not only were such oculomotor biases unable to account for the observed data, in which the pre-trial fixation marker varied in its location on the screen, but this explanation did not produce a particularly strong central bias even when the pre-trial fixation marker was always centrally located. We can therefore effectively dismiss the first possible account of the central bias in fixation behavior that has been suggested previously in the literature (e.g., Parkhurst & Niebur, 2003). 
Distributions of features and the central fixation bias
Given that the observed central fixation bias cannot be explained in terms of motor biases in the oculomotor system, I will now consider whether the central bias in fixation behavior arises from the distribution of features in the images. 
In order to address the question of whether the central bias in fixation behavior arises as a result of central biases in feature distributions within scenes, scenes in which image features were biased toward the center of the image ( Figure 4A, N = 63 images) were compared to those in which image features were biased toward the periphery of the images ( Figure 4B, N = 57 images). It is clear that when observers either freely viewed scenes or searched for a luminance target within scenes, there was a central bias in fixation behavior, which occurred both when the features in the image were more prevalent in the center of the scenes being viewed, and when the features in the image were more prevalent at the margins of the scene. This figure clearly demonstrates that the central fixation tendency observed in many scene viewing experiments is not simply a result of central biases in the features present in the scenes. By taking horizontal and vertical cross sections through the distributions ( Figures 4C and 4D), it is clear how little difference the distribution of image features made to the distribution of fixations. There was some evidence of a decrease in the magnitude of the central fixation bias for scenes with peripheral biases in their feature distributions ( Figures 4C and 4D), but the distributions overlapped considerably. 
Figure 4
 
(A) Distributions of image features and fixations (in both free viewing and search conditions) for images in which there was a bias toward centrally distributed image features. The color bar for the image features plot shows the modulation in the image features across the distribution as a proportion difference from the mean in the distribution. Fixation distributions are kernel density estimates. (B) Distributions of image features and fixations (in both free viewing and search conditions) for images in which image features were biased toward the periphery of the scenes. (C) Horizontal and (D) vertical cross sections through the midlines of the distributions shown in panels A and B. The red lines show the distributions of features and fixations for scenes with centrally biased image feature distributions. The blue lines show the distributions of features and fixations for scenes with peripherally biased image feature distributions. The shaded regions around the solid lines in these plots indicate bootstrapped 95% confidence limits for the kernel density estimates of the fixation distributions (Efron & Tibshirani, 1993).
In order to assess whether the task or bias in image features influenced the overall spatial extent of the observers' fixations in the scenes, the variances of fixation locations (expressed in terms of distance from screen center for each fixation) were compared (for a similar approach to assessing the spatial extent of fixation distributions, see Crundall & Underwood, 1998). A 2 (task) × 2 (central/peripheral feature bias) mixed design ANOVA showed no main effects of task or feature bias upon the variance of fixation locations. Thus, whether the image features were biased toward the center or periphery of the images did not influence the overall distribution of fixations made by viewers; nor did the task of the observer. However, there was a significant interaction between task and feature bias, F(1, 50) = 32.26, p < .001. Bonferroni-corrected post hoc t-tests showed that when freely viewing the scenes, fixation location variance was higher for scenes with peripheral image feature biases than for scenes with central feature biases (p = .012). Conversely, when searching the scenes, variance in fixation location was higher for scenes with centrally biased features than for scenes with peripherally biased features (p < .001). Thus, while having more prevalent image features in the periphery of the scenes did slightly increase the spatial variance of the fixation distributions in the free viewing task, it had the opposite effect for the search task. 
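For illustration, the analysis of the spatial extent of fixations could be set up as in the sketch below, which computes the per-participant variance of fixation eccentricity and submits it to a 2 × 2 mixed ANOVA (here using the pingouin package). The long-format table and all column names are assumptions about how the data might be organized, not the analysis code used in the study.

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Assumed long-format table: one row per fixation with columns 'subject',
# 'task' (free/search), 'feature_bias' (central/peripheral), and 'x_deg',
# 'y_deg' (fixation position relative to screen centre, in degrees).
def spatial_extent_anova(fixations: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the 2 (task) x 2 (feature bias) analysis of fixation spread."""
    fixations = fixations.assign(ecc=np.hypot(fixations.x_deg, fixations.y_deg))
    per_subject = (fixations.groupby(['subject', 'task', 'feature_bias'])['ecc']
                   .var().reset_index(name='ecc_var'))
    return pg.mixed_anova(data=per_subject, dv='ecc_var',
                          within='feature_bias', between='task',
                          subject='subject')
```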
While the above comparison of images with central and peripheral biases in their image feature distributions clearly suggests that the central fixation bias prevails even in the absence of a central bias in image features, the fact that there was some suggestion of a change in the magnitude of the central fixation bias under these circumstances warrants further consideration. Any shift in the fixation distribution that was contingent upon the distribution of image features may be easier to see if alternative categorizations of the images in terms of their feature distributions are employed. As a result, fixation behavior was compared in scenes (1) with image features biased toward the left of the scene or the right of the scene and (2) with image features biased toward the top or bottom of the scene. 
Figure 5 shows fixation behavior when viewing scenes with features biased to either the left ( Figure 5A, N = 57 images) or right ( Figure 5B, N = 63 images) of the scenes. 
Figure 5
 
(A) Distributions of image features and fixations (in both free viewing and search conditions) for images in which there was an image feature bias toward the left. The color bar for the image features plot shows the modulation in the image features across the distribution as a proportion difference from the mean in the distribution. Fixation distributions are kernel density estimates. (B) Distributions of image features and fixations (in both free viewing and search conditions) for images in which image features were biased toward the right of the scenes. (C) Horizontal and (D) vertical cross sections through the distributions shown in panels A and B. For image features, the plots show the mean distribution of features in the horizontal (C) or vertical (D) direction. For fixations, the plots show horizontal (C) and vertical (D) cross sections through the midlines of the distributions shown in panels A and B. The red lines show the distributions of features and fixations for scenes with image feature distributions biased toward the left. The blue lines show the distributions of features and fixations for scenes with image feature distributions biased toward the right. The shaded regions around the solid lines in these plots indicate bootstrapped 95% confidence limits for the kernel density estimates of the fixation distributions (Efron & Tibshirani, 1993).
The central fixation bias was clear irrespective of the image feature distribution, and the fixation distributions when looking at scenes with features predominantly on the left were very similar to those when viewing scenes with features predominantly on the right of the image. In order to assess whether there was any shift in the fixation locations in the direction of the image feature biases in the scenes, the mean horizontal locations for all fixations that lay within 5° of the vertical midline of the screen were compared. A 2 (task) × 2 (left/right feature bias) mixed design ANOVA showed a significant interaction between task and feature bias, F(1, 50) = 6.70, p = .013. Bonferroni-corrected post hoc t-tests showed that in the search task, the distribution of image features influenced the horizontal mean fixation location ( p < .001), and this was in the same direction as the image feature biases ( Figure 5C). However, this trend was not evident in the free viewing data: There was no significant difference in the mean horizontal location of fixations for scenes with features biased toward the left or right ( p = .964; Figure 5C). 
This trend toward a greater correlation between the image features and the fixation location distribution in the search task than when freely viewing the images can be interpreted in at least two ways. First, it could be argued that image features played a more prominent role in selecting where to fixate in the search task than they did when freely viewing the same scenes. Such an account of fixation behavior may not be surprising given that the search task required observers to locate a target defined only in terms of its luminance: As such, this may promote the selection of low-level image features in this task. Indeed, any purely low-level account of fixation selection would predict this result. For example, the weighted salience framework (Itti & Koch, 2000; Peters, Iyer, Itti, & Koch, 2005) would predict a greater influence of a particular feature on viewing when the target is defined by that feature. 
Alternatively, it may be that the luminance target is hardest to locate when superimposed on cluttered regions of the scene. These regions of scenes would therefore be expected to require the most scrutiny by the observer and so attract a large amount of fixation. Since cluttered regions of scenes are likely to have the highest image feature content, fixation distributions would be shifted in the direction of feature biases, without any causality of the image features. 
However, whatever the explanation of the shift in fixation distributions in the search task, it should be noted that the magnitude of the shift in the fixation distribution was very small. 
Figure 6 shows fixation behavior when viewing scenes where the image features were biased either toward the top of the image ( Figure 6A, N = 66 images) or toward the bottom of the image ( Figure 6B, N = 54 images). 
Figure 6
 
(A) Distributions of image features and fixations (in both free viewing and search conditions) for images in which there was an image feature bias toward the top of the scene. The color bar for the image features plot shows the modulation in the image features across the distribution as a proportion difference from the mean in the distribution. Fixation distributions are kernel density estimates. (B) Distributions of image features and fixations (in both free viewing and search conditions) for images in which image features were biased toward the bottom of the scenes. (C) Horizontal and (D) vertical cross sections through the distributions shown in panels A and B. For image features, the plots show the mean distribution of features in the horizontal (C) or vertical (D) direction. For fixations, the plots show horizontal (C) and vertical (D) cross sections through the midlines of the distributions shown in panels A and B. The red lines show the distributions of features and fixations for scenes with image feature distributions biased toward the top. The blue lines show the distributions of features and fixations for scenes with image feature distributions biased toward the bottom. The shaded regions around the solid lines in these plots indicate bootstrapped 95% confidence limits for the kernel density estimates of the fixation distributions (Efron & Tibshirani, 1993).
Once again the central fixation bias was evident irrespective of the image feature bias or task. A 2 (task) × 2 (top/bottom feature bias) mixed design ANOVA was run to assess whether the vertical distribution of image features influenced the vertical mean of fixation locations. For each observer, the mean vertical fixation location was calculated for all fixations within 5° of the horizontal midline of the screen. There was a significant interaction between task and feature bias, F(1, 50) = 19.40, p < .001. Bonferroni-corrected post hoc t-tests showed that in the free viewing condition, the vertical distribution of fixations did not differ according to the distribution of image features ( p = .159; Figure 6D). However, in the search task, the vertical mean of the distribution of fixation locations did differ according to the image feature bias ( p < .001): There was a shift of the distribution of fixations toward the top of the scene, when image features were more prevalent in the upper half of the image ( Figure 6D). This shift did not appear to be mirrored by a downward shift in the distribution of fixations when image features were biased toward the lower half of the scene: Here the distribution remained centered on the vertical midline of the image. 
As was found when the images were divided according to the horizontal distribution of image features, the vertical distribution of features appeared to influence fixation behavior more when observers were searching for a luminance target than when they were freely viewing the scenes. Once again, however, the magnitude of the effect of image feature distributions upon fixation distributions was small. 
By dividing the images up according to the distribution of image features, it is clear that the bias in the image feature distributions had very little impact upon fixation behavior. A strong central tendency in fixation behavior was seen even when image features were biased toward the periphery of images. This anti-correlation between image features and fixations is compelling evidence against the suggestion that central biases in the distribution of image features in scenes underlie the central fixation bias. There are two important implications of the lack of strong correlation between the distribution of image features and the distribution of fixations made by human observers. 
First, it is clear that image features play a relatively minor role in determining the overall distribution of fixation locations. Clearly, the majority of the observed oculomotor behavior resulted from factors other than the location of low-level image features in the scenes. As such, there was no evidence to support the notion that human fixation behavior was particularly closely correlated with image features in the scenes. The inadequacy of purely low-level accounts of fixation selection in explaining human eye movement behavior is becoming increasingly evident in the literature. The recent development of a framework in which feature selection is modulated by contextual priors generated from scene gist information (Torralba et al., 2006) demonstrated that a purely feature-based model was poor at accounting for fixation selection. However, a model combining feature selection and contextual information provided a much better account of where observers fixated. Even the popular weighted salience account of selection, in which the top-down modulation can be manifest in terms of selectively weighting the feature maps (Itti & Koch, 2000; Peters et al., 2005), fails to account for certain aspects of human performance (Vincent, Troscianko, & Gilchrist, 2007): For example, it predicts that all feature and conjunction searches should be maximally efficient, yet human observers are not. Clearly, frameworks that are based solely on image features struggle to account for human fixation behavior. 
Second, any small influence of the distribution of features upon oculomotor behavior was task-dependent. When images were split according to the horizontal or vertical bias in the image features, there was some evidence of a shift of the distribution of fixations in the direction of the bias in the image features, but this was only the case when participants were searching for a luminance target in the scenes. When freely viewing images, any vertical or horizontal bias in the distribution of image features had no substantial influence upon fixation behavior. The task dependence of this influence further underlines the suggestion that there is not a fundamental causal link between the image features in the scenes and the selection of locations for fixation by the observers. Such task dependence of fixation selection is consistent with studies of vision under natural settings (e.g., Hayhoe et al., 2003; Land & Hayhoe, 2001), with previous studies of the correlation between features and fixation in free viewing and search tasks (Underwood & Foulsham, 2006; Underwood et al., 2006), and with recent modifications to feature-based models of fixation selection in order to include higher level factors (Torralba et al., 2006). 
Given that the data argue against the role of oculomotor biases or image features in giving rise to the observed fixation behavior, it would appear that the center of the scene must offer some strategic advantage to the observer that favors fixating this location. 
First, the center of the screen may be an optimal location for extracting information from scenes. Recent accounts have suggested that eye movements may target regions that are maximally informative to the viewer (Najemnik & Geisler, 2005; Raj, Geisler, Frazor, & Bovik, 2005) or that reduce the uncertainty of the object or scene being inspected (Renninger, Verghese, & Coughlan, 2007). It may therefore be that when viewing complex natural scenes, the center of the screen serves as a highly informative location or a location that reduces uncertainty more effectively than other locations in the scene. 
Second, it may be that the center of the screen offers no information processing benefit but is an optimal location from which to subsequently explore the scene. In this way, the tendency to look to the center of images may simply be an orienting response and may be independent of the image features in the scene, the observer's high-level task goals, or the informativeness of the central portion of the scene. The existence of such localizing or orienting responses has been suggested by Renninger and colleagues (2007): When viewing object silhouettes, the initial saccade was made reliably to the center of the objects, but subsequent fixations were distributed around the margins of the silhouetted objects. Renninger et al. suggested that while later fixations served to maximize global information or reduce local uncertainty, the initial saccade to the center of the object could not be explained in this way. Instead Renninger and colleagues argued that these initial saccades were a localizing response from which subsequent exploration of the object ensued. 
Third, it may be that the central bias is not a bias toward the center of the scene per se, but rather a bias toward centering the eyeball within its orbit. A bias toward centering the eyeball in its orbit has been demonstrated in previous research (Fuller, 1996; Zambarbieri, Beltrami, & Versino, 1995). This bias is reflected in shorter latencies for saccades that bring the eyeball toward the center of the orbit than for saccades that take the eyeball from one eccentric location to another. Paré and Munoz (2001) argued that this centering bias offers an optimal visual strategy for the observer: This is the optimal orbital position from which to make eye movements to explore the visual surroundings. Since observers in the present study were seated facing the center of the monitor on which scenes were displayed, and head stability was not recorded, it is not possible to dissociate any bias toward the center of the screen from any re-centering bias that favors bringing the eyeball back to the center of its orbit. However, in a previous study of the central fixation bias in reading, it was found that it was the screen center rather than the straight-ahead position (hence the orbital center) that produced the observed central fixation tendencies (Vitu, Kapoula, Lancelin, & Lavigne, 2004): When the screen was displaced from the straight-ahead position of the observer, fixations remained biased toward the center of the screen, not the center of the orbit. 
While it is not easy to discriminate between the above possible interpretations of the observed central bias in fixation behavior, one possible way to distinguish these accounts is to consider whether the central fixation bias changes over the course of viewing the scene for several seconds. If the bias arises from an initial orienting to select an optimal location from which to subsequently explore the scenes, the central bias should only be seen in the first (or possibly first few) fixations on the scene; subsequent fixations should not require this centering in the scene. A similar argument can be made if the scene center offers maximal information or uncertainty reduction: The initial benefit for fixating this location ought to promote a strong central bias at the start of viewing but not later on in viewing. However, given such an information theoretic account of the optimality of the screen center, it could be argued that the central bias ought to be less heavily restricted to the first saccade than if the tendency to look to the middle of the screen is a simple orienting response: It may be that the center serves to provide added information later in viewing as well as at the start of viewing. Finally, if the central fixation bias arises from the tendency that observers display to re-center the eyeball in the socket, this central fixation bias should be seen throughout viewing: There should be frequent re-centering movements after exploring peripheral locations in the scene. 
Figure 7 shows the distributions of fixations as a function of the ordinal fixation number in each sequence when freely viewing or searching the natural images, for scenes with either a central or peripheral bias in their image feature distributions. 
Figure 7
 
Kernel density estimates of fixation distributions for each ordinal fixation in the sequence of viewing the scenes. Distributions are shown for the first 6 fixations and then for fixations 9 and 12 as representatives of later fixations. Fixation distributions are plotted for free viewing (left) and searching (right) the scenes and are plotted separately for fixations on scenes with either central or peripheral biases in the image feature distributions.
In the free viewing data ( Figure 7 left), the central fixation bias was present throughout the first 12 fixations but diminished in magnitude as viewing progressed. Whether the images being viewed had central or peripheral biases in the distribution of image features seemed to make little difference to the distribution of locations for each ordinal fixation in the sequence for the free viewing condition. A very different pattern of results was found when observers were searching for a luminance target ( Figure 7 right). The observers showed a strong initial centering response, moving toward the center of the image on their first fixation. Thereafter, differences began to emerge in the fixation distributions that were contingent on the image feature distributions. The kernel density estimates of the fixation distributions suggest that when image features were biased toward the center of the scene, fixations remained mainly clustered around the center of the image. Conversely, when image feature distributions were biased to the periphery of the scenes, fixations tended to become more prevalent in the peripheral locations in the images. 
Once again, alternative divisions of the images on the basis of the distribution of features were explored. Figure 8 shows the distributions of fixations when viewing scenes with image features biased either to the left or the right of center. Figure 9 shows the distributions of fixations when viewing scenes with image features biased either toward the top or the bottom of the scene. In all cases, there was little correlation between the distributions of features and fixations made by observers when freely viewing the scenes, although there was some suggestion of a downward shift in the distribution for images with features biased toward the lower half of the scene for fixations 2 to 4. When searching the images for a luminance target, there was a clearer association between the distribution of locations for each fixation in the sequence and the distribution of image features. However, this association was most evident in the first few fixations after the initial orienting to the screen center (mainly in fixations 2 to 4). 
Figure 8
 
Kernel density estimates of fixation distributions for each ordinal fixation in the sequence of viewing the scenes. Distributions are shown for the first 6 fixations and then for fixations 9 and 12 as representatives of later fixations. Fixation distributions are plotted for free viewing (left) and searching (right) the scenes and are plotted separately for fixations on scenes with biases toward the left or right of scenes in the distribution of image features.
Figure 9
 
Kernel density estimates of fixation distributions for each ordinal fixation in the sequence of viewing the scenes. Distributions are shown for the first 6 fixations and then for fixations 9 and 12 as representatives of later fixations. Fixation distributions are plotted for free viewing (left) and searching (right) the scenes and are plotted separately for fixations on scenes with biases toward the top or bottom of scenes in the distribution of image features.
Taken together, the data suggest that both task and the distribution of image features interacted to exert an influence on fixation locations as viewing progressed. In all cases, there was a strong initial centering response: When the scene appeared, irrespective of the task or the distribution of image features, the initial response of the observer was to move their eyes to the middle of the scene. This task- and image feature-independent initial response implies an initial orienting response when faced with a new visual scene, as has been suggested to occur when isolated objects are presented (Renninger et al., 2007). Looking at the center of the screen may also be advantageous for rapidly extracting the gist of the scene at the start of viewing: The gist of a scene is extracted very rapidly from images (e.g., Biederman, 1981; Intraub, 1980, 1981). Within Torralba et al.'s (2006) contextual guidance model of fixation selection, the contextual priors are constructed by extracting global scene features within the first few hundred milliseconds of viewing. It may be that the optimal location for extracting this global information is the screen center and as such this initial orienting response serves the construction of contextual priors to aid subsequent oculomotor exploration of the scene. Alternatively, the center of the screen may simply be a good place to begin further exploration of the scene. 
After initially orienting to the center of the screen, the distribution of subsequent fixations depended upon the task of the observer. When freely viewing scenes, image feature distributions had no influence on fixation distributions and a central tendency persisted throughout viewing. As such, there was no strong evidence for a causal link between image feature distributions and fixation distributions. The persistence of a central tendency, albeit at a lower magnitude than the initial orienting response, suggests either that the screen center maintained a privileged place in viewing or that the eye was being continually re-centered in its orbit in the absence of a task to override this re-centering tendency (e.g., Paré & Munoz, 2001). From the present data, these two possibilities cannot be dissociated. 
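One simple way to examine how long the central tendency persists (or how quickly it dissipates) is to track how far each ordinal fixation lands from the screen center, separately for the free viewing and search tasks. The sketch below assumes hypothetical arrays of fixation coordinates and ordinal indices; it illustrates the measure only and does not reproduce the kernel density analyses reported here.

```python
# Hedged sketch: mean Euclidean distance of each ordinal fixation from the
# screen center. A profile that stays low across fixations indicates a
# persistent central tendency; one that rises quickly indicates dissipation.
# The screen size is an assumption.
import numpy as np

def central_bias_profile(fix_x, fix_y, ordinal, screen_size=(1024, 768), max_ordinal=12):
    cx, cy = screen_size[0] / 2.0, screen_size[1] / 2.0
    fix_x, fix_y, ordinal = map(np.asarray, (fix_x, fix_y, ordinal))
    dist = np.hypot(fix_x - cx, fix_y - cy)   # distance of every fixation from center
    return {n: float(dist[ordinal == n].mean())
            for n in range(1, max_ordinal + 1) if np.any(ordinal == n)}
```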
Image feature distributions did influence fixation distributions in the search task: From the second fixation, fixation distributions tended toward the distribution of image features. Thus, when the observers' task was to search for a target defined only in terms of luminance, the distribution of fixations showed a stronger correlation with the distribution of features. The association between distributions of features and fixations was clearest early in viewing, for fixations 2 to 4. A stronger association between features and fixations early in viewing than later in viewing has been reported previously (e.g., Carmi & Itti, 2006; Parkhurst et al., 2002); this is in contrast to Tatler et al.'s (2005) suggestion that the strength of association between features and fixations does not vary over the course of viewing a scene. The present data suggest that whether or not features are more strongly correlated with early fixations than with later fixations may depend upon the observers' task. However, as stated earlier, the causal factors behind this correlation cannot be determined: It may be that image features are more involved in fixation selection in the search task, or it may be that other factors, such as the difficulty of locating the target against cluttered regions of the scene, result in the observed correlation. 
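The shift of fixation distributions toward the feature distribution could be quantified in several ways. One hedged option, sketched below, is the spatial correlation between each ordinal fixation's density map (e.g., as produced by the earlier sketch) and a feature map evaluated on the same grid; larger correlations for fixations 2 to 4 in the search task would mirror the pattern described above. The shared grid and the use of Pearson correlation are assumptions for illustration, not the method used in this study.

```python
# Correlate each ordinal fixation's density map with an image-feature map of
# the same shape. Returns {ordinal: Pearson r}.
import numpy as np

def feature_fixation_correlation(kde_maps, feature_map):
    """kde_maps: {ordinal: 2-D density array}; feature_map: 2-D array, same shape."""
    f = feature_map.ravel()
    return {n: float(np.corrcoef(m.ravel(), f)[0, 1]) for n, m in kde_maps.items()}
```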
In the search task, the central fixation tendency that persisted throughout viewing in the free viewing condition rapidly dissipated: From the third fixation onward, there was little evidence for a central fixation tendency. This suggests that the screen center offered no particular benefit to the viewer in completing the search task, and also that there was no pronounced re-centering of the eye in its orbit during this task. 
General discussion
When observers view complex scenes presented on computer monitors, there is a strong tendency to look more frequently around the center of the scene than around the periphery. This central tendency could not be explained by motor biases in the oculomotor system that promote small amplitude saccades rather than large amplitude saccades. The common explanation that central fixation tendencies arise from central biases in image features in natural scenes was also unable to account for the present data: Central biases in fixation behavior were found independently of whether the image features were centrally biased in the scenes viewed. There was always an initial orienting response that brought the eye close to the center of the screen with the first saccade after the scene appeared. Thereafter, the association between image features and fixations was task-dependent: When freely viewing scenes, the distribution of fixations was no different for different distributions of image features; when searching the scenes for a luminance target, fixation distributions were shifted toward the distributions of image features. This correlation between features and fixations in the search task was most obvious early in viewing, once the initial centering response had occurred. 
Increasingly, it is becoming clear that purely featural, low-level accounts of eye movement behavior are limited in their ability to account for where humans fixate. While a number of modifications of the basic feature-based account have been suggested (e.g., Peters et al., 2005; Torralba et al., 2006), the data presented here suggest that a significant proportion of eye movement behavior when observers view images presented on monitors may arise from a tendency to look toward the center of the screen that is independent of image features and, to a large extent, of task. This centering response implies that the center of the screen serves as an optimal viewing position. It may reflect a simple response to center the eye in its orbit or a move to a convenient location from which to explore the scene efficiently. Alternatively, it may be that the screen center is optimal for the initial extraction of gist from the scene or for extracting global scene features for the construction of spatial contextual priors (Torralba et al., 2006). 
Given the prominence of the central fixation tendency when observers view scenes presented on monitors, it will be important to establish the origins and implications of this bias in future research. It may be that the screen center offers information-processing advantages, especially early in viewing. If so, then understanding the nature of these advantages will be an important first step in accounts of eye movement behavior. If, on the other hand, the central fixation bias is an artefact of the unnatural setting of viewing a monitor, and is not found under naturalistic viewing conditions, then it will not be informative with regard to the factors that underlie saccade target selection in natural vision. Whether an aspect of natural vision or not, it will be important to account for, or compensate for, this aspect of fixation behavior in any monitor-based studies that attempt to describe the factors underlying oculomotor target selection. 
Acknowledgments
The author wishes to thank Ben Vincent (who also suggested the Monte Carlo simulations), Roland Baddeley, and Mike Land for looking at previous versions of this manuscript. I also thank Geoff Underwood and James Brockmole for their helpful comments and suggestions. This research was supported by the EPSRC-sponsored REVERB (Reverse engineering the vertebrate brain) project. 
Commercial relationships: none. 
Corresponding author: Benjamin W. Tatler. 
Email: B.W.Tatler@dundee.ac.uk. 
Address: School of Psychology, University of Dundee, Dundee, DD1 4HN, UK. 
References
Baddeley, R. J., & Tatler, B. W. (2006). High frequency edges (but not contrast) predict where we fixate: A Bayesian system identification analysis. Vision Research, 46, 2824–2833.
Bahill, A. T., Adler, D., & Stark, L. (1975). Most naturally occurring human saccades have magnitudes of 15 degrees or less. Investigative Ophthalmology & Visual Science, 14, 468–469.
Biederman, I. (1981). On the semantics of a glance at a scene. In M. Kubovy & J. R. Pomerantz (Eds.), Perceptual organization (pp. 213–253). Hillsdale, NJ: Lawrence Erlbaum Associates.
Bishop, C. M. (1996). Neural networks for pattern recognition. New York: Oxford University Press.
Buswell, G. T. (1935). How people look at pictures: A study of the psychology of perception in art. Chicago: University of Chicago Press.
Carmi, R., & Itti, L. (2006). Visual causes versus correlates of attentional selection in dynamic scenes. Vision Research, 46, 4333–4345.
Crundall, D. E., & Underwood, G. (1998). Effects of experience and processing demands on visual information acquisition in drivers. Ergonomics, 41, 448–458.
Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap. New York: Chapman and Hall.
Einhäuser, W., & König, P. (2003). Does luminance-contrast contribute to a saliency map for overt visual attention? European Journal of Neuroscience, 17, 1089–1097.
Fuller, J. H. (1996). Eye position and target amplitude effects on human visual saccadic latencies. Experimental Brain Research, 109, 457–466.
Gajewski, D. A., Pearson, A. M., Mack, M. L., Bartlett, F. N., & Henderson, J. M. (2005). Human gaze control in real world search. In L. Paletta, J. K. Tsotsos, E. Rome, & G. Humphreys (Eds.), Attention and performance in computational vision (Vol. 3368, pp. 83–99). New York: Springer-Verlag.
Hayhoe, M. M., Shrivastava, A., Mruczek, R., & Pelz, J. B. (2003). Visual memory and motor planning in a natural task. Journal of Vision, 3(1):6, 49–63, http://journalofvision.org/3/1/6/, doi:10.1167/3.1.6.
Henderson, J. M. (2003). Human gaze control in real-world scene perception. Trends in Cognitive Sciences, 7, 498–504.
Intraub, H. (1980). Presentation rate and the representation of briefly glimpsed pictures in memory. Journal of Experimental Psychology: Human Learning and Memory, 6, 1–12.
Intraub, H. (1981). Identification and processing of briefly glimpsed visual scenes. In D. F. Fisher, R. A. Monty, & J. W. Senders (Eds.), Eye movements: Cognition and visual perception (pp. 181–190). Hillsdale, NJ: Lawrence Erlbaum Associates.
Itti, L., & Koch, C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489–1506.
Kadir, T., & Brady, M. (2001). Saliency, scale and image description. International Journal of Computer Vision, 45, 83–105.
Land, M. F., & Hayhoe, M. M. (2001). In what ways do eye movements contribute to everyday activities? Vision Research, 41, 3559–3565.
Mannan, S., Ruddock, K. H., & Wooding, D. S. (1995). Automatic control of saccadic eye movements made in visual inspection of briefly presented 2-D images. Spatial Vision, 9, 363–386.
Mannan, S. K., Ruddock, K. H., & Wooding, D. S. (1996). The relationship between the locations of spatial features and those of fixations made during visual examination of briefly presented images. Spatial Vision, 10, 165–188.
Mannan, S. K., Ruddock, K. H., & Wooding, D. S. (1997). Fixation sequences made during visual examination of briefly presented 2D images. Spatial Vision, 11, 157–178.
Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387–391.
Nelson, J. D., Cottrell, G. W., Movellan, J. R., & Sereno, M. I. (2004). Yarbus lives: A foveated exploration of saccadic eye movement [Abstract]. Journal of Vision, 4(8):741.
Paré, M., & Munoz, D. P. (2001). Expression of a re-centering bias in saccade regulation by superior colliculus neurons. Experimental Brain Research, 137, 354–368.
Parkhurst, D., Law, K., & Niebur, E. (2002). Modeling the role of salience in the allocation of overt visual attention. Vision Research, 42, 107–123.
Parkhurst, D. J., & Niebur, E. (2003). Scene content selected by active vision. Spatial Vision, 16, 125–154.
Parkhurst, D. J., & Niebur, E. (2004). Texture contrast attracts overt visual attention in natural scenes. European Journal of Neuroscience, 19, 783–789.
Pelz, J. B., & Canosa, R. (2001). Oculomotor behavior and perceptual strategies in complex tasks. Vision Research, 41, 3587–3596.
Peters, R. J., Iyer, A., Itti, L., & Koch, C. (2005). Components of bottom-up gaze allocation in natural images. Vision Research, 45, 2397–2416.
Raj, R., Geisler, W. S., Frazor, R. A., & Bovik, A. C. (2005). Contrast statistics for foveated visual systems: Fixation selection by minimizing contrast entropy. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 22, 2039–2049.
Reinagel, P., & Zador, A. M. (1999). Natural scene statistics at the centre of gaze. Network, 10, 341–350.
Renninger, L. W., Coughlan, J., & Verghese, P. (2005). An information maximization model of eye movements. In L. K. Saul, Y. Weiss, & L. Bottou (Eds.), Advances in neural information processing systems. Cambridge, MA: MIT Press.
Renninger, L. W., Verghese, P., & Coughlan, J. (2007). Where to look next? Eye movements reduce local uncertainty. Journal of Vision, 7(3):6, 1–17, http://journalofvision.org/7/3/6/, doi:10.1167/7.3.6.
Tatler, B. W., Baddeley, R. J., & Gilchrist, I. D. (2005). Visual correlates of fixation selection: Effects of scale and time. Vision Research, 45, 643–659.
Tatler, B. W., Baddeley, R. J., & Vincent, B. T. (2006). The long and the short of it: Spatial statistics at fixation vary with saccade amplitude and task. Vision Research, 46, 1857–1862.
Torralba, A., Oliva, A., Castelhano, M. S., & Henderson, J. M. (2006). Contextual guidance of eye movements and attention in real-world scenes: The role of global features in object search. Psychological Review, 113, 766–786.
Underwood, G., & Foulsham, T. (2006). Visual saliency and semantic incongruency influence eye movements when inspecting pictures. Quarterly Journal of Experimental Psychology, 59, 1931–1949.
Underwood, G., Foulsham, T., van Loon, E., Humphreys, L., & Bloyce, J. (2006). Eye movements during scene inspection: A test of the saliency map hypothesis. European Journal of Cognitive Psychology, 18, 321–342.
Vincent, B. T., Troscianko, T., & Gilchrist, I. D. (2007). Vision Research, 47, 1809–1820.
Vitu, F., Kapoula, Z., Lancelin, D., & Lavigne, F. (2004). Eye movements in reading isolated words: Evidence for strong biases towards the center of the screen. Vision Research, 44, 321–338.
Yarbus, A. L. (1967). Eye movements and vision. New York: Plenum Press.
Zambarbieri, D., Beltrami, G., & Versino, M. (1995). Saccade latency toward auditory targets depends on the relative position of the sound source with respect to the eyes. Vision Research, 35, 3305–3312.
Figure 1
 
An example of a scene with the Gaussian luminance target embedded. The luminance target is on the wall at the left of the scene, indicated by the arrow. For clarity, the inset in the lower left of the figure shows an enlargement of the region containing the luminance target.
Figure 2
 
Examples of images for each of the post hoc categorizations based on the distribution of image features in the scenes.
Figure 3
 
(A) Kernel density estimates of the observed fixation distributions in the free viewing and search tasks. (B) Kernel density estimates of model “fixation” locations generated from a random walk model with amplitude and direction characteristics matched to the human observers. The starting position of each simulated sequence was matched to the human observers. (C) Kernel density estimates of model fixations generated using a random walk simulation in which each simulated sequence started in the center of the screen. Simulated saccades shared the same amplitude and direction characteristics as the human observers. (D) Horizontal cross sections through the vertical midline of the distributions plotted in panels A–C. The central tendency in the human data (red line) is not present in the simulations with starting positions matched to the human observers (blue line) or in the simulations with central starting positions (green line). The shaded regions indicate 95% confidence limits for the kernel density estimates, generated from kernel density estimates of 200 bootstrapped samples (Efron & Tibshirani, 1993).
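As a rough guide to what the random-walk control in panels B and C involves, the sketch below resamples observed saccade amplitudes and directions with replacement and accumulates them into simulated fixation sequences, here starting each sequence at the screen center (as in panel C). It is a minimal Monte Carlo illustration under assumed variable names, units (pixels and radians), and a simple clipping rule at the screen edges, not the simulation code used for the figure.

```python
# Hedged sketch of a random-walk fixation simulation matched to observed
# saccade amplitude and direction distributions.
import numpy as np

def random_walk_fixations(amplitudes, directions, n_sequences=1000,
                          n_saccades=12, screen_size=(1024, 768), seed=0):
    """amplitudes: observed saccade amplitudes (pixels); directions: radians."""
    rng = np.random.default_rng(seed)
    w, h = screen_size
    fixations = []
    for _ in range(n_sequences):
        x, y = w / 2.0, h / 2.0                 # central starting position (panel C)
        for _ in range(n_saccades):
            amp = rng.choice(amplitudes)        # resample an observed amplitude
            ang = rng.choice(directions)        # resample an observed direction
            x = np.clip(x + amp * np.cos(ang), 0, w)   # keep simulated gaze on screen
            y = np.clip(y + amp * np.sin(ang), 0, h)
            fixations.append((x, y))
    return np.asarray(fixations)
```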
Figure 4
 
(A) Distributions of image features and fixations (in both free viewing and search conditions) for images in which there was a bias toward centrally distributed image features. The color bar for the image features plot shows the modulation in the image features across the distribution as a proportion difference from the mean in the distribution. Fixation distributions are kernel density estimates. (B) Distributions of image features and fixations (in both free viewing and search conditions) for images in which image features were biased toward the periphery of the scenes. (C) Horizontal and (D) vertical cross sections through the midlines of the distributions shown in panels A and B. The red lines show the distributions of features and fixations for scenes with centrally biased image feature distributions. The blue lines show the distributions of features and fixations for scenes with peripherally biased image feature distributions. The shaded regions around the solid lines in these plots indicate bootstrapped 95% confidence limits for the kernel density estimates of the fixation distributions (Efron & Tibshirani, 1993).
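The bootstrapped 95% confidence limits referred to in this and the following captions can be approximated as sketched below: resample the fixations with replacement 200 times, re-estimate the kernel density each time along the midline cross section, and take the 2.5th and 97.5th percentiles at each evaluation point. The evaluation grid and the default bandwidth are assumptions for illustration.

```python
# Hedged sketch: bootstrapped confidence limits for a kernel density cross section.
import numpy as np
from scipy.stats import gaussian_kde

def bootstrap_kde_ci(points, eval_grid, n_boot=200, seed=0):
    """points: (n, 2) fixation coordinates; eval_grid: (2, m) midline locations."""
    rng = np.random.default_rng(seed)
    boot = np.empty((n_boot, eval_grid.shape[1]))
    for b in range(n_boot):
        sample = points[rng.integers(0, len(points), len(points))]  # resample with replacement
        boot[b] = gaussian_kde(sample.T)(eval_grid)                 # re-estimate the density
    lower, upper = np.percentile(boot, [2.5, 97.5], axis=0)
    return lower, upper
```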
Figure 5
 
(A) Distributions of image features and fixations (in both free viewing and search conditions) for images in which there was an image feature bias toward the left. The color bar for the image features plot shows the modulation in the image features across the distribution as a proportion difference from the mean in the distribution. Fixation distributions are kernel density estimates. (B) Distributions of image features and fixations (in both free viewing and search conditions) for images in which image features were biased toward the right of the scenes. (C) Horizontal and (D) vertical cross sections through the distributions shown in panels A and B. For image features, the plots show the mean distribution of features in the horizontal (C) or vertical (D) direction. For fixations, the plots show horizontal (C) and vertical (D) cross sections through the midlines of the distributions shown in panels A and B. The red lines show the distributions of features and fixations for scenes with image feature distributions biased toward the left. The blue lines show the distributions of features and fixations for scenes with image feature distributions biased toward the right. The shaded regions around the solid lines in these plots indicate bootstrapped 95% confidence limits for the kernel density estimates of the fixation distributions (Efron & Tibshirani, 1993).
Figure 6
 
(A) Distributions of image features and fixations (in both free viewing and search conditions) for images in which there was an image feature bias toward the top of the scene. The color bar for the image features plot shows the modulation in the image features across the distribution as a proportion difference from the mean in the distribution. Fixation distributions are kernel density estimates. (B) Distributions of image features and fixations (in both free viewing and search conditions) for images in which image features were biased toward the bottom of the scenes. (C) Horizontal and (D) vertical cross sections through the distributions shown in panels A and B. For image features, the plots show the mean distribution of features in the horizontal (C) or vertical (D) direction. For fixations, the plots show horizontal (C) and vertical (D) cross sections through the midlines of the distributions shown in panels A and B. The red lines show the distributions of features and fixations for scenes with image feature distributions biased toward the top. The blue lines show the distributions of features and fixations for scenes with image feature distributions biased toward the bottom. The shaded regions around the solid lines in these plots indicate bootstrapped 95% confidence limits for the kernel density estimates of the fixation distributions (Efron & Tibshirani, 1993).
Figure 7
 
Kernel density estimates of fixation distributions for each ordinal fixation in the sequence of viewing the scenes. Distributions are shown for the first 6 fixations and then for fixations 9 and 12 as representatives of later fixations. Fixation distributions are plotted for free viewing (left) and searching (right) the scenes and are plotted separately for fixations on scenes with either central or peripheral biases in the image feature distributions.