Review | September 2011

Eye movements and perception: A selective review

Alexander C. Schütz, Doris I. Braun, Karl R. Gegenfurtner

Journal of Vision 2011;11(5):9. https://doi.org/10.1167/11.5.9
Abstract

Eye movements are an integral and essential part of our human foveated vision system. Here, we review recent work on voluntary eye movements, with an emphasis on the last decade. More selectively, we address two of the most important questions about saccadic and smooth pursuit eye movements in natural vision. First, why do we saccade to where we do? We argue that, as for many other aspects of vision, several different circuits related to salience, object recognition, actions, and value ultimately interact to determine gaze behavior. Second, how are pursuit eye movements and perceptual experience of visual motion related? We show that motion perception and pursuit have a lot in common, but they also have separate noise sources that can lead to dissociations between them. We emphasize the point that pursuit actively modulates visual perception and that it can provide valuable information for motion perception.

Introduction
Eye movement research has seen massive advances during the last 50 years. By now, the major neural pathways controlling different types of eye movements are well established, and the technology for tracking gaze position has advanced considerably and, most importantly, has become widely available. Eye movement studies have gained widespread attention in disciplines ranging from biology and medicine to computer science and economics. Nonetheless, the most pertinent questions that relate to understanding gaze direction remain unchanged. Why do we look where we do when viewing scenes? How are eye movements and perception related? These questions were already raised in the now classical work of Buswell (1935) and Yarbus (1967). The fact that scientists are still asking the same questions (e.g., Tatler, 2009) shows that no satisfactory consensus has yet been reached. In our review, we will focus on these two questions, and we hope to be able to deliver at least partial answers.
Scientific research on eye movements began at the end of the 19th century when reliable methods for the measurement of eye position were first developed (Buswell, 1935; Huey, 1898; Orschansky, 1899; for a detailed historical overview, see Wade & Tatler, 2005; Yarbus, 1967). While some of these devices had a remarkable measurement precision, they were generally custom built and not widely available. The development of the scleral search coil technique by David Robinson (1963) was a hallmark invention to measure eye position precisely and is still used in nearly all explorations into the physiology of eye movements. Search coils were later successfully adopted for use with human observers (Collewijn, van der Mark, & Jansen, 1975). At the same time, the development of the dual Purkinje image eye tracker by SRI International (Cornsweet & Crane, 1973; Crane, 1994) allowed non-invasive, high-precision and low-noise measurements in humans. These devices have been highly successful and are still in use. Over the last 20 years, big improvements were made in video-based eye tracking and its wide availability has certainly led to a strong increase in the number of investigations on eye movements. 
In line with these technological advances, insights were gained into the anatomical and physiological basis of the primate eye movement system. On the one hand, recordings from single neurons in the monkey brain led to precise measurements of the properties of neurons in most areas related to eye movement control (Bruce & Goldberg, 1985; Mays & Sparks, 1980; Robinson, 1972; Robinson & Fuchs, 1969; Wurtz & Goldberg, 1972). On the other hand, eye movements were highly relevant to human neurology (Leigh & Kennard, 2004; Leigh & Zee, 1999; Munoz & Everling, 2004), and knowledge from these two main sources provided us with a detailed picture of the neural pathways controlling different types of eye movements. For example, the whole circuit for pursuit eye movements from the retina, via visual cortex, frontal eye fields, cerebellum down to the oculomotor plant, has been characterized in great detail (Lisberger, 2010). Several recent excellent neurophysiological reviews exist on these topics (Ilg & Thier, 2008; Krauzlis, 2004, 2005; Thier & Ilg, 2005), so we will not go into detail here but rather concentrate on behavioral data. 
It should be noted that some of the most often cited eye movement papers had little to do with visual processing. The discovery of rapid eye movements during certain periods of sleep, thus named REM sleep, revolutionized sleep research because it established an objective criterion for distinguishing between different periods of sleep for the first time (Dement & Kleitman, 1957). Similarly, the observation that smooth pursuit eye movements are impaired in schizophrenic patients has led to promising efforts to characterize specific oculomotor deficits as endophenotypes—vulnerability markers—of psychiatric disorders (Gottesman & Gould, 2003). Interestingly, it was even discovered that the mere execution of smooth tracking movements while remembering traumatic life events could alleviate symptoms of post-traumatic stress disorders (Shapiro, 1989). While the neural bases of all these correlations are far from being understood, they seem to suggest that eye movements do not just control our window into the world but might also serve as a window into our minds.
In this review, we want to look at two specific questions that have concerned scientists studying the relationship between eye movements and visual processing. For every scientist who has ever recorded the scanning eye movements of a person viewing a scene, the immediate question seems to be: “Why do we look where we do?” We will present recent work and suggest a layered framework for the control of saccadic target selection that consists of separate control circuits for salience, object recognition, value, and plans. The second specific question we want to address concerns the relationship between eye movements and perception and, in particular, between smooth pursuit eye movements and perception. Recent work on the relationship between perception and action in general (Goodale & Milner, 1992; Milner & Goodale, 2006) has led to a number of studies comparing the signals used for motion perception to those controlling pursuit eye movements. At the same time, our perception of the world is severely altered during the execution of eye movements. Here, a more complicated picture seems to emerge. To a large degree, pursuit and motion perception behave quite similarly, suggesting identical neural circuits. Only when one looks quite closely do dissociations and different sources of noise become apparent, suggesting that the decoding of motion information can be task-dependent.
Of course, there are numerous other highly interesting questions to be asked. For example, scientists have wondered for decades about the role of small fixational eye movements for vision (Ditchburn & Ginsborg, 1952; Kowler & Steinman, 1979c; Krauskopf, Cornsweet, & Riggs, 1960), and several recent papers have led to a renewed interest in this field and to exciting debates (Collewijn & Kowler, 2008; Engbert & Kliegl, 2003; Martinez-Conde, Macknik, & Hubel, 2004). For these and other questions, we refer the reader to several excellent books on eye movements in general (Carpenter, 1988; Findlay & Gilchrist, 2003; Land & Tatler, 2009; Leigh & Zee, 1999) and a flurry of recent review articles (Henderson, 2003; Klein & Ettinger, 2008; Kowler, 2011; Krauzlis, 2004, 2005; Land, 2006; Lisberger, 2010; Orban de Xivry & Lefevre, 2007; Rolfs, 2009; Sommer & Wurtz, 2008; Thier & Ilg, 2005; Trommershäuser, Glimcher, & Gegenfurtner, 2009; Van der Stigchel, 2010; Wurtz, 2008). 
Why do we look where we do?
Ever since scientists have been able to measure eye movements, the main question they have been concerned with is why we fixate at certain places and not at others. Of course, different paradigms have been used to approach this question, and different influencing factors have been identified. However, to date, nobody has succeeded in predicting the sequence of fixations of a human observer looking at an arbitrary scene.
Here, we propose that several interacting control loops drive eye movements (Figure 1), which is analogous to a scheme that has been suggested by Fuster (2004) for more general action–perception loops. More specifically, we look at the contributions of salience, object recognition, value, and plans to saccadic target selection. These factors act on different levels of processing: salience, for instance, is a typical bottom-up process, while plans are typical top-down processes. In the following sections, we review how these factors contribute to eye movement guidance and how they interact with each other, for instance, how salience can be overridden by top-down mechanisms like plans. 
Figure 1
 
Framework for the control of saccadic eye movements. There are several interacting layers of control that influence saccadic target selection. Figure modified after Fuster (2004).
Salience
One widely cited model concerning the main determinants of where we look posits that salient parts of the scene first attract our attention and then our gaze (Itti, Koch, & Niebur, 1998). There are a number of reasons for the great prominence of the saliency map model. It is formulated as a computational model (Niebur & Koch, 1996), it has been implemented to allow easy predictions (Itti, Koch et al., 1998; Peters, Iyer, Itti, & Koch, 2005; Walther & Koch, 2006), and it agrees very well with what we know about the early visual system (Itti, Braun, Lee, & Koch, 1998). The saliency map model is based on the vast literature on visual search, where individual feature maps are searched for a target in parallel (Treisman & Gelade, 1980). Koch and Ullman (1985) proposed that these feature maps are combined into a salience map that is followed by a winner-take-all network used to guide visual attention. This basic conceptual framework was later spelled out in more detail (Itti & Koch, 2000) and tested numerous times using stimuli of different complexity. Overall, the saliency map model is capable of predicting fixation locations better than chance, but we argue here that exactly how well it performs depends on many factors. In most cases, when passively viewing static natural images, it performs just barely better than chance (Betz, Kietzmann, Wilming, & König, 2010; Tatler & Vincent, 2009).
In the most prominent implementation of a salience model (Itti & Koch, 2000, 2001), the input image is first linearly filtered at eight spatial scales, and center–surround differences are computed, separately for three features: intensity, color, and orientation. This resembles transformations carried out by neurons in the early stages of visual processing. After normalization, a conspicuity map is created for each feature; these maps are finally merged into a single saliency map. A winner-take-all network detects the most salient point in the image.
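This pipeline is simple enough to sketch in code. Below is a minimal, illustrative Python version (using only NumPy and SciPy); it is not the authors' implementation and collapses the model to a single feature (intensity), three scale pairs instead of eight scales, and a crude normalization, but it preserves the overall structure: center–surround differencing, map summation, and a winner-take-all readout.

```python
# Minimal sketch of a saliency-map pipeline in the spirit of Itti & Koch (2000).
# Illustrative only: one feature (intensity), three scale pairs, and a
# simplified normalization scheme.
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(img, c_sigma, s_sigma):
    """Center-surround difference: fine-scale blur minus coarse-scale blur."""
    return np.abs(gaussian_filter(img, c_sigma) - gaussian_filter(img, s_sigma))

def saliency_map(img):
    """Sum center-surround maps over several scale pairs, then normalize."""
    maps = [center_surround(img, c, 4 * c) for c in (1, 2, 4)]
    s = sum(m / (m.max() + 1e-9) for m in maps)   # crude per-map normalization
    return s / s.max()

def winner_take_all(s):
    """Return the (row, col) of the most salient location."""
    return np.unravel_index(np.argmax(s), s.shape)

rng = np.random.default_rng(0)
image = rng.random((128, 128))
image[60:68, 60:68] += 2.0                        # one high-contrast patch
print("predicted first fixation:", winner_take_all(saliency_map(image)))
```

In the full model, an inhibition-of-return mechanism then suppresses the winning location so that the next-most-salient point can be selected, producing a predicted scan path.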
One reason why the saliency map approach attracted so much attention was its close relationship to our knowledge of the early visual system. Nowadays, the idea of parallel and independent pathways for the processing of different visual attributes such as color, form, or motion is no longer as dominant as it was in the 1980s. However, this assumption is not crucial for the model. The main assumption of the computation of local feature contrast has found empirical support from V1 physiology (reviewed in Carandini et al., 2005) and computational support in models of V1 (Carandini & Heeger, 1994; Carandini, Heeger, & Movshon, 1997). The putative anatomical substrate of the saliency map—assumed to be the LGN by Koch and Ullman (1985)—has been attributed to a number of locations in the visual hierarchy. Suggested areas include V1 (Li, 2002), V4 (Mazer & Gallant, 2003), LIP (Kusunoki, Gottlieb, & Goldberg, 2000), and FEF (Thompson & Bichot, 2005). Maps in some of these areas, typically higher up in the cortical hierarchy, are often called priority maps because they integrate bottom-up visual salience and top-down signals (Ipata, Gee, Bisley, & Goldberg, 2009). Most likely, each of the branches in the framework shown in Figure 1 has its own map, and possibly all available information is integrated into a common priority map. In such a framework, the priority map would be closely linked with areas that underlie the control of saccadic eye movements and, therefore, most likely situated in frontal brain areas such as the FEF (Schall & Thompson, 1999) or in parietal areas such as LIP (Goldberg, Bisley, Powell, & Gottlieb, 2006).
A number of recent studies on saliency maps have addressed the questions of what features should be part of the map (Baddeley & Tatler, 2006; Einhäuser & König, 2003; Frey, Honey, & König, 2008; Frey, König, & Einhäuser, 2007; Jansen, Onat, & König, 2009; Onat, Libertus, & König, 2007) and how these features should be combined (Engmann et al., 2009; Koene & Zhaoping, 2007; Nothdurft, 2000; Onat et al., 2007; Peters et al., 2005; Zhao & Koch, 2011). What all these studies have in common is a relatively low overall level of predictive power. A recent summary (Betz et al., 2010) gives values between 57% and 68% correct fixation prediction. These absolute values depend a lot on image complexity and, therefore, should be interpreted with caution. It is also important to note that the prediction of fixation locations does not imply a true causal influence. If fixation locations can be predicted by salience, it might be that salience is the actual cause, driving the eye movements. However, it also might be that salience merely covaries with another factor, which is actually controlling gaze. 
A more general approach was taken by Kienzle, Franz, Schölkopf, and Wichmann (2009). They collected a large number of fixations on a series of calibrated natural images. Then, they used machine learning techniques (i.e., support vector machines) to differentiate between fixated and non-fixated patches (Figure 2). The advantage of this approach is that no a priori assumptions need to be made about the particular features that contribute to salience or how these features are combined into a single salience map. This method produced a simple solution with two center–surround operators, which to a first approximation match analogous components of most salience models. On the positive side, this simple feed-forward model lacking orientation selectivity predicts fixations as well as the more complex Itti and Koch (2000) model does on the same images (64% vs. 62%). On the negative side, overall predictive performance remains low, which indicates a real upper limit for salience-based approaches.
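The discriminative idea can be illustrated with a toy example. The sketch below (ours, not the authors' code) assumes scikit-learn and uses synthetic data in which "fixated" patches simply have higher contrast; Kienzle et al. trained nonlinear support vector machines on real image patches and then reduced the learned decision function to the two center–surround operators described above.

```python
# Toy version of the discriminative approach of Kienzle et al. (2009):
# classify fixated vs. non-fixated image patches with a support vector machine.
# Synthetic stand-in data; assumes scikit-learn is available.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n, p = 500, 13 * 13                        # 500 patches of 13 x 13 pixels each

# "Fixated" patches get slightly higher RMS contrast than control patches.
fixated = rng.normal(0.0, 1.2, size=(n, p))
control = rng.normal(0.0, 1.0, size=(n, p))
X = np.vstack([fixated, control])
y = np.array([1] * n + [0] * n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print(f"patch classification accuracy: {clf.score(X_te, y_te):.2f}")
```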
Figure 2
 
Difference between fixated and non-fixated image patches. (a) Dots represent fixation locations from eye movements of 14 observers. The patches on the right display the areas around all fixated locations. (b) Dots represent fixation locations from another scene (inset). These fixation locations are used to obtain non-fixated image patches (right). The contrast of the fixated image patches seems higher than that of the non-fixated patches, but there are no obvious structural differences. This indicates that high contrast attracts eye movements. Figure reproduced from Kienzle et al. (2009).
There have been other suggestions that notably improve predictions of fixation locations. When viewing static images, observers are biased to fixate the center of the screen, which is partly caused by photographers' bias to place interesting objects at the center (Bindemann, 2010). Using these oculomotor biases as an ingredient, Tatler and Vincent (2009) improved the performance of a salience model from 56% to 80% by including the probability of saccade directions and amplitudes. Furthermore, a model based on oculomotor biases alone performs better than the standard salience model. Of course, these oculomotor features are no longer purely image-based—the motor system makes those image regions “salient.”
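One simple way to implement such a combination is to treat the oculomotor bias as a prior over landing positions and multiply it with the image-based salience map. The sketch below is our illustration, not the specific model of Tatler and Vincent (2009), who modeled full distributions of saccade directions and amplitudes; here the prior is reduced to a hypothetical Gaussian center bias.

```python
# Sketch: combine an image-based salience map with an oculomotor prior.
# The prior here is a hypothetical Gaussian center bias; Tatler and Vincent
# (2009) used measured saccade direction and amplitude distributions instead.
import numpy as np

def center_bias(shape, sigma_frac=0.25):
    """Gaussian prior peaked at the screen center."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    sy, sx = sigma_frac * h, sigma_frac * w
    return np.exp(-0.5 * (((y - h / 2) / sy) ** 2 + ((x - w / 2) / sx) ** 2))

def combined_priority(salience):
    """Multiply salience by the prior and renormalize to a probability map."""
    p = salience * center_bias(salience.shape)
    return p / p.sum()

rng = np.random.default_rng(2)
salience = rng.random((96, 128))
priority = combined_priority(salience)
peak = np.unravel_index(np.argmax(priority), priority.shape)
print("peak of combined map:", peak)       # pulled toward the screen center
```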
To summarize the salience approach with static images so far: there is overwhelming evidence for a role of stimulus salience in saccadic target selection, demonstrated successfully in a large number of studies. However, there is also good evidence that this role might be relatively small in terms of explained variance, at least for passive viewing of static images.
Of course, static images lack some of the most salient visual features, namely, visual motion and flicker. The salience approach has been extended to video sequences, but the results showed a large degree of variability. It seems that the choice of input is even more crucial for video sequences than for static images. Video sequences differ from static images in several ways. Motion of the observer leads to global changes in the retinal image, and motion of objects in the scene leads to more local retinal motion. Under natural viewing conditions, both of these types of motion occur and lead to complex changes in the retinal image. Furthermore, artificial video sequences often contain cuts that do not occur at all in natural vision.
In a recent study, 't Hart et al. (2009) directly compared the eye movements of actively moving observers to the eye movements of static observers viewing either a continuous video of the head-centered image sequences experienced by the moving observers or a sequence of static images taken from these videos. The moving observers actively explored different real-world outdoor and indoor environments (Schumann et al., 2008). Similar to studies with static images, they found a modest effect of low-level salience. Predictions based on salience were just slightly better than chance, at levels around 55%. While the consistency between observers was highest for the sequence of static images, mainly due to the center bias, the saliency prediction was best for the passive viewing of continuous movies. Thus, it seems that observer motion by itself is not the crucial factor when thinking about improving the performance of saliency models.
The motion of objects within a scene might be of greater importance. In a remarkable series of studies, Hasson et al. (Hasson, Landesman et al., 2008; Hasson, Nir, Levy, Fuhrmann, & Malach, 2004; Hasson, Yang, Vallines, Heeger, & Rubin, 2008) measured the eye positions and brain activity of a number of observers viewing Hollywood movies. They found surprisingly good agreement between observers for both eye movements and brain activation, indicating that salience might play a much bigger role when viewing movie sequences containing object motion. The question that arises, of course, is how typical these movies or the MTV-style movie clips used in other studies (Carmi & Itti, 2006; Tseng, Carmi, Cameron, Munoz, & Itti, 2009) are of the real world. Experiments by Dorr, Martinetz, Gegenfurtner, and Barth (2010) indicate that they might not be typical. Dorr et al. took movies of real-world scenes with a stationary camera. Scenes were selected to include at least some movement (http://www.inb.uni-luebeck.de/tools-demos/gaze). One major finding was that a high degree of interobserver agreement could be found in the natural movies only when isolated objects started to move (Figure 3, Movie 1). In the natural movies, this did not happen very often, in contrast to the more frequent movements in Hollywood movies. Another major difference between Hollywood and natural movies is the frequency of scene cuts. Whenever these cuts occur, observers tend to relocate their gaze to the center of the screen, and this oculomotor strategy leads to a large correlation of the eye movements across observers. These two factors might have contributed to the overall high agreement between observers in the studies by Hasson et al. (Hasson, Landesman et al., 2008; Hasson, Yang et al., 2008). Overall, it seems that motion discontinuities in space–time are a highly prominent feature in the salience map (Mota, Stuke, Aach, & Barth, 2005).
Figure 3
 
Scan path coherence for three different movies. Scan path coherence is a measure of agreement between scan paths of different observers, with high values representing high agreement. In the Ducks_boat movie (red), a duck is flying (from 5 to 10 s and from 11 to 13 s) in front of a natural scene. In the Roundabout movie (black), several small moving objects are distributed across the whole scene and coherence is low. Much higher coherence is found for the War of the Worlds movie (blue, dashed), a Hollywood movie trailer. The black horizontal line represents the average across all natural movies. In natural scenes, high agreement between scan paths occurs only when a single moving object appears. Figure reproduced from Dorr et al. (2010).
 
Movie 1
 
Ducks_boat movie from Figure 3. The red dots indicate the fixation locations of human observers; the green bar represents the scan path coherence. Scan path coherence increases when a duck is flying through the scene. The movie is based on data from Dorr et al. (2010).
In summary, salience by itself has a rather modest effect on guiding our gaze. We already remarked that oculomotor strategies, such as fixating in the center of the display, have a large effect on viewing behavior (Tatler & Vincent, 2009). In addition to these, there are several factors that provide high-level visual input or top-down control. 
Object recognition
The most remarkable aspect of saliency is that it works on individual features and has no knowledge about objects: their use, familiarity, or history. When looking around, the world is full of objects and we direct our gaze to objects in order to scrutinize, recognize, or use them. It would then be a natural assumption that saccadic target selection is driven by objects rather than features. Of course, local features and objects are often correlated, and features change at the borders of objects. So far, there are only a few studies directly investigating the question of whether objects can predict gaze better than features. Einhäuser, Spain, and Perona (2008) obtained a clear answer in favor of objects. Using an ROC analysis, objects predicted gaze with an accuracy of around 65%, while the predictive level of salience (features) was below 60%. Nuthmann and Henderson (2010) found that the preferred saccadic landing position was close to the center of objects, also supporting the role of object-based saccadic target selection. Similarly, Cerf, Frady, and Koch (2009) found that observers tended to fixate faces in scenes even when not specifically instructed to search for them. Extending salience map algorithms with a face processing module greatly improved gaze predictions for images containing faces, while not impairing performance for images without faces (Cerf et al., 2009). 
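The ROC analysis behind such percentages is straightforward: the map value at every fixated location and at a matched set of control locations is treated as a classifier score, and the area under the ROC curve (AUC) summarizes how well the map separates the two; 0.5 is chance and 1.0 is perfect. A minimal sketch, assuming scikit-learn for the AUC computation:

```python
# Sketch of the ROC analysis used to score fixation-prediction maps:
# map values at fixated locations vs. values at control locations.
# An AUC of 0.5 is chance performance; 1.0 would be perfect prediction.
import numpy as np
from sklearn.metrics import roc_auc_score

def fixation_auc(pred_map, fixated_xy, control_xy):
    """AUC for a prediction map, given fixated and control (x, y) samples."""
    scores = [pred_map[y, x] for x, y in fixated_xy + control_xy]
    labels = [1] * len(fixated_xy) + [0] * len(control_xy)
    return roc_auc_score(labels, scores)

rng = np.random.default_rng(3)
pred = rng.random((64, 64))                         # a (useless) random map
fix = [(int(rng.integers(64)), int(rng.integers(64))) for _ in range(200)]
ctrl = [(int(rng.integers(64)), int(rng.integers(64))) for _ in range(200)]
print(f"AUC: {fixation_auc(pred, fix, ctrl):.2f}")  # ~0.5, i.e., chance
```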
Faces and objects play an important role in saccade control, as shown in a number of studies on recognition in natural scenes. Starting with the groundbreaking experiment by Thorpe, Fize, and Marlot (1996), a series of studies has shown that human observers are capable of detecting animals or other objects in a scene very rapidly. One outstanding aspect of these studies is that the estimated time for cortical processing to make a decision about the presence of an animal in a scene was as low as 70 ms. Of equal importance is that human observers can execute a saccadic eye movement to the one of two images that contains an animal in about 200 ms. More recently, Crouzet, Kirchner, and Thorpe (2010) have shown that saccades to faces can be even faster, with an average latency of 147 ms in a 2AFC task. The fastest response times where performance was better than chance were as low as 110 ms, which leaves very little time for processing the retinal image at all.
Because of these extremely rapid responses, arguments have been made that these types of tasks are simplified in several ways. First, most of these experiments were performed using the commercially available COREL image database, whose images might not be very natural. In fact, images of animals and faces typically have their subject in sharp focus in the central foreground and a blurred background to emphasize the theme. Distractor images are often landscapes or city scenes where the whole image is in focus. Therefore, algorithms can classify these images based on simple features, in this case the amplitude spectrum (Torralba & Oliva, 2003; Wichmann, Drewes, Rosas, & Gegenfurtner, 2010), and humans could, in principle, use this information, too. In fact, recent work by Wichmann et al. (2010) has shown that human performance is better for images that are classified more easily based on the amplitude spectrum. However, they also found that human performance was still good once the amplitude spectrum was equalized across all images. In that case, a classification based on the spectrum would no longer work, of course. Furthermore, equalizing the spectral information led only to a tiny decrease in absolute performance, indicating that this type of information is not essential for human classification performance. Using a new image database of more realistic photographs, Drewes, Trommershäuser, and Gegenfurtner (2011) went on to show that rapid animal detection was still possible and that observers are able not only to saccade to the side of the image containing the animal but also to fixate the animal directly. In many cases, the saccades were directed to the animal's head rather than to the animal's center of gravity. They also showed that a simple salience-based algorithm could not account for the full performance. Unfortunately, these studies only show that there is no easy solution to this task, leaving us with the mystery of how our visual system can achieve high performance so quickly. Given the predictive power of objects for fixation locations and the rapidity of object recognition, objects are certainly an important factor contributing to saccadic target selection.
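The amplitude-spectrum cue itself is easy to compute: take the magnitude of the 2D Fourier transform of an image and average it within coarse spatial-frequency bands, yielding a low-dimensional feature vector that a classifier can exploit. The sketch below is our illustration of this kind of statistic, not the specific features of Torralba and Oliva (2003); it shows how a sharp (broadband) and a blurred (low-pass) image separate in this representation.

```python
# Sketch of an amplitude-spectrum statistic of the kind exploited for rapid
# image classification (cf. Torralba & Oliva, 2003): radially averaged
# Fourier magnitude. Blurred backgrounds change the high-frequency falloff.
import numpy as np
from scipy.ndimage import gaussian_filter

def radial_amplitude_profile(img, n_bins=8):
    """Average |FFT| within concentric spatial-frequency bands."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(0, r.max() + 1e-9, n_bins + 1)
    return np.array([amp[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])

rng = np.random.default_rng(4)
sharp = rng.random((128, 128))              # broadband, "all in focus"
blurry = gaussian_filter(sharp, 3.0)        # low-pass, "blurred background"
print("sharp :", np.round(radial_amplitude_profile(sharp), 1))
print("blurry:", np.round(radial_amplitude_profile(blurry), 1))
```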
Plans
In nearly all of the studies mentioned so far, observers were passively looking at a scene. However, humans carry out some sort of active task during most of their waking time. A very influential series of investigations has studied how the execution of an active task influences eye movement behavior (for reviews, see Hayhoe & Ballard, 2005; Land, 2006). Task influences on eye movements have been studied in basic everyday tasks like making tea (Land, Mennie, & Rusted, 1999) or peanut butter sandwiches (Figure 4; Hayhoe, 2000), in various sports activities like playing cricket (Land & McLeod, 2000) or catching a ball (Hayhoe, Mennie, Sullivan, & Gorgos, 2005), but also in laboratory tasks such as moving an object around an obstacle (Johansson, Westling, Backstrom, & Flanagan, 2001), copying an arrangement of blocks (Ballard, Hayhoe, & Pelz, 1995), tapping a 3D object (Epelboim, 1998; Epelboim et al., 1997; Herst, Epelboim, & Steinman, 2001), or simply grasping an object (Brouwer, Franz, & Gegenfurtner, 2009). There are also numerous studies on the coordination of eye, hand, and body movements during locomotion, which we will not consider here since there is an excellent detailed review of them (Land & Tatler, 2009). There is also a vast literature on eye movements during reading (Engbert, Nuthmann, Richter, & Kliegl, 2005; Legge, Klitz, & Tjan, 1997; Rayner, 1998). However, it is quite clear that eye movements during reading are mainly determined by the task at hand. Interestingly, even very simple tasks such as searching for a specific stimulus (Einhäuser, Rutishauser, & Koch, 2008) or counting people in an image (Henderson, Brockmole, Castelhano, & Mack, 2007) can suppress the influence of salience completely.
Figure 4
 
Scan path of a person who makes a peanut butter and jelly sandwich. The yellow circles represent fixation locations, with size proportional to duration. The red lines connect consecutive fixations. Task-relevant objects are fixated almost exclusively. Figure reproduced from Hayhoe and Ballard (2005).
The main message from these studies is that while we perform a specific task, salience-based mechanisms seem to be “off duty.” During everyday activities (Hayhoe, 2000; Land et al., 1999), subjects almost exclusively fixated task-relevant objects. When making tea, observers fixate the objects used for the task, such as the cup. Interestingly, subjects also fixated task-relevant but “empty” areas, such as the place on the table where they wanted to put the cup. It is obvious that such fixations on “nothing” could never be predicted by bottom-up salience. Fixations during these tasks are typically just one step ahead of a particular action. Information for the task is sampled “just in time” (Ballard et al., 1995), which avoids a reliance on visual memory and instead uses the world as a huge memory, with eye movements serving as the method of accessing it (Rensink, 2000, 2002). The experiments by Ballard et al. (Ballard, Hayhoe, Li, & Whitehead, 1992; Ballard et al., 1995), where observers had to copy an arrangement of blocks, came to the same conclusion. Rather than storing the block arrangement in visual memory, observers repeatedly shifted their gaze to the blocks they had to copy. Highly redundant fixations, related to limitations in working memory, were also found in a geometry task (Epelboim & Suppes, 2001). These findings are consistent with the idea that humans use the world as an external memory (O'Regan, 1992). They also call into question the inhibition-of-return mechanism that is a necessary part of salience models and prevents gaze from getting stuck at the most salient point.
Similar effects of action on eye movement control were also shown in simple laboratory experiments. Johansson et al. (2001) measured eye and hand movements while participants had to lift a bar and navigate it around an obstacle. They found that participants fixated the contact points between fingers and object before they actually grasped the object. Fixations fell on those locations that were critical for the task. Eye movements served to assist the grasping of the object, to navigate it around the obstacle, and finally to dock it at a switch. Similar results have been obtained in a navigation task in which objects either had to be picked up or avoided. Objects that had to be picked up were fixated at the center, whereas objects that had to be avoided were fixated at their borders (Rothkopf, Ballard, & Hayhoe, 2007).
A direct comparison of eye movements when passively viewing objects and when grasping the same objects revealed interesting differences (Brouwer et al., 2009). During passive viewing, fixation locations were clustered around the center of gravity of the object. During active grasping, fixation locations were biased toward the contact points of the thumb and index finger, with a preference for the index finger. The index finger has a more variable trajectory than the thumb during grasping movements and might simply need more visual feedback when approaching the target. Interestingly, even low-level oculomotor properties like the relationship between speed and amplitude of gaze shifts, the so-called “main sequence,” differ between passive viewing and an active task. Gaze shifts were faster in speed and shorter in duration when observers actively tapped a sequence of 3D targets than when they viewed the sequence passively (Epelboim, 1998; Epelboim et al., 1997). 
All of these studies clearly show that our eye movements are mainly controlled by task demands when we are pursuing a goal. This implies that eye movements are necessary and helpful to achieve these goals. The next logical question is whether we get better at some tasks if we somehow manage to make “better” eye movements. Everyday activities such as making sandwiches may not require us to strive for perfection or speed. However, in certain sports that demand action at high speeds, such as baseball, eye movements might make the difference between a home run and a strike. Bahill and LaRitz (1984) investigated eye movements of baseball hitters and found that professional baseball players were better than students at smoothly tracking a ball approaching the plate. Land and McLeod (2000) investigated eye movement strategies in cricket players and found that better players used their eye movements more effectively to predict future locations of the ball. These studies show that eye movement strategies can differ between expert and novice players, but they do not necessarily show that the eye movements themselves make the difference. A recent study by Spering, Schütz, Braun, and Gegenfurtner (2011) investigated a paradigm they called “eye soccer,” in which observers judged whether a small target (“the ball”) would intercept a larger target (“the goal”). Observers either followed the ball movement or fixated the ball while the goal moved, leading to roughly similar retinal movement patterns. Observers were better in this task when they actively pursued the ball, lending credence to the advice widely used in sports to “keep your eyes on the ball.”
Value
Value is of great importance for our behavior in general, but this concept has been neglected in the context of human eye movements until recently. The reason for this is most likely that eye movements are a very special type of motor behavior. When we move our hands or our bodies, we can actively change or manipulate our environment, with immediate consequences that can be considered positive or negative. For more than 100 years, learning theory has studied the effects of these consequences on behavior. In contrast, moving our eyes hardly affects our environment with the possible exception of some social interactions. There is seldom direct reward for making good eye movements or punishment for bad ones. At the same time, little metabolic energy is used by the eye muscles, leading to the long-held belief that eye movements are “for free.” This would mean that there is no cost for making too many eye movements. However, eye movements determine or change our retinal input so that we see some things better and others worse or not at all, which in turn can guide further actions. Hence, eye movements are certainly not “for free” in terms of their consequences for visual perception. 
Interestingly, recent research has shown that the consequences of eye movements are taken into account when selecting targets and planning movements to these targets. One line of research has investigated the indirect value of saccadic eye movements. Selecting a certain gaze position lets us see things better, and the information gained can be precisely quantified and compared to the information gained by an ideal target selector (Najemnik & Geisler, 2005). Another line of research has looked at direct effects, in situations where saccades to certain targets were directly rewarded (Sohn & Lee, 2006). Both lines of research indicate that the control of saccadic eye movements is closely linked to brain circuitry responsible for the evaluation of our actions.
In terms of indirect effects, it has long been thought that saccades select informative image regions. However, what is meant by “informative” has rarely been quantified. One argument against the idea of saccades extracting information from scenes was that saccades revisit the same locations over and over, so that the information content at these locations can hardly be considered high anymore. The solution to this apparent contradiction might lie in the low capacity of our visual memory. Repeated fixations at the same locations would still be consistent with the assumption that saccades are directed to informative regions, if memory capacity is highly limited. The real world serves as our memory, and eye movements are the only way we can read out this memory (Ballard et al., 1995; see Plans section above).
Experiments in which visual information uptake was precisely quantified include the work by Geisler et al. on visual search (Geisler, Perry, & Najemnik, 2006; Najemnik & Geisler, 2005, 2008). In their task, observers had to search for small Gabor targets in the midst of pink random noise. Najemnik and Geisler (2005) compared the statistics of saccades made by their human observers to those of an ideal Bayesian observer. The ideal Bayesian observer uses knowledge about the visibility map to guide the next saccade to the location that will maximize information gain. As human performance closely matched the ideal, it is likely that humans represent their own visibility map and access this map to guide saccades. A follow-up study showed that humans indeed select fixation locations that maximize information gain instead of locations that have the highest target probability (Najemnik & Geisler, 2008). Similarly, Renninger, Verghese, and Coughlan (2007) studied eye movements in a shape discrimination task and also found correlations between human and ideal eye movement behavior. While these studies have exciting implications, it has to be kept in mind that they did not demonstrate directly that humans follow the exact computations of the ideal observer. Rather, humans exhibit behavior that matches that of the ideal observer in some respects. Some studies (Araujo, Kowler, & Pavel, 2001) and preliminary reports propose that saccades might not be that optimal after all (Morvan, Zhang, & Maloney, 2010; Verghese, 2010).
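The core of such an ideal searcher can be sketched compactly: maintain a posterior over possible target locations, and score each candidate fixation by how much target evidence it is expected to gather given a visibility map that falls off with eccentricity. The sketch below is a deliberately crude proxy (visibility-weighted posterior mass) rather than Najemnik and Geisler's full computation of expected information gain, but it captures the key ingredient: the next fixation depends jointly on current beliefs and on the observer's own visibility map.

```python
# Crude sketch of ideal-searcher fixation selection in the spirit of
# Najemnik & Geisler (2005). The real model maximizes expected information
# gain; here we use a simpler proxy: visibility-weighted posterior mass.
import numpy as np

GRID = 25                                   # candidate locations per axis

def visibility(fix, locs, sigma=6.0):
    """Target detectability at each location when fixating `fix`;
    falls off with retinal eccentricity (a hypothetical Gaussian map)."""
    ecc = np.linalg.norm(locs - fix, axis=1)
    return np.exp(-0.5 * (ecc / sigma) ** 2)

def next_fixation(posterior, locs):
    """Choose the fixation maximizing visibility-weighted posterior mass."""
    gains = [np.sum(posterior * visibility(f, locs)) for f in locs]
    return locs[int(np.argmax(gains))]

locs = np.array([(x, y) for x in range(GRID) for y in range(GRID)], float)
posterior = np.ones(len(locs))              # flat prior over target location
posterior[312] = 50.0                       # strong evidence near the center
posterior /= posterior.sum()
print("next fixation (x, y):", next_fixation(posterior, locs))
```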
Despite the above-mentioned particularities of eye movements, studies of saccadic eye movements and reward in monkeys are part of the foundation for the discipline of neuroeconomics (Glimcher, 2003, 2010; Glimcher, Camerer, Poldrack, & Fehr, 2008). These experiments, in which a direct reward was linked to an eye movement, come from electrophysiology and mostly demonstrated a clear effect of reward. Platt and Glimcher (1999) found that the activity of single neurons in LIP was proportional to the reward magnitude and the probability of reward. Leon and Shadlen (1999) found analogous results in dorsolateral prefrontal cortex but not in the frontal eye fields (FEFs). Ikeda and Hikosaka (2003) found reward-dependent effects in the superior colliculus. Sugrue, Corrado, and Newsome (2004) showed that LIP neurons can code value in a simulated foraging task. Peck, Jangraw, Suzuki, Efem, and Gottlieb (2009) showed that cues signaling reward lead to sustained activity in LIP, while cues signaling the absence of reward lead to inhibition. All these areas are tightly connected to the basal ganglia, which have been characterized as a reward system in general (Schultz, 2000; Schultz, Dayan, & Montague, 1997; Schultz, Tremblay, & Hollerman, 2003) and also specifically as an integral part of the reward system in saccade tasks (Hikosaka, 2007; Hikosaka, Nakamura, & Nakahara, 2006; Hikosaka, Takikawa, & Kawagoe, 2000; Lau & Glimcher, 2007). These findings have led to the development of a “back-pocket model” of choice behavior that includes a topographic reward map as a central feature (Glimcher, 2009). 
At the level of human psychophysics, Milstein and Dorris (2007) found that saccade latencies of human observers were shorter for rewarded targets. However, it is unclear how much of the effect was due to attentional modulation (Adam & Manohar, 2007). Sohn and Lee (2006) also observed shorter latencies in sequential movements for the saccades closer to the rewarded target. Navalpakkam, Koch, Rangel, and Perona (2010) found interactions between rewards and salience in a visual search task. In this task, observers searched a display that always contained two targets differing in saliency and reward. Observers picked the target that maximized the expected reward, rather than simply the more salient or the more valuable target. As the results were similar when the observers indicated their choice by button presses instead of saccades, the selection seems to reflect a general decision process rather than a specific saccadic target selection process. Finally, Xu-Wilson, Zee, and Shadmehr (2009) found that even intrinsic value can affect saccades: saccades to neutral targets were faster when the subsequent presentation of a face was anticipated.
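The decision rule suggested by the Navalpakkam et al. result reduces to a one-line expected-value computation: weight each target's reward by the probability of actually acquiring it (which depends on its salience) and pick the maximum. A toy sketch with made-up numbers:

```python
# Toy expected-value target choice in the spirit of Navalpakkam et al. (2010):
# neither salience (detection probability) nor reward alone decides; their
# product does. All numbers are hypothetical.
detect_p = {"salient target": 0.9, "faint target": 0.5}   # acquisition probability
reward = {"salient target": 1.0, "faint target": 2.5}     # payoff if acquired
expected = {t: detect_p[t] * reward[t] for t in detect_p}
print(expected, "->", max(expected, key=expected.get))    # faint target wins here
```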
These results strongly suggest that value can play a major role when eye movement targets are selected. However, in the tasks used in these studies, saccades can be thought of as a symbolic response, indicating which one of two distinct alternatives is chosen. For other forms of motor behavior, e.g., pointing movements (Körding & Wolpert, 2006; Trommershäuser, Maloney, & Landy, 2003, 2008), reward has been shown to influence the fine tuning of motor actions. If there is a topographic value map, in addition to a saliency map and an attention map, then there must be mechanisms for combining these different maps. So far only the study by Stritzke, Trommershäuser, and Gegenfurtner (2009) has investigated this question. They did observe effects of reward, but reward affected only the selection of objects as saccade targets in their task, and not so much the fine tuning of saccadic landing positions within that object. Preliminary results by Schütz and Gegenfurtner (2010) indicate that such a fine tuning may exist if the object borders are made more uncertain by blurring them, effectively countering the potential contribution of object recognition to target selection. 
Overall, the picture emerges that numerous factors determine why we look where we do. We have exemplified the effects of salience, object recognition, plans, and value here, but there might be several more of these control loops. In the past, the contributions of these factors have been studied mostly in isolation. There is ample evidence that all of these factors influence our gaze, but none of them can explain gaze behavior completely. As illustrated in our framework (Figure 1), these different factors presumably contribute at the same time to the decision where to look next. In the next decade, studies using more naturalistic viewing conditions, in which several of these factors can be combined and manipulated, will lead to a deeper understanding of their relative importance (Ballard & Sprague, 2005, 2006; Sprague, Ballard, & Robinson, 2007).
Do motion perception and pursuit rely on the same signals?
For smooth pursuit eye movements, the answer to the question “why do we look where we do?” is much easier because these continuous eye rotations require a visual motion stimulus or the percept of motion (Berryhill, Chiu, & Hughes, 2006; Rashbass, 1961). Early investigations of pursuit eye movements were aimed at studying how the pursuit system was driven by retinal velocity errors (Robinson, 1965). The traditional stimulus for these studies was a bright spot on a dark background where there were no confounding variables. Later, through the work of Steinbach (1976), it became clear that pursuit is based to a large degree on the percept of motion rather than on the retinal stimulation. In a classical study, Steinbach presented a wheel rolling horizontally with light sources fixed to its rim in a dark room. When two light sources were on, observers perceived a rolling wheel and tracked its imagined center and not the individual lights undergoing a cycloidal motion trajectory. Following this study, a close relationship between pursuit and perceived rather than physical motion has been confirmed in numerous studies (Beutter & Stone, 1998, 2000; Dobkins, Stoner, & Albright, 1998; Madelain & Krauzlis, 2003; Ringach, Hawken, & Shapley, 1996; Steinbach, 1976; Stone, Beutter, & Lorenceau, 2000; Wyatt & Pola, 1979; Yasui & Young, 1975). Second-order motion (Butzer, Ilg, & Zanker, 1997; Hawken & Gegenfurtner, 2001), isoluminant motion (Braun et al., 2008), motion aftereffects (Braun, Pracejus, & Gegenfurtner, 2006; Watamaniuk & Heinen, 2007), biological motion (Orban de Xivry, Coppe, Lefevre, & Missal, 2010), and just about any stimulus that leads to the percept of visual motion can elicit pursuit eye movements. Many of these stimulus conditions give rise to motion perception that is not veridical, and there is a corresponding lack of veridicality in pursuit. Thus, at least at the qualitative level, there is a good correspondence between motion perception and pursuit, suggesting that both are based on the same computations of motion signals. At a closer level of scrutiny, several studies have shown the same biases for pursuit and perception. 
Motion perception and smooth pursuit: Bias
Under some conditions, the perceived motion direction of a stimulus deviates from its actual direction. In these cases, do we pursue the perceived direction or the veridical direction? Numerous studies indicate that in most conditions pursuit corresponds to the perceived direction. For example, Beutter and Stone (1998) found similar biases for direction judgments when they compared perceptual and oculomotor responses to plaid stimuli moving behind elongated apertures. In another study, Beutter and Stone (2000) studied the percept and concomitant pursuit eye movements of observers looking at partially occluded outlines of parallelograms, which moved 10 degrees to the left or right of vertical. Two vertical stationary apertures served as occluders and segmented these outlines into four separate line segments; the vertices stayed invisible. Depending on the contrast between apertures and background, observers had the percept of a single coherently moving figure or of separately moving lines (Lorenceau & Shiffrar, 1992). Observers' tracking behavior followed their percepts: When no contrast was provided, no object motion was perceived and the eyes moved vertically, following the line segments. With visible occluders, a coherent object moving in a diagonal direction was perceived and the eyes also moved diagonally. Along similar lines, Krukowski and Stone (2005) found an oblique effect for both direction judgments and pursuit responses to a moving spot. Such an effect was missed in an earlier study by Churchland, Gardner, Chou, Priebe, and Lisberger (2003), probably because their stimulus contained less uncertainty and only a small number of directions, both factors reducing statistical power.
There are also qualitative similarities that argue for common sensory processing for speed perception and pursuit. Smooth pursuit acceleration is reduced for isoluminant stimuli, which are also perceived as moving more slowly than luminance stimuli of comparable contrast (Braun et al., 2008). It is well established that low contrast results in perceptual slowing (Thompson, 1982), which is also found in pursuit (Spering, Kerzel, Braun, Hawken, & Gegenfurtner, 2005). Moreover, steady-state smooth pursuit gain and perceived speed are affected in the same way by the coherence and noise type of random-dot kinematograms (Schütz, Braun, Movshon, & Gegenfurtner, 2010).
Pursuit and motion perception can both be directly related to neural activity in the major motion-sensing area of the visual cortex, area MT. Neurons in area MT have been tightly linked to behavioral performance through the groundbreaking experiments by Newsome et al. (for reviews, see Movshon & Newsome, 1992; Newsome, Britten, Salzman, & Movshon, 1990). Lesions of area MT lead to deficits in motion perception and pursuit initiation (Newsome & Pare, 1988; Newsome, Wurtz, Dursteler, & Mikami, 1985), the firing of individual MT neurons can account for the behavioral performance of a monkey observer in motion direction discrimination tasks (Britten, Shadlen, Newsome, & Movshon, 1992), and microstimulation of a direction column in MT can systematically bias the monkey's direction judgments (Salzman, Murasugi, Britten, & Newsome, 1992). Area MT was therefore hypothesized to be the neural correlate of conscious motion processing (Block, 1996). However, this role of MT has been questioned because several stimulus conditions have more recently been identified whose motion can be perceived but is not signaled by neurons in area MT, such as several types of second-order motion (Ilg & Churan, 2004; Majaj, Carandini, & Movshon, 2007; Tailby, Majaj, & Movshon, 2010). Functional neuroimaging studies have made clear that there is a rich network of motion-sensitive areas in visual cortex, which seem to be important for motion integration (Culham, He, Dukelow, & Verstraten, 2001; Sunaert, Van Hecke, Marchal, & Orban, 1999). So far, it is not clear to what degree each of these other areas contributes to perception and pursuit.
Several studies bridge the gap between neural activity in area MT of monkeys and pursuit eye movements. A particularly nice example of agreement between neuronal responses in area MT and pursuit eye movements was discovered in the context of the so-called aperture problem. An infinite number of motion vectors are compatible with the change in position of an elongated line within a circular aperture (Adelson & Movshon, 1982). The small receptive fields of neurons in V1 and foveal MT can be thought of as such apertures. Several processing steps and integration over space and time are required to reconstruct the true movement direction (Bayerl & Neumann, 2007; Masson & Stone, 2002). Pack and Born (2001) analyzed the time course of direction selectivity of single-unit responses in macaque area MT to moving line segments presented at different orientations. They found that the response properties of MT neurons changed over time. While early MT responses showed an interaction between movement direction and stimulus orientation, late responses became independent of line orientation and followed the true movement direction (Figure 5c). These temporal dynamics of motion signal integration were also represented in the continuous change of pursuit direction during the early phase of pursuit initiation. Pursuit started out toward the direction orthogonal to the line and changed into the true direction of motion at the end of pursuit initiation (Figures 5a and 5b; Born, Pack, Ponce, & Yi, 2006; Masson & Stone, 2002; Wallace, Stone, & Masson, 2005). These dynamic changes of motion integration over time were also found for the initiation of ocular tracking movements (Masson, Rybarczyk, Castet, & Mestre, 2000).
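The geometry behind the aperture problem can be stated in a single constraint. For a line with unit normal n and unit tangent t, a local detector measures only the normal speed component v⊥, so every velocity on the constraint line below is consistent with the measurement (our notation):

```latex
% Aperture constraint: only the component of image velocity v along the
% line's unit normal n is measurable locally; lambda is a free parameter
% along the line's unit tangent t.
\mathbf{v}\cdot\mathbf{n} = v_{\perp},
\qquad
\mathbf{v} = v_{\perp}\,\mathbf{n} + \lambda\,\mathbf{t},
\quad \lambda \in \mathbb{R}
```

Early responses default to the normal solution (λ = 0), which is exactly the orthogonal bias seen in both early MT responses and pursuit initiation; integrating constraints across differently oriented apertures, or over time, singles out the true velocity.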
Figure 5
 
Temporal dynamics of the solution of the aperture problem. A bar was either orthogonal (red) or tilted (blue and green) relative to its motion direction. Smooth pursuit eye movements and neural responses in area MT were measured. (a) Eye velocity perpendicular to the target motion. (b) Eye velocity parallel to the target motion. (c) The preferred direction responses of 60 MT neurons show a continuous transition from orientation-dependent to motion-dependent responses (at about 140 ms) evolving over 60 ms. Figure modified from Pack and Born (2001).
During the steady-state phase, the final corrected pursuit direction stays stable even during transient object blanking (Masson & Stone, 2002). Knowing the target motion direction or orientation does not eliminate these transient tracking direction errors at pursuit initiation (Montagnini, Spering, & Masson, 2006). However, this is different for pursuit that starts before the onset of motion, which is driven by the cognitive expectation of the target motion and called anticipatory pursuit (Kowler & Steinman, 1979a, 1979b). It was found that anticipatory pursuit direction was close to the true 2D motion direction. Therefore, both signals, retinal image motion and object motion prediction, seem to be independent: The earliest phase of pursuit and reflexive tracking are influenced by low-level motion signals that are always computed for each pursuit or ocular following initiation irrespective of past experiences. Anticipatory pursuit, however, is strongly influenced by learning or knowledge of object trajectory (Kowler, 1989). 
These studies show that the direction of pursuit eye movements can be directly related to the direction tuning of individual MT neurons. The story is more complicated for speed, since the motion-sensitive MT neurons respond to a range of speeds and speed inherently has to be coded by a population of speed-tuned neurons (Dubner & Zeki, 1971; Maunsell & Van Essen, 1983; Movshon, Lisberger, & Krauzlis, 1990). To enable a comparison with pursuit, Lisberger et al. (Churchland & Lisberger, 2001; Lisberger, 2010; Priebe & Lisberger, 2004; Yang & Lisberger, 2009) have established such a model for the population coding of speed in area MT. Basically, their model uses the vector average of the responses of many MT neurons to indicate speed. They used this model to show a correspondence between pursuit, perception, and physiology for apparent motion (Churchland & Lisberger, 2001; Lisberger, 2010). In apparent motion, flashes appear sequentially along a virtual motion trajectory. When the temporal gap between the flashes is increased, perceived speed and initial pursuit acceleration are both increased above the levels for smooth motion. This is somewhat counterintuitive because increasing the temporal gap reduces the quality of motion and should rather lead to a reduction of perceived speed and pursuit acceleration. Interestingly, the population coding model (Churchland & Lisberger, 2001) predicts the increase in perceived and pursuit speed from neural activity in MT. As expected from the reduction of motion quality, the activity of neurons in MT is reduced when the temporal gap is increased, but the reduction is more pronounced for neurons with low preferred speeds. This imbalance results in higher estimates of speed when it is based on the vector average across the population response. Hence, the paradoxical increase of perceived speed and pursuit acceleration for apparent motion can be explained by an imbalance in the population response of area MT. 
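The decoding step of such a population model is simple to write down: the speed estimate is the response-weighted average of the neurons' preferred speeds, so selectively depressing the slow-preferring neurons pushes the estimate upward. A minimal sketch with made-up tuning parameters (our illustration, not the fitted Churchland and Lisberger model):

```python
# Sketch of vector-average population decoding of speed, in the spirit of
# Churchland & Lisberger (2001). Tuning curves and the apparent-motion
# suppression factor are hypothetical.
import numpy as np

pref = np.logspace(0, 5, 32, base=2)        # preferred speeds, 1-32 deg/s

def responses(stim_speed, slow_cell_gain=1.0):
    """Log-Gaussian speed tuning; optionally depress slow-preferring cells,
    mimicking the imbalance observed for apparent motion."""
    r = np.exp(-0.5 * np.log2(pref / stim_speed) ** 2)
    gain = np.where(pref < stim_speed, slow_cell_gain, 1.0)
    return r * gain

def decode(r):
    """Vector-average readout: response-weighted mean of preferred speeds."""
    return np.sum(r * pref) / np.sum(r)

smooth = decode(responses(8.0))
apparent = decode(responses(8.0, slow_cell_gain=0.6))
print(f"smooth motion estimate:   {smooth:5.2f} deg/s")
print(f"apparent motion estimate: {apparent:5.2f} deg/s (biased upward)")
```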
Motion perception and smooth pursuit: Accuracy and noise
The aforementioned studies show that perception and pursuit generally follow the same biases. This indicates that they use similar neural computations, but it does not prove that they use the exact same neural machinery. Although unlikely, it would still be possible that they rely on parallel processing streams that merely execute similar computations. One way to approach this question is to measure the accuracy of perception and pursuit in terms of speed and direction. In a seminal study, Kowler and McKee (1987) asked how well pursuit and perception are capable of detecting and discriminating speed differences of single moving spot-like stimuli. To facilitate the comparison between perceptual and pursuit thresholds, they introduced the novel concept of an oculometric function. In psychophysics, established methods for measuring perceptual discriminability have existed since the 19th-century work of Weber and Fechner. A number of stimuli differing only slightly in one attribute, for example, speed, are repeatedly presented. The observer's task is to judge the speed of each stimulus relative to an implicit (method of single stimuli) or explicit (method of constant stimuli) standard stimulus. The increase in the proportion of “faster” judgments with increasing velocity is typically well described by a cumulative Gaussian function. The standard deviation of the underlying Gaussian can then be used as an estimate of the discrimination threshold. To construct the equivalent oculometric functions, Kowler and McKee measured the speed of pursuit eye movements in response to different stimulus speeds. Whenever the eye moved faster than the average over all trials, this was treated the same way as if the observer had given a “faster” judgment. When the steady-state phase of pursuit was analyzed, about 500 ms after the stimulus had started to move, the resulting speed discrimination thresholds for perceptual judgments and pursuit were remarkably similar over the whole range of speeds Kowler and McKee investigated, as can be seen in Figure 6. This basic finding of a rough equivalence of perceptual and pursuit thresholds has been replicated numerous times under slightly different circumstances, both for speed and direction changes (Beutter & Stone, 1998, 2000; Braun et al., 2006; Gegenfurtner, Xing, Scott, & Hawken, 2003; Kowler & McKee, 1987; Stone & Krauzlis, 2003; Tavassoli & Ringach, 2010). This overall good agreement between pursuit and perception for direction and speed indicates that the pursuit system exploits all of the available motion information for all types of visual motion stimuli. 
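The logic of the oculometric function is easy to illustrate with simulated data. The sketch below is our illustration, not Kowler and McKee's analysis code, and all stimulus speeds and noise levels are assumptions; it scores simulated steady-state eye speeds against the grand mean, exactly as described above, and fits a cumulative Gaussian whose standard deviation serves as the oculometric threshold.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(1)
speeds = np.array([9.0, 9.5, 10.0, 10.5, 11.0])  # stimulus speeds (deg/s)
trials = 200

# Simulated steady-state eye speeds: veridical mean plus Gaussian noise.
eye = speeds[:, None] + rng.normal(0.0, 0.8, size=(speeds.size, trials))

# Oculometric scoring: eye speed above the grand mean counts as "faster".
p_faster = (eye > eye.mean()).mean(axis=1)

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian oculometric (or psychometric) function."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_gauss, speeds, p_faster, p0=[10.0, 1.0])
print(f"oculometric speed discrimination threshold: {sigma:.2f} deg/s")
```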
Figure 6
 
Weber fractions (discrimination threshold/target velocity) for pursuit (red) and perception (blue) as a function of target velocity. Data are redrawn from Kowler and McKee (1987).
While the interpretation of these results seems relatively straightforward, they are not so easy to reconcile with standard thinking about the signals that are used for pursuit and motion perception. The processes involved in pursuit and perception are quite different, and it is not at all clear how to compare a dynamic motor response with a rating or judgment. For perceptual judgments, information is accumulated as long as the stimulus is present and can subsequently be analyzed and mentally compared with previous trials until a decision, most often a binary one, is made, typically a few seconds later. Pursuit, as a dynamic continuous response, is initiated about 100–150 ms after stimulus motion onset and has two temporally distinct phases that are also characterized by different visual stimulation. Due to neuronal latencies, only about 30–50 ms of the retinal motion stimulus can be processed before the eyes start to move; this is the initial open-loop phase of pursuit. Then, the retinal target motion signal gradually changes owing to the continuous smooth eye rotations after pursuit onset. This visual feedback signal can be used to refine the motion estimate once the efference copy signal related to the eye velocity is also available (Lisberger, 2010). This is the second, closed-loop phase of pursuit, or steady state. When perceptual and pursuit responses during steady state are given at the same time, the efference copy signal of the eye movement command is a potential source of information for motion perception (Braun et al., 2008; Braun, Schütz, & Gegenfurtner, 2010; Royden, Banks, & Crowell, 1992; Spering et al., 2011). As a result, interpreting the signals derived from a direct comparison of perception and pursuit is not simple and is limited by the different nature of the responses. Surprisingly, more often than not, oculometric and psychometric thresholds closely match, but the interpretation of the underlying mechanisms is not straightforward. 
Nonetheless, most investigators agree that there is an initial common stage of motion analysis for perception and pursuit that is ultimately followed by divergent pathways. The common stage is subject to a common source of noise, while separate sources of noise act at the segregated stages. It is the magnitude of these different noise sources that is of current interest. A number of studies have investigated these noise sources in detail using a range of conditions (Beutter & Stone, 2000; Braun et al., 2008; Gegenfurtner et al., 2003; Kowler & McKee, 1987; Osborne, Hohl, Bialek, & Lisberger, 2007; Osborne, Lisberger, & Bialek, 2005; Stone & Krauzlis, 2003; Tavassoli & Ringach, 2010). If pursuit and perception rely on the same sensory estimates and little noise is added thereafter, one would predict both similar discrimination performance and a covariation of responses on a trial-by-trial basis. If, however, perception and pursuit rely on different sensory processing mechanisms, or if specific, private noise is added downstream in the pursuit and perceptual systems, no covariation of perceptual and pursuit responses would be predicted, yet discrimination performance could still be equal. 
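This prediction logic can be illustrated with a small simulation. In the sketch below (our illustration; all noise magnitudes are arbitrary assumptions), two scenarios are constructed to have nearly the same total response variability, and hence the same thresholds, yet only the shared-noise scenario produces a trial-by-trial correlation between pursuit and perception.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 5000
sensory = rng.normal(0.0, 1.0, n_trials)  # error of a shared sensory estimate

# Scenario A: noise is mostly shared between pursuit and perception.
pursuit_a = sensory + rng.normal(0.0, 0.3, n_trials)
percept_a = sensory + rng.normal(0.0, 0.3, n_trials)

# Scenario B: roughly the same total variance, but noise is mostly private.
pursuit_b = 0.3 * sensory + rng.normal(0.0, 1.0, n_trials)
percept_b = 0.3 * sensory + rng.normal(0.0, 1.0, n_trials)

for label, p, j in [("mostly shared ", pursuit_a, percept_a),
                    ("mostly private", pursuit_b, percept_b)]:
    print(label, "response SD:", round(float(p.std()), 2),
          "trial-by-trial r:", round(float(np.corrcoef(p, j)[0, 1]), 2))
```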
Stone and Krauzlis (2003) measured direction discrimination thresholds for pursuit and perception. In their task, a bright white spot moved along one of the cardinal directions or along a direction slightly clockwise (cw) or counterclockwise (ccw) from it. Observers pursued the spot and indicated its direction (cw or ccw) after each trial. From both responses, oculometric and psychometric functions were constructed, and the direction discrimination thresholds turned out to be similar for perception and steady-state pursuit. Importantly, the authors also analyzed the trial-by-trial covariation and found a significant correlation. For a given stimulus direction, when pursuit was, for example, directed more clockwise than the average pursuit response for that direction, observers tended to judge the spot direction as more clockwise, too. From this covariation, they concluded that a shared neural mechanism encoding the direction of target motion is responsible for the similar noise found for perception and pursuit, but that additional noise is also added separately downstream. 
Gegenfurtner et al. (2003) performed analogous experiments for speed discrimination. In their task, a small Gabor patch moved horizontally and briefly changed its speed. Observers had to pursue the patch and indicate whether it became faster or slower during the perturbation. Again, oculometric and psychometric functions indicated similar discriminability of speed differences, even though there was some degree of variation between observers. The major difference from the results for stimulus direction (Stone & Krauzlis, 2003) was that no trial-by-trial covariation was observed: For trials on which there was no perturbation but observers reported faster or slower speed, the change in eye speed was not correlated with the perceptual judgment. The estimated common variance for pursuit and perception was below 10% in these experiments, independent of whether long or short perturbations were used and independent of the particular analysis interval chosen for the comparison between pursuit and perception. This result was further strengthened by a similar analysis of the motion aftereffect on pursuit and perception (Braun et al., 2006), where the agreement in the magnitude of the motion aftereffect for both was excellent. However, for any given condition, the pursuit speed in response to the aftereffect was independent of the observer's judgment. These results suggest that, under some conditions at least, the crucial components of the pathways are independent. 
Despite the disagreement about the covariation between pursuit and perception, the studies mentioned above show good agreement between the discriminative abilities of pursuit and perception. In general, there seems to be a slight tendency for perception to be more accurate than steady-state pursuit, even though some examples to the contrary do exist (see observer LP in Gegenfurtner et al., 2003). More recently, Tavassoli and Ringach (2010) found that there can also be systematic advantages for the pursuit system over perception. They measured oculometric and psychometric thresholds for classifying the polarity of sinusoidal speed perturbations as peak first or peak last. In their experiments, there was a range of perturbation magnitudes for which the pursuit system did better than perception. Does this mean that the two systems are completely separate? The results strengthen the argument that the requirements for perception and pursuit can be quite different. The pursuit response is continuous and therefore requires a quick reaction to small changes in speed to track objects accurately. Latencies of pursuit to speed perturbations are as fast as 67 ms (Tavassoli & Ringach, 2009). During steady-state pursuit, the sensorimotor transformation gain is increased, which leads to a faster and stronger response to small changes in target velocity (Schwartz & Lisberger, 1994; Tanaka & Lisberger, 2001). The results of Tavassoli and Ringach suggest that the readout of motion information, for example, from area MT, could simply have different dynamics for pursuit and perception, which would also agree with the lack of covariation between pursuit and perception for stimulus speed. A major dissociation between pursuit speed and perceptual speed judgments was found in a more complex situation by Spering and Gegenfurtner (2007). In this paradigm, observers had to pursue a target and judge its speed, while the speed of the target and/or of peripheral context stimuli was perturbed. Pursuit speed showed an integration of target and context speed, whereas speed judgments showed a contrast between target and context speed. In other words, pursuit speed increased with increasing context speed, while perceived speed became slower. These differences between pursuit and perception do not necessarily imply fundamentally different processing streams. It might be that information from low-level motion areas such as MT or MST is decoded differently for different tasks (Jazayeri & Movshon, 2007). 
While some of these studies emphasize the differences between perception and pursuit, a fundamentally different suggestion was recently put forward by Osborne et al. (2007, 2005). Their hypothesis is that nearly all of the pursuit variability can be explained by sensory errors exclusively and that no noise is added in separate processing for pursuit and perception or by the motor system. There are two arguments that led them to their rather radical conclusion. The first argument is theoretical. They measured the variability of the pursuit response of several monkey observers and found that three main factors, speed, direction, and timing, could explain most of the variance. They then go on to propose that these factors are purely sensory, an argument that appears difficult to evaluate directly. The second argument is based on empirical data, where they propose that the observed pursuit variability at the end of the open-loop phase—before any visual feedback is possible—is identical to the sensory noise. If this is the case, then any additional motor noise is inconsequential. The question of whether there is identical noise and therefore a common pathway should be easy to resolve by comparing pursuit and perception. The authors did not do this, however, in their study with monkeys. They compared pursuit variability of monkeys to psychophysical thresholds measured on human observers by different experimenters (De Bruyn & Orban, 1988) using different stimuli. Osborne et al. interpret the results of this comparison as favoring their hypothesis of zero motor noise, even though the pursuit variability at the end of the open-loop period is, in some cases, twice as high as the corresponding psychophysical thresholds. 
Rasche and Gegenfurtner (2009) tried to resolve the apparent discrepancy between Osborne et al. (2007, 2005) and earlier studies (Gegenfurtner et al., 2003; Kowler & McKee, 1987). They measured pursuit variability for stimuli differing in speed and also collected psychophysical speed discrimination data from the same three well-trained human observers. Their data are in good agreement with earlier measurements on human speed discrimination (De Bruyn & Orban, 1988; Kowler & McKee, 1987), but their results do not support Osborne et al.'s proposal of zero motor noise. Rasche and Gegenfurtner estimated that at the end of the open-loop period, an average of 60% of the total variability comes from motor sources. Findings similar to those of Rasche and Gegenfurtner were recently reported by Boström and Warzecha (2010) for open-loop ocular following responses. 
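The arithmetic behind such a variance partition is simple if one assumes independent, additive sensory and motor noise. The numbers in the sketch below are illustrative, chosen only to show how a motor share of about 60% would follow; they are not the values reported by Rasche and Gegenfurtner.

```python
# Illustrative numbers only; not the values reported in the cited studies.
sensory_sd = 0.6   # speed SD implied by perceptual thresholds (deg/s)
pursuit_sd = 0.95  # total open-loop pursuit speed SD (deg/s)

# Assuming independent, additive noise: total variance = sensory + motor.
motor_var = pursuit_sd ** 2 - sensory_sd ** 2
motor_share = motor_var / pursuit_sd ** 2
print(f"motor share of pursuit variance: {motor_share:.0%}")  # about 60%
```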
In principle, it should be possible to resolve the question of the extent to which motor behavior and perception are driven by shared neural signals if the variability of sensory and motor noise could be measured directly at the corresponding stages of the neural pathways. Unfortunately, some of the important quantities are not easy to measure directly. Speed is coded by population activity, and computing speed from the population requires assumptions about how many neurons are involved in controlling behavior and how the firing rates of individual neurons are correlated. Increasing the number of neurons can reduce variability, but only if the noise correlation between the neurons is low (Zohary, Shadlen, & Newsome, 1994). Lisberger et al. have modeled various stages of the pursuit pathway, from area MT (Huang & Lisberger, 2009) via the frontal eye fields (Schoppik, Nagel, & Lisberger, 2008) down to the cerebellum (Medina & Lisberger, 2007). Different combinations of parameter values for neurons in the different brain regions lead to predictions that can explain most of the observed variance in pursuit. Lisberger (2010) favors a model in which neuronal activity is noisy and correlated in area MT and precise in downstream areas, leading to a sensory variability that matches the overall pursuit variability. Decreasing the amount of noise in MT would allow for a larger degree of noise at the decision or motor stages. The question of which of the many parameter combinations is correct is not yet resolved. Determining the degree of correlation between neurons in a brain region is quite a difficult question in its own right and depends on many factors (Cohen & Kohn, 2011; Ecker et al., 2010). Even if the average degree of correlation could be determined, it is still more difficult to figure out how many neurons, and which subpopulations of neurons, contribute to a certain behavior. Therefore, the physiological data do not yet provide a clear answer about the contributions of the different noise sources. 
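The pooling argument can be stated compactly: For N neurons with equal variance and a common pairwise noise correlation, the variance of their average saturates rather than vanishing, which is why the number of neurons and their correlation trade off against each other. The sketch below uses the standard formula for equally correlated units (cf. Zohary et al., 1994); the specific variance and correlation values are illustrative assumptions.

```python
def pooled_variance(n, sigma2=1.0, rho=0.1):
    """Variance of the average of n neurons with equal pairwise correlation."""
    return sigma2 * (1.0 + (n - 1.0) * rho) / n

for n in (1, 10, 100, 1000):
    print(n, round(pooled_variance(n), 3))  # saturates near rho * sigma2 = 0.1
```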
To summarize, excellent agreement between perception and pursuit has been found for the average responses to different stimulus properties, for instance, contrast (Spering et al., 2005; Thompson, 1982), motion coherence (Schütz et al., 2010), and object coherence (Beutter & Stone, 1998, 2000). This shows that perception and pursuit use at least similar neural computations. However, the crucial test of whether they use identical neural substrates is the analysis of common variability. The available evidence suggests that there might be different answers for direction and speed. While perception and pursuit covary to a substantial degree in direction discrimination (Stone & Krauzlis, 2003), there is no such correlation for speed discrimination (Gegenfurtner et al., 2003). Such a difference between direction and speed might appear unlikely at first sight, but it is already known that pursuit requires specific processing for motion speed but not for motion direction: The increase of sensorimotor transformation gain is direction specific (Schwartz & Lisberger, 1994), which means that it affects only the speed of the eye movements but not their direction. 
How does pursuit influence perception?
As we have described above, the sensorimotor transformation gain is increased during the execution of smooth pursuit (Schwartz & Lisberger, 1994; Tanaka & Lisberger, 2001). This raises the question: Does pursuit modulate only the sensorimotor transformation, or sensory processing as well? For the perception of stationary and moving objects during pursuit, several effects are well established for the initiation and the execution of pursuit. Filehne (1922) reported that a briefly presented stationary object appeared to move in the direction opposite to the pursuit target. Aubert (1886) and Fleischl (1882) observed that moving objects appeared to move more slowly during pursuit than during fixation. Recently, these effects have been explained in terms of Bayesian integration of retinal and extraretinal signals (Freeman, Champion, & Warren, 2010). In addition, the trajectories of diagonally (Festinger, Sedgwick, & Holtzman, 1976; Morvan & Wexler, 2009) or perpendicularly (Souman, Hooge, & Wertheim, 2005) moving objects are reported to be rotated away from the pursuit direction. Several recent studies have made clear that visual perception is modulated during pursuit not only passively, by the change in retinal input, but also actively (Hafed & Krauzlis, 2006; Schütz, Braun, Kerzel, & Gegenfurtner, 2008; Tong, Stevenson, & Bedell, 2008). Here, we concentrate on studies regarding the effects of pursuit on visual acuity, contrast sensitivity, attention, and spatial localization, because the effects of pursuit on the perception of motion and direction have recently been reviewed in detail (Spering & Gegenfurtner, 2008; Spering & Montagnini, 2011). 
Visual acuity and object recognition
The principal purpose of pursuit, as stated in most textbooks, is the stabilization of selected moving objects on the fovea to allow them to be inspected with the highest acuity. This is a plausible assumption, given that fast retinal motion impairs visual acuity (Westheimer & McKee, 1975) and shifts the visible range to lower spatial frequencies (Burr & Ross, 1982). However, there are surprisingly few studies directly measuring more complex aspects of spatial vision, such as object recognition, during smooth pursuit. Ludvigh and Miller (1958) studied visual acuity during fixation and pursuit with Landolt stimuli. They found that visual acuity was not affected at low velocities but declined rapidly at higher velocities. They hypothesized that this is due to imperfect image stabilization by pursuit. Later, Brown (1972a, 1972b) and Methling and Wernicke (1968) studied the effects of stimulus contrast, size, and position on human dynamic acuity. These studies confirmed that dynamic visual acuity during pursuit depends solely on retinal stabilization and is limited only by the accuracy of the eye movement. Recently, Schütz, Braun, and Gegenfurtner (2009) probed the recognition of letters flashed within a noise patch after a saccade was executed to the stationary or moving noise patch, so that the saccade was effectively followed by fixation or smooth pursuit. Interestingly, very short presentation durations were sufficient for recognition even during pursuit. This indicates that longer pursuit epochs are probably not necessary for object recognition and may serve different purposes, such as the refinement of motion signals for prediction (Spering et al., 2011), heading (Royden et al., 1992), or perceptual coherence (Hafed & Krauzlis, 2006). Hence, retinal stabilization might be just one purpose of smooth pursuit, besides the prediction of visual motion. This is also supported by the fact that in ball sports, players often pursue the ball (Land, 2006), although they certainly do not need to recognize the ball. 
Contrast sensitivity and motion smear
Compared to saccades (Kowler, 2011; Ross, Morrone, Goldberg, & Burr, 2001), the degree of visual suppression and distortion is much smaller during pursuit (Schütz, Braun, & Gegenfurtner, 2007; Schütz & Morrone, 2010; van Beers, Wolpert, & Haggard, 2001). During saccades, the whole retinal image moves quickly, which confronts the visual system with the problem of maintaining a stable perception of the world. The visual system deals with that problem by active suppression of the visual input (Burr, Morrone, & Ross, 1994; Volkmann, 1962) and by forward and backward masking (Campbell & Wurtz, 1978; Castet, Jeanjean, & Masson, 2002). It is important to note that this leads only to threshold elevations for some types of stimuli (Burr et al., 1994; Castet & Masson, 2000), not to a complete block of visual processing. Saccades are short in duration, and the distorted retinal input presumably provides little information, so that the system can afford to raise the thresholds. The situation is different for smooth pursuit, which can last for several seconds and induces only moderate retinal speeds. Compared to the strong suppression of luminance stimuli during saccades, there is almost no reduction of luminance contrast sensitivity during pursuit initiation (Schütz, Braun et al., 2007). While luminance sensitivity for low spatial frequencies is either unaffected by pursuit or only slightly impaired, larger and surprisingly beneficial effects have been observed for chromatic targets and for high-spatial-frequency luminance stimuli (Schütz et al., 2009, 2008). In these experiments, stimuli were oriented parallel to the pursuit trajectory and flashed for 10 ms to minimize retinal motion and to equate retinal stimulation during pursuit and fixation. Under these conditions, chromatic contrast sensitivity is better during pursuit than during fixation (Figures 7a and 7b; Schütz et al., 2008). The enhancement of sensitivity starts about 50 ms before pursuit onset (Figure 7c), and its magnitude scales with pursuit velocity (Schütz et al., 2009, 2008). By measuring the chromatic temporal impulse response function, it could be shown that the enhancement of sensitivity is caused by a general increase in contrast gain rather than by a change in temporal integration (Schütz et al., 2009). However, this enhancement is not a pure color effect, since it also affects luminance stimuli, but only at spatial frequencies above 3 cpd (Schütz et al., 2009). As the magnocellular pathway cannot process stimuli defined only by color or high spatial frequencies, it is likely that this enhancement originates in the parvocellular pathway. Since the temporal contrast sensitivity function has a low-pass shape for color and high-spatial-frequency luminance stimuli (Kelly, 1975, 1979, 1983), the retinal motion during pursuit will impair sensitivity especially for these stimuli. It might be that the enhancement, which has been measured with flashed stimuli that do not move on the retina, serves to compensate for the detrimental effect of retinal motion on physically stationary stimuli during pursuit. 
Figure 7
 
Chromatic contrast sensitivity. (a) Contrast sensitivity functions for one subject during pursuit (blue) and fixation (red). (b) Contrast sensitivity during pursuit and fixation for 11 subjects. The filled square represents the mean across subjects; the error bar shows the 95% confidence interval of the difference between pursuit and fixation. (c) Chromatic detection rate during pursuit initiation for one subject. Detection rate (blue) and eye velocity (green) are aligned to pursuit onset. The increase in detection rate (black circle) starts about 50 ms before pursuit onset. Data are redrawn from Schütz et al. (2008).
Contrast sensitivity for moving stimuli is especially interesting in the context of pursuit, because pursuit changes the retinal speed of the stimuli. In general, it has been shown that sensitivity during pursuit depends on the retinal temporal frequency and not on the physical temporal frequency (Flipse, van der Wildt, Rodenburg, Keemink, & Knol, 1988; Murphy, 1978; Schütz, Delipetkos, Braun, Kerzel, & Gegenfurtner, 2007; Tong, Ramamurthy, Patel, Vu-Yu, & Bedell, 2009). However, temporal contrast sensitivity during pursuit is not exactly the same as during fixation. There are interesting differences between stimuli moving in the pursuit direction and stimuli moving against it. One study found a general reduction of temporal contrast sensitivity for stimuli moving opposite to the pursuit direction (Schütz, Delipetkos et al., 2007), and another study found an acceleration of the temporal impulse response function for such stimuli (Tong et al., 2009). It has also been reported that the critical flicker fusion frequency for colored stimuli is higher during pursuit than during fixation, but only for stimuli moving in the direction opposite to pursuit (Terao, Watanabe, Yagi, & Nishida, 2010). Despite the differences among these studies, it is clear that contrast sensitivity for motion against the pursuit direction differs from that for other motion directions. Such an asymmetry has also been observed for the extent of motion smear. Moving objects appear blurred when the eyes and the head are kept stable (Burr, 1980). During smooth pursuit eye movements, the whole stationary scene moves on the retina and therefore should appear blurred. Research by Bedell et al. (reviewed in Bedell, Tong, & Aydin, 2010) has shown that the amount of motion smear is reduced during smooth pursuit (Bedell & Lott, 1996), specifically for directions opposite to pursuit (Tong, Aydin, & Bedell, 2007). This reduction is at least partially triggered by proprioceptive eye-muscle signals (Tong et al., 2008), which have recently been identified in primary somatosensory cortex (Wang, Zhang, Cohen, & Goldberg, 2007). Presumably, the perception of the stationary scene, which moves opposite to the pursuit direction on the retina, benefits from this reduction of motion smear. These results on contrast sensitivity and perceived motion smear indicate that perception during pursuit is actively adapted to the changes in retinal input induced by the eye movements. 
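The dependence on retinal rather than physical temporal frequency follows from simple geometry: the retina sees the stimulus velocity relative to the eye, and temporal frequency is the product of that retinal speed and the spatial frequency. The sketch below makes the computation explicit; the specific speeds and spatial frequency are illustrative values, not taken from any of the cited studies.

```python
def retinal_temporal_frequency(stim_speed, eye_speed, spatial_freq):
    """Temporal frequency (Hz) = |retinal speed (deg/s)| * spatial freq (cpd)."""
    return abs(stim_speed - eye_speed) * spatial_freq

# A 2-cpd grating drifting at 10 deg/s:
print(retinal_temporal_frequency(10.0, 0.0, 2.0))   # fixation: 20 Hz
print(retinal_temporal_frequency(10.0, 10.0, 2.0))  # accurate pursuit: 0 Hz
```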
Visuospatial attention
As we have shown above, smooth pursuit eye movements can exert a direct influence on visual contrast sensitivity, but they might also exert an indirect influence via spatial attention. Visuospatial attention is known to enhance perceptual processing of a selected and spatially limited part of the visual scene. Several investigations have reported a close spatial and temporal coupling between visual attention and saccadic eye movements (Deubel & Schneider, 1996; Hoffman & Subramaniam, 1995; Kowler, Anderson, Dosher, & Blaser, 1995), and this seems also to be the case for smooth pursuit. In a classical study, Khurana and Kowler (1987) used a dual task to study attention during pursuit. In 4 × 4 alphanumeric arrays, observers were asked to pursue one pair of rows moving at a common speed (the target), surrounded by non-target rows moving at a different speed. They found that identification judgments for the pursued target rows were 2–3 times better than those for the non-target rows. In addition, observers were not able to divide attention, that is, to keep fixation while attending to the moving rows. These results led to the conclusion that pursuit and attention share resources. 
Recent studies suggest that the coupling between attention and pursuit is more complex. The role of attention in the maintenance of pursuit seems to depend on the properties of the pursuit target and of the target to which attention has to be diverted. Diverting attention to a peripheral target reduces pursuit gain only if the attention target is salient and creates retinal motion (Kerzel, Souto, & Ziegler, 2008). This is presumably a pursuit-specific problem, because it is possible to divide attention between a retina-centered and a space-centered target (Niebergall, Huang, & Martinez-Trujillo, 2010). Furthermore, the size of the attention window can be flexibly adjusted (Heinen, Jin, & Watamaniuk, 2011): First, blanking the central spot of a pursuit target array improves performance at peripheral spots. Second, adding background dots that move at the same speed as the pursuit target improves performance overall. These results indicate that the size of the attentional window depends critically on the properties of the pursuit target. By manipulating the relative amount of attention dedicated to steady-state pursuit or to a peripheral task, it was shown that attention cannot be traded one to one between the two tasks (Kerzel, Born, & Souto, 2009). This has been taken as evidence that attention interferes with pursuit only during target selection. As for the maintenance of pursuit, the role of attention in the initiation of pursuit also depends on the properties of the attention target. If there is no conflicting motion stimulus, dividing attention delays the onset of catch-up saccades and of closed-loop pursuit (Souto & Kerzel, 2008) but leaves the latency of the open-loop initiation of pursuit unaffected. This is compatible with the idea that open-loop tracking is a reflexive behavior that does not require an effort of will. However, when attention has to be diverted to a distractor moving in the direction opposite to the pursuit target, open-loop initiation is delayed. This indicates that the selection of a pursuit target among other targets critically relies on attention (Ferrera & Lisberger, 1995). 
The above-mentioned studies showed that attention is necessary for target selection and for the entry to and maintenance of the closed-loop phase of pursuit. In particular, distractors creating retinal motion impair pursuit. The precise spatial position of the attentional focus during pursuit, however, is still being actively investigated. Studies measuring saccadic and manual response latencies found shorter latencies for locations broadly ahead of the pursuit target (Khan, Lefevre, Heinen, & Blohm, 2010; Van Donkelaar & Drew, 2002). However, discrimination performance for non-transient stimuli seems to be best at the tracked target (Lovejoy, Fowler, & Krauzlis, 2009). To conclude, recent studies indicate that the coupling of smooth pursuit and attention is more flexible than initially assumed and depends on properties of the pursuit target as well as on properties of the attended peripheral target. The precise spatial position of attention is either ahead of or on the pursuit target, depending on the measured response type. 
Spatial localization
Another important aspect of perception is the localization of stimuli. When a visual target is briefly flashed during pursuit, the retinal position of the flash and the eye position have to be combined in order to arrive at an accurate localization in external space. Usually, observers make small but systematic errors: Along the pursuit trajectory, flashes are mislocalized in the direction of pursuit (forward shift), and orthogonal to the pursuit trajectory, flashes are mislocalized away from the fovea (expansion). The forward shift could be, in principle, a simple temporal misalignment between the retinal and extraretinal signals (Brenner, Smeets, & van den Berg, 2001). However, this explanation is not applicable to the expansion effect, and several aspects of the forward shift argue against a simple temporal explanation as well. First, the forward shift is not spatially uniform but is larger for the hemifield toward which the eyes are moving and larger for targets farther away from the fovea (Kerzel, Aivar, Ziegler, & Brenner, 2006; Matsumiya & Uchikawa, 2000; Mitrani & Dimitrov, 1982; Rotman, Brenner, & Smeets, 2004; van Beers et al., 2001). Second, the magnitude of the mislocalization is influenced only by spatial cues, not by temporal cues (Rotman, Brenner, & Smeets, 2002). The forward shift is also strongly affected by the spatial context: Adding a structured background reduces the mislocalization (Brenner et al., 2001), and presenting a wall between the actual position and the mislocalized position can block the mislocalization (Noguchi, Shimojo, Kakigi, & Hoshiyama, 2007). Interestingly, even the perceived shape and color of the flash are affected by the wall. This indicates that the flash is processed within an extended integration window in space and time. 
Rotman et al. (2004) investigated whether the mislocalization depends on the motion of the pursuit target or on the motion of the eyes, and whether it depends on motion occurring before or after the flash. They presented a flash before or after the pursuit target changed its direction and found that the mislocalization depends mainly on the motion of the eyes after the flash. This is consistent with the observation that the effect of the spatial context depends mainly on its presence after the flash (Noguchi et al., 2007). The mislocalization during pursuit seems to be modality specific, since there are large differences between the mislocalization of visual and auditory targets (Königs & Bremmer, 2010): Auditory targets are symmetrically compressed toward the pursuit target, presumably an instance of the ventriloquism effect. 
The asymmetry of the visual mislocalization also argues for the contribution of an extraretinal signal. Rotman, Brenner, and Smeets (2005) showed that flashes with durations of up to 200 ms, as well as stimuli moving with the pursuit target, are mislocalized. This led to the interpretation that localization is achieved by summing the retinal and extraretinal signals. If the retinal signal does not contain motion against the pursuit direction, because the stimulus is just briefly flashed or because it moves with the eye, the stimulus is mislocalized in the pursuit direction. Further support for the involvement of an extraretinal signal comes from the finding that the mislocalization during pursuit initiation starts well before eye movement onset (Blanke, Harsch, Knoll, & Bremmer, 2010). However, humans can use both types of signals: a purely retinal signal and a signal corrected for eye movements. Brenner and Cornelissen (2000) flashed either one or two targets during pursuit. In general, observers located the targets at the appropriate positions, with the known bias in the pursuit direction. When two targets were flashed, their relative separation was based on retinal information alone. As the eye movements are then not compensated for, two flashes appearing at the same spatial position are seen with a spatial offset that corresponds to the distance the eye traveled during the interflash interval. 
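The size of this retinally determined offset follows directly from the eye's travel during the interflash interval. The sketch below works through an illustrative example; the specific pursuit speed and interval are our assumptions, not values from Brenner and Cornelissen (2000).

```python
def apparent_separation(eye_speed, interflash_interval):
    """Retinal offset (deg) between two flashes at the same spatial position."""
    return eye_speed * interflash_interval

# At 10 deg/s pursuit with a 100-ms interflash interval:
print(apparent_separation(10.0, 0.1))  # 1.0 deg of apparent separation
```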
It is not only space but also time that is distorted during pursuit (Schütz & Morrone, 2010). Empty time intervals, defined by luminance or color flashes, are perceived as being shorter when they are presented during pursuit than during fixation. This temporal compression does not occur for auditory intervals. 
To summarize, stimuli that are flashed during pursuit tend to be mislocalized in the direction of pursuit and away from the fovea. The mislocalization is presumably caused by the combination of a retinal and an extraretinal signal. However, the visual system shows enormous flexibility in the signals used for localization: When two stimuli are flashed, their perceived separation depends only on the retinal separation, without any compensation for pursuit (Brenner & Cornelissen, 2000), and stationary cues can greatly reduce the amount of mislocalization (Brenner et al., 2001; Noguchi et al., 2007; Rotman et al., 2002). 
Summary
In our review, we focused on three major questions that address the relationship between voluntary eye movements and perception: “Why do we look where we do?”, “Do motion perception and pursuit rely on the same signals?”, and “How does pursuit influence perception?” In the first part, we summarized studies that point to the existence of several interacting control loops for saccadic target selection. We think that a combination of factors determines where we look and that visual salience, object recognition, plans, and value are important contributors to the saccadic planning and target selection processes. The important question for the future is how these factors are combined and how their relative weights might depend on the task requirements. 
Given the necessity of visual motion for the execution of smooth pursuit, a natural question to ask is “Do motion perception and pursuit rely on the same neural signals?” The vast majority of studies show that pursuit and perception often exhibit the same average biases when tested with the same stimuli. This indicates that they follow similar computations, but it does not directly address whether they use the exact same neural circuits. So far, studies measuring the common variability of perception and pursuit have not arrived at a consensus about the commonality of the circuits. For direction discrimination, the relative amount of noise common to pursuit and perception seems to be greater than for speed discrimination. It might seem unsatisfactory to arrive at two different answers for direction and speed, but it is certainly not difficult to imagine that these two quantities are treated differently by the primate motion system. 
In the last section, we reviewed studies measuring the influence of pursuit on the perception of different visual attributes. Studies about contrast sensitivity and motion smear provide evidence that visual perception is actively modulated by smooth pursuit eye movements, presumably to counteract the retinal consequences of pursuit. It is also clearly established that spatial attention is linked to smooth pursuit, but the exact coupling seems to depend on the phase of pursuit, the properties of the pursuit target, and the nature of the attention task. 
Given the strong coupling between motion perception and smooth pursuit eye movements, the execution of the pursuit movement itself might be highly informative for the visual system. The extraretinal signal and visual feedback are used for velocity judgments, heading direction, direction prediction, object coherence, and figure-ground segmentation. Foveation of a moving target for better acuity is certainly a major benefit of pursuit. However, object recognition generally occurs on an extremely rapid time scale (less than 100 ms), whereas pursuit typically lasts much longer, generally several hundred milliseconds or more. Therefore, the information gained about object motion might be an additional major benefit of pursuit. 
We have considered the research questions “Why do we look where we do?” and “How are perception and pursuit related?” separately. At first sight, they seem to separate the field quite nicely into saccadic eye movements, on the one hand, and pursuit eye movements, on the other. However, this distinction is not as clear-cut as it seems when we consider eye movements during natural viewing. When discussing data about visual salience and its contribution to saccadic target selection, we argued that a single object starting to move in the visual periphery is probably the most salient event. During everyday viewing, after making a saccade to this moving object, there is a high probability that we will start to pursue it. During pursuit, when the target changes its speed, or when we need to catch up with it because our pursuit was too slow or because the target suddenly changed its direction, we will execute saccades. Recent work by Lefevre et al. (Blohm, Missal, & Lefevre, 2005; de Brouwer, Yuksel, Blohm, Missal, & Lefevre, 2002; Orban de Xivry, Bennett, Lefevre, & Barnes, 2006; Orban de Xivry & Lefevre, 2007) has shown how pursuit and saccades are intricately interwoven to achieve accurate control of the eyes. Work on the physiology of the superior colliculus (Carello & Krauzlis, 2004; Krauzlis, 2004; Krauzlis, Basso, & Wurtz, 1997; Segraves & Goldberg, 1994) also indicates that pursuit and saccades are interrelated, not the completely independent systems they were long thought to be. 
It seems that both saccades and pursuit eye movements support our visual system in fulfilling its main functions: generating a conscious internal representation of the external world and supporting the guidance of our motor actions and mobility. 
Acknowledgments
We thank Jutta Billino, Lewis Chuang, Michael Dorr, Wolfgang Einhäuser, Mike Hawken, and Dirk Kerzel for helpful comments on an earlier version of this manuscript. We are particularly grateful to Mike Hawken, who also corrected our English. This work was supported by the DFG Forschergruppe 560 “Perception and Action.” 
Commercial relationships: none. 
Corresponding author: Karl R. Gegenfurtner. 
Address: Otto-Behaghel-Str. 10F, Giessen 35394, Germany. 
Footnotes
1  According to Web of Science, 2178 articles with the phrase “eye movement” in the title were published between 2000 and 2010, 57 of which appeared in this journal; articles matching that specification were cited 79,908 times during that period. Amazingly, the Journal of Vision, founded just 10 years ago, has made it into the top 5 journals in terms of published eye movement papers over that time and into the top 3 since 2009.
2  Even these activities can get quite competitive (http://www.youtube.com/watch?v=LyU5v0ZYMjI).
References
Adam R. J. Manohar S. G. (2007). Does reward modulate actions or bias attention? Journal of Neuroscience, 27, 10919–10921. [CrossRef] [PubMed]
Adelson E. H. Movshon J. A. (1982). Phenomenal coherence of moving visual patterns. Nature, 300, 523–525. [CrossRef] [PubMed]
Araujo C. Kowler E. Pavel M. (2001). Eye movements during visual search: The costs of choosing the optimal path. Vision Research, 41, 3613–3625. [CrossRef] [PubMed]
Aubert H. (1886). Die Bewegungsempfindung. Pflügers Archiv, 39, 347–370. [CrossRef]
Baddeley R. J. Tatler B. W. (2006). High frequency edges (but not contrast) predict where we fixate: A Bayesian system identification analysis. Vision Research, 46, 2824–2833. [CrossRef] [PubMed]
Bahill A. T. LaRitz T. (1984). Why can't batters keep their eyes on the ball? American Scientist, 72, 249–253.
Ballard D. H. Hayhoe M. M. Li F. Whitehead S. D. (1992). Hand-eye coordination during sequential tasks. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 337, 331–338. [CrossRef]
Ballard D. H. Hayhoe M. M. Pelz J. B. (1995). Memory representations in natural tasks. Journal of Cognitive Neuroscience, 7, 66–80. [CrossRef] [PubMed]
Ballard D. H. Sprague N. (2005). Modeling the brain's operating system. In Brain, vision, and artificial intelligence, proceedings (vol. 3704, pp. 347–366). Berlin, Germany: Springer-Verlag.
Ballard D. H. Sprague N. (2006). Modeling the brain's operating system using virtual humanoids [proceedings paper]. International Journal of Pattern Recognition and Artificial Intelligence, 20, 797–815. [CrossRef]
Bayerl P. Neumann H. (2007). A fast biologically inspired algorithm for recurrent motion estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29, 246–260. [CrossRef] [PubMed]
Bedell H. E. Lott L. A. (1996). Suppression of motion-produced smear during smooth pursuit eye movements. Current Biology, 6, 1032–1034. [CrossRef] [PubMed]
Bedell H. E. Tong J. Aydin M. (2010). The perception of motion smear during eye and head movements. Vision Research, 50, 2692–2701. [CrossRef] [PubMed]
Berryhill M. E. Chiu T. Hughes H. C. (2006). Smooth pursuit of nonvisual motion. Journal of Neurophysiology, 96, 461–465. [CrossRef] [PubMed]
Betz T. Kietzmann T. C. Wilming N. König P. (2010). Investigating task-dependent top-down effects on overt visual attention. Journal of Vision, 10, (3):15, 1–14, http://www.journalofvision.org/content/10/3/15, doi:10.1167/10.3.15. [PubMed] [Article] [CrossRef] [PubMed]
Beutter B. R. Stone L. S. (1998). Human motion perception and smooth eye movements show similar directional biases for elongated apertures. Vision Research, 38, 1273–1286. [CrossRef] [PubMed]
Beutter B. R. Stone L. S. (2000). Motion coherence affects human perception and pursuit similarly. Visual Neuroscience, 17, 139–153. [CrossRef] [PubMed]
Bindemann M. (2010). Scene and screen center bias early eye movements in scene viewing. Vision Research, 50, 2577–2587. [CrossRef] [PubMed]
Blanke M. Harsch L. Knoll J. Bremmer F. (2010). Spatial perception during pursuit initiation. Vision Research, 50, 2714–2720. [CrossRef] [PubMed]
Block N. (1996). How can we find the neural correlate of consciousness? Trends in Neurosciences, 19, 456–459. [CrossRef] [PubMed]
Blohm G. Missal M. Lefevre P. (2005). Direct evidence for a position input to the smooth pursuit system. Journal of Neurophysiology, 94, 712–721. [CrossRef] [PubMed]
Born R. T. Pack C. C. Ponce C. R. Yi S. (2006). Temporal evolution of 2-dimensional direction signals used to guide eye movements. Journal of Neurophysiology, 95, 284–300. [CrossRef] [PubMed]
Boström K. J. Warzecha A. K. (2010). Open-loop speed discrimination performance of ocular following response and perception. Vision Research, 50, 870–882. [CrossRef] [PubMed]
Braun D. I. Mennie N. Rasche C. Schütz A. C. Hawken M. J. Gegenfurtner K. R. (2008). Smooth pursuit eye movements to isoluminant targets. Journal of Neurophysiology, 100, 1287–1300. [CrossRef] [PubMed]
Braun D. I. Pracejus L. Gegenfurtner K. R. (2006). Motion aftereffect elicits smooth pursuit eye movements. Journal of Vision, 6, (7):1, 671–684, http://www.journalofvision.org/content/6/7/1, doi:10.1167/6.7.1. [PubMed] [Article] [CrossRef] [PubMed]
Braun D. I. Schütz A. C. Gegenfurtner K. R. (2010). Localization of speed differences of context stimuli during fixation and smooth pursuit eye movements. Vision Research, 50, 2740–2749. [CrossRef] [PubMed]
Brenner E. Cornelissen F. W. (2000). Separate simultaneous processing of egocentric and relative positions. Vision Research, 40, 2557–2563. [CrossRef] [PubMed]
Brenner E. Smeets J. B. van den Berg A. V. (2001). Smooth eye movements and spatial localisation. Vision Research, 41, 2253–2259. [CrossRef] [PubMed]
Britten K. H. Shadlen M. N. Newsome W. T. Movshon J. A. (1992). The analysis of visual motion: A comparison of neuronal and psychophysical performance. Journal of Neuroscience, 12, 4745–4765. [PubMed]
Brouwer A. M. Franz V. H. Gegenfurtner K. R. (2009). Differences in fixations between grasping and viewing objects. Journal of Vision, 9, (1):18, 1–24, http://www.journalofvision.org/content/9/1/18, doi:10.1167/9.1.18. [PubMed] [Article] [CrossRef] [PubMed]
Brown B. (1972a). Dynamic visual acuity, eye movements and peripheral acuity for moving targets. Vision Research, 12, 305–321. [CrossRef]
Brown B. (1972b). The effect of target contrast variation on dynamic visual acuity and eye movements. Vision Research, 12, 1213–1224. [CrossRef]
Bruce C. J. Goldberg M. E. (1985). Primate frontal eye fields: I. Single neurons discharging before saccades. Journal of Neurophysiology, 53, 603–635. [PubMed]
Burr D. C. (1980). Motion smear. Nature, 284, 164–165. [CrossRef] [PubMed]
Burr D. C. Morrone M. C. Ross J. (1994). Selective suppression of the magnocellular visual pathway during saccadic eye movements. Nature, 371, 511–513. [CrossRef] [PubMed]
Burr D. C. Ross J. (1982). Contrast sensitivity at high velocities. Vision Research, 22, 479–484. [CrossRef] [PubMed]
Buswell G. T. (1935). How people look at pictures. Chicago: University of Chicago Press.
Butzer F. Ilg U. J. Zanker J. M. (1997). Smooth-pursuit eye movements elicited by first-order and second-order motion. Experimental Brain Research, 115, 61–70. [CrossRef] [PubMed]
Campbell F. W. Wurtz R. H. (1978). Saccadic omission: Why we do not see a grey-out during a saccadic eye movement. Vision Research, 18, 1297–1303. [CrossRef]
Carandini M. Demb J. B. Mante V. Tolhurst D. J. Dan Y. Olshausen B. A. et al. (2005). Do we know what the early visual system does? Journal of Neuroscience, 25, 10577–10597. [CrossRef] [PubMed]
Carandini M. Heeger D. J. (1994). Summation and division by neurons in primate visual cortex. Science, 264, 1333–1336. [CrossRef] [PubMed]
Carandini M. Heeger D. J. Movshon J. A. (1997). Linearity and normalization in simple cells of the macaque primary visual cortex. Journal of Neuroscience, 17, 8621–8644. [PubMed]
Carello C. D. Krauzlis R. J. (2004). Manipulating intent: Evidence for a causal role of the superior colliculus in target selection. Neuron, 43, 575–583. [CrossRef] [PubMed]
Carmi R. Itti L. (2006). Visual causes versus correlates of attentional selection in dynamic scenes. Vision Research, 46, 4333–4345. [CrossRef] [PubMed]
Carpenter R. H. S. (1988). Movements of the eyes. London: Pion.
Castet E. Jeanjean S. Masson G. S. (2002). Motion perception of saccade-induced retinal translation. Proceedings of the National Academy of Sciences of the United States of America, 99, 15159–15163. [CrossRef] [PubMed]
Castet E. Masson G. S. (2000). Motion perception during saccadic eye movements. Nature Neuroscience, 3, 177–183. [CrossRef] [PubMed]
Cerf M. Frady E. P. Koch C. (2009). Faces and text attract gaze independent of the task: Experimental data and computer model. Journal of Vision, 9, (12):10, 1–15, http://www.journalofvision.org/content/9/12/10, doi:10.1167/9.12.10. [PubMed] [Article] [CrossRef] [PubMed]
Churchland A. K. Gardner J. L. Chou I. Priebe N. J. Lisberger S. G. (2003). Directional anisotropies reveal a functional segregation of visual motion processing for perception and action. Neuron, 37, 1001–1011. [CrossRef] [PubMed]
Churchland M. M. Lisberger S. G. (2001). Experimental and computational analysis of monkey smooth pursuit eye movements. Journal of Neurophysiology, 86, 741–759. [PubMed]
Cohen M. R. Kohn A. (2011). Measuring and interpreting neuronal correlations. Nature Neuroscience, 14, 811–819. [CrossRef] [PubMed]
Collewijn H. Kowler E. (2008). The significance of microsaccades for vision and oculomotor control. Journal of Vision, 8, (14):20, 1–21, http://www.journalofvision.org/content/8/14/20, doi:10.1167/8.14.20. [PubMed] [Article] [CrossRef] [PubMed]
Collewijn H. van der Mark F. Jansen T. C. (1975). Precise recording of human eye movements. Vision Research, 15, 447–450. [CrossRef] [PubMed]
Cornsweet T. N. Crane H. D. (1973). Accurate two-dimensional eye tracker using first and fourth Purkinje images. Journal of the Optical Society of America, 63, 921–928. [CrossRef] [PubMed]
Crane H. D. (1994). The Purkinje image eyetracker, image stabilization and related forms of stimulus manipulation. In Kelly D. H. (Ed.), Visual science and engineering (pp. 15–90). New York: Marcel Dekker.
Crouzet S. M. Kirchner H. Thorpe S. J. (2010). Fast saccades toward faces: Face detection in just 100 ms. Journal of Vision, 10, (4):16, 1–17, http://www.journalofvision.org/content/10/4/16, doi:10.1167/10.4.16. [PubMed] [Article] [CrossRef] [PubMed]
Culham J. He S. Dukelow S. Verstraten F. A. (2001). Visual motion and the human brain: What has neuroimaging told us? Acta Psychologica, 107, 69–94. [CrossRef] [PubMed]
de Brouwer S. Yuksel D. Blohm G. Missal M. Lefevre P. (2002). What triggers catch-up saccades during visual tracking? Journal of Neurophysiology, 87, 1646–1650. [PubMed]
De Bruyn B. Orban G. A. (1988). Human velocity and direction discrimination measured with random dot patterns. Vision Research, 28, 1323–1335. [CrossRef] [PubMed]
Dement W. Kleitman N. (1957). Cyclic variations in EEG during sleep and their relation to eye movements, body motility, and dreaming. Electroencephalography and Clinical Neurophysiology, 9, 673–690. [CrossRef] [PubMed]
Deubel H. Schneider W. X. (1996). Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Research, 36, 1827–1837. [CrossRef] [PubMed]
Ditchburn R. W. Ginsborg B. L. (1952). Vision with a stabilized retinal image. Nature, 170, 36–37. [CrossRef] [PubMed]
Dobkins K. R. Stoner G. R. Albright T. D. (1998). Perceptual, oculomotor, and neural responses to moving color plaids. Perception, 27, 681–709. [CrossRef] [PubMed]
Dorr M. Martinetz T. Gegenfurtner K. R. Barth E. (2010). Variability of eye movements when viewing dynamic natural scenes. Journal of Vision, 10, (10):28, 1–17, http://www.journalofvision.org/content/10/10/28, doi:10.1167/10.10.28. [PubMed] [Article] [CrossRef] [PubMed]
Drewes J. Trommershauser J. Gegenfurtner K. R. (2011). Parallel visual search and rapid animal detection in natural scenes. Journal of Vision, 11, (2):20, 1–21, http://www.journalofvision.org/content/11/2/20, doi:10.1167/11.2.20. [PubMed] [Article] [CrossRef] [PubMed]
Dubner R. Zeki S. M. (1971). Response properties and receptive fields of cells in an anatomically defined region of the superior temporal sulcus in the monkey. Brain Research, 35, 528–532. [CrossRef] [PubMed]
Ecker A. S. Berens P. Keliris G. A. Bethge M. Logothetis N. K. Tolias A. S. (2010). Decorrelated neuronal firing in cortical microcircuits. Science, 327, 584–587. [CrossRef] [PubMed]
Einhäuser W. König P. (2003). Does luminance-contrast contribute to a saliency map for overt visual attention? European Journal of Neuroscience, 17, 1089–1097. [CrossRef] [PubMed]
Einhäuser W. Rutishauser U. Koch C. (2008). Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli. Journal of Vision, 8, (2):2, 1–19, http://www.journalofvision.org/content/8/2/2, doi:10.1167/8.2.2. [PubMed] [Article] [CrossRef] [PubMed]
Einhäuser W. Spain M. Perona P. (2008). Objects predict fixations better than early saliency. Journal of Vision, 8, (14):18, 1–26, http://www.journalofvision.org/content/8/14/18, doi:10.1167/8.14.18. [PubMed] [Article] [CrossRef] [PubMed]
Engbert R. Kliegl R. (2003). Microsaccades uncover the orientation of covert attention. Vision Research, 43, 1035–1045. [CrossRef] [PubMed]
Engbert R. Nuthmann A. Richter E. M. Kliegl R. (2005). SWIFT: A dynamical model of saccade generation during reading. Psychological Review, 112, 777–813. [CrossRef] [PubMed]
Engmann S. 't Hart B. M. Sieren T. Onat S. König P. Einhäuser W. (2009). Saliency on a natural scene background: Effects of color and luminance contrast add linearly. Attention, Perception & Psychophysics, 71, 1337–1352. [CrossRef] [PubMed]
Epelboim J. (1998). Gaze and retinal-image-stability in two kinds of sequential looking tasks. Vision Research, 38, 3773–3784. [CrossRef] [PubMed]
Epelboim J. Steinman R. M. Kowler E. Pizlo Z. Erkelens C. J. Collewijn H. (1997). Gaze-shift dynamics in two kinds of sequential looking tasks. Vision Research, 37, 2597–2607. [CrossRef] [PubMed]
Epelboim J. Suppes P. (2001). A model of eye movements and visual working memory during problem solving in geometry. Vision Research, 41, 1561–1574. [CrossRef] [PubMed]
Ferrera V. P. Lisberger S. G. (1995). Attention and target selection for smooth pursuit eye movements. Journal of Neuroscience, 15, 7472–7484. [PubMed]
Festinger L. Sedgwick H. A. Holtzman J. D. (1976). Visual perception during smooth pursuit eye movements. Vision Research, 16, 1377–1386. [CrossRef] [PubMed]
Filehne W. (1922). Über das optische Wahrnehmen von Bewegungen. Zeitschrift für Sinnesphysiologie, 2, 190–203.
Findlay J. M. Gilchrist I. D. (2003). Active vision: The psychology of looking and seeing. Oxford, UK: Oxford University Press.
Fleischl E. V. (1882). Physiologisch-optische Notizen. 2. Mitteilung der Sitzung Wiener Bereich der Akademie der Wissenschaften, 3, 7–25.
Flipse J. P. van der Wildt G. J. Rodenburg M. Keemink C. J. Knol P. G. (1988). Contrast sensitivity for oscillating sine wave gratings during ocular fixation and pursuit. Vision Research, 28, 819–826. [CrossRef] [PubMed]
Freeman T. C. A. Champion R. A. Warren P. A. (2010). A Bayesian model of perceived head-centered velocity during smooth pursuit eye movement. Current Biology, 20, 757–762. [CrossRef] [PubMed]
Frey H. P. Honey C. König P. (2008). What's color got to do with it? The influence of color on visual attention in different categories. Journal of Vision, 8, (14):6, 1–17, http://www.journalofvision.org/content/8/14/6, doi:10.1167/8.14.6. [PubMed] [Article] [CrossRef] [PubMed]
Frey H. P. König P. Einhäuser W. (2007). The role of first- and second-order stimulus features for human overt attention. Perception & Psychophysics, 69, 153–161. [CrossRef] [PubMed]
Fuster J. M. (2004). Upper processing stages of the perception–action cycle. Trends in Cognitive Sciences, 8, 143–145. [CrossRef] [PubMed]
Gegenfurtner K. R. Xing D. Scott B. H. Hawken M. J. (2003). A comparison of pursuit eye movement and perceptual performance in speed discrimination. Journal of Vision, 3, (11):19, 865–876, http://www.journalofvision.org/content/3/11/19, doi:10.1167/3.11.19. [PubMed] [Article] [CrossRef]
Geisler W. S. Perry J. S. Najemnik J. (2006). Visual search: The role of peripheral information measured using gaze-contingent displays. Journal of Vision, 6, (9):1, 858–873, http://www.journalofvision.org/content/6/9/1, doi:10.1167/6.9.1. [PubMed] [Article] [CrossRef] [PubMed]
Glimcher P. W. (2003). Decisions, uncertainty, and the brain: The science of neuroeconomics. Cambridge, MA: MIT Press.
Glimcher P. W. (2009). Choice: Towards a standard back-pocket model. In Glimcher P. W. Camerer C. Poldrack R. A. Fehr E. (Eds.), Neuroeconomics: Decision making and the brain (pp. 501–519). New York: Academic Press.
Glimcher P. W. (2010). Foundations of neuroeconomic analysis. Oxford, UK: Oxford University Press.
Glimcher P. W. Camerer C. Poldrack R. A. Fehr E. (Eds.) (2008). Neuroeconomics: Decision making and the brain. New York: Academic Press.
Goldberg M. E. Bisley J. W. Powell K. D. Gottlieb J. (2006). Saccades, salience and attention: The role of the lateral intraparietal area in visual behavior. Progress in Brain Research, 155, 157–175. [PubMed]
Goodale M. A. Milner A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15, 20–25. [CrossRef] [PubMed]
Gould T. D. (2003). The endophenotype concept in psychiatry: Etymology and strategic intentions. American Journal of Psychiatry, 160, 636–645. [CrossRef] [PubMed]
Hafed Z. M. Krauzlis R. J. (2006). Ongoing eye movements constrain visual perception. Nature Neuroscience, 9, 1449–1457. [CrossRef] [PubMed]
Hasson U. Landesman O. Knappmeyer B. Vallines I. Rubin N. Heeger D. J. (2008). Neurocinematics: The neuroscience of film. Projections, 2, 1–26. [CrossRef]
Hasson U. Nir Y. Levy I. Fuhrmann G. Malach R. (2004). Intersubject synchronization of cortical activity during natural vision. Science, 303, 1634–1640. [CrossRef] [PubMed]
Hasson U. Yang E. Vallines I. Heeger D. J. Rubin N. (2008). A hierarchy of temporal receptive windows in human cortex. Journal of Neuroscience, 28, 2539–2550. [CrossRef] [PubMed]
Hawken M. J. Gegenfurtner K. R. (2001). Pursuit eye movements to second-order motion targets. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 18, 2282–2296. [CrossRef]
Hayhoe M. (2000). Vision using routines: A functional account of vision. Visual Cognition, 7, 43–64. [CrossRef]
Hayhoe M. Ballard D. (2005). Eye movements in natural behavior. Trends in Cognitive Sciences, 9, 188–194. [CrossRef] [PubMed]
Hayhoe M. Mennie N. Sullivan B. Gorgos K. (2005). The role of internal models and prediction in catching balls. Paper presented at the “From Reactive to Anticipatory Cognitive Embodied Systems,” Arlington, VA.
Heinen S. J. Jin Z. Watamaniuk S. N. (2011). Flexibility of foveal attention during ocular pursuit. Journal of Vision, 11, (2):9, 1–12, http://www.journalofvision.org/content/11/2/9, doi:10.1167/11.2.9. [PubMed] [Article] [CrossRef] [PubMed]
Henderson J. M. (2003). Human gaze control during real-world scene perception. Trends in Cognitive Sciences, 7, 498–504. [CrossRef]
Henderson J. M. Brockmole J. R. Castelhano M. S. Mack M. (2007). Visual saliency does not account for eye movements during visual search in real-world scenes. In Gompel R. v. Fischer M. Murry W. Hill R. (Eds.), Eye movements: A window on mind and brain (pp. 537–562). Oxford, UK: Elsevier.
Herst A. N. Epelboim J. Steinman R. M. (2001). Temporal coordination of the human head and eye during a natural sequential tapping task. Vision Research, 41, 3307–3318. [CrossRef] [PubMed]
Hikosaka O. (2007). Basal ganglia mechanisms of reward-oriented eye movement. Annals of the New York Academy of Sciences, 1104, 229–249. [CrossRef] [PubMed]
Hikosaka O. Nakamura K. Nakahara H. (2006). Basal ganglia orient eyes to reward. Journal of Neurophysiology, 95, 567–584. [CrossRef] [PubMed]
Hikosaka O. Takikawa Y. Kawagoe R. (2000). Role of the basal ganglia in the control of purposive saccadic eye movements. Physiological Reviews, 80, 953–978.
Hoffman J. E. Subramaniam B. (1995). The role of visual attention in saccadic eye movements. Perception & Psychophysics, 57, 787–795. [CrossRef] [PubMed]
Huang X. Lisberger S. G. (2009). Noise correlations in cortical area MT and their potential impact on trial-by-trial variation in the direction and speed of smooth-pursuit eye movements. Journal of Neurophysiology, 101, 3012–3030. [CrossRef] [PubMed]
Huey E. B. (1898). Preliminary experiments in the physiology and psychology of reading. American Journal of Psychology, 9, 575–586. [CrossRef]
Ikeda T. Hikosaka O. (2003). Reward-dependent gain and bias of visual responses in primate superior colliculus. Neuron, 39, 693–700. [CrossRef] [PubMed]
Ilg U. J. Churan J. (2004). Motion perception without explicit activity in areas MT and MST. Journal of Neurophysiology, 92, 1512–1523. [CrossRef] [PubMed]
Ilg U. J. Thier P. (2008). The neural basis of smooth pursuit eye movements in the rhesus monkey brain. Brain and Cognition, 68, 229–240. [CrossRef] [PubMed]
Ipata A. E. Gee A. L. Bisley J. W. Goldberg M. E. (2009). Neurons in the lateral intraparietal area create a priority map by the combination of disparate signals. Experimental Brain Research, 192, 479–488. [CrossRef] [PubMed]
Itti L. Braun J. Lee D. K. Koch C. (1998). A model of early visual processing. In Jordan M. I. Kearns M. J. Solla S. A. (Eds.), Advances in neural information processing systems (vol. 10, p. 173). Cambridge, MA: The MIT Press.
Itti L. Koch C. (2000). A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40, 1489–1506. [CrossRef] [PubMed]
Itti L. Koch C. (2001). Computational modelling of visual attention. Nature Reviews Neuroscience, 2, 194–203. [CrossRef] [PubMed]
Itti L. Koch C. Niebur E. (1998). A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20, 1254–1259. [CrossRef]
Jansen L. Onat S. König P. (2009). Influence of disparity on fixation and saccades in free viewing of natural scenes. Journal of Vision, 9, (1):29, 1–19, http://www.journalofvision.org/content/9/1/29, doi:10.1167/9.1.29. [PubMed] [Article] [CrossRef] [PubMed]
Jazayeri M. Movshon J. A. (2007). A new perceptual illusion reveals mechanisms of sensory decoding. Nature, 446, 912–915. [CrossRef] [PubMed]
Johansson R. S. Westling G. Backstrom A. Flanagan J. R. (2001). Eye–hand coordination in object manipulation. Journal of Neuroscience, 21, 6917–6932. [PubMed]
Kelly D. H. (1975). Luminous and chromatic flickering patterns have opposite effects. Science, 188, 371–372. [CrossRef] [PubMed]
Kelly D. H. (1979). Motion and vision. II. Stabilized spatio-temporal threshold surface. Journal of the Optical Society of America, 69, 1340–1349. [CrossRef] [PubMed]
Kelly D. H. (1983). Spatiotemporal variation of chromatic and achromatic contrast thresholds. Journal of the Optical Society of America, 73, 742–750. [CrossRef] [PubMed]
Kerzel D. Aivar M. P. Ziegler N. E. Brenner E. (2006). Mislocalization of flashes during smooth pursuit hardly depends on the lighting conditions. Vision Research, 46, 1145–1154. [CrossRef] [PubMed]
Kerzel D. Born S. Souto D. (2009). Smooth pursuit eye movements and perception share target selection, but only some central resources. Behavioural Brain Research, 201, 66–73. [CrossRef]
Kerzel D. Souto D. Ziegler N. E. (2008). Effects of attention shifts to stationary objects during steady-state smooth pursuit eye movements. Vision Research, 48, 958–969. [CrossRef] [PubMed]
Khan A. Z. Lefevre P. Heinen S. J. Blohm G. (2010). The default allocation of attention is broadly ahead of smooth pursuit. Journal of Vision, 10, (13):7, 1–17, http://www.journalofvision.org/content/10/13/7, doi:10.1167/10.13.7. [PubMed] [Article] [CrossRef] [PubMed]
Khurana B. Kowler E. (1987). Shared attentional control of smooth eye movement and perception. Vision Research, 27, 1603–1618. [CrossRef] [PubMed]
Kienzle W. Franz M. O. Schölkopf B. Wichmann F. A. (2009). Center–surround patterns emerge as optimal predictors for human saccade targets. Journal of Vision, 9, (5):7, 1–15, http://www.journalofvision.org/content/9/5/7, doi:10.1167/9.5.7. [PubMed] [Article] [CrossRef] [PubMed]
Klein C. Ettinger U. (2008). A hundred years of eye movement research in psychiatry. Brain and Cognition, 68, 215–218. [CrossRef] [PubMed]
Koch C. Ullman S. (1985). Shifts in selective visual attention: Towards the underlying neural circuitry. Human Neurobiology, 4, 219–227. [PubMed]
Koene A. R. Zhaoping L. (2007). Feature-specific interactions in salience from combined feature contrasts: Evidence for a bottom-up saliency map in V1. Journal of Vision, 7, (7):6, 1–14, http://www.journalofvision.org/content/7/7/6, doi:10.1167/7.7.6. [PubMed] [Article] [CrossRef] [PubMed]
Königs K. Bremmer F. (2010). Localization of visual and auditory stimuli during smooth pursuit eye movements. Journal of Vision, 10, (8):8, 1–14, http://www.journalofvision.org/content/10/8/8, doi:10.1167/10.8.8. [PubMed] [Article] [CrossRef] [PubMed]
Körding K. P. Wolpert D. M. (2006). Bayesian decision theory in sensorimotor control. Trends in Cognitive Sciences, 10, 319–326. [CrossRef]
Kowler E. (1989). Cognitive expectations, not habits, control anticipatory smooth oculomotor pursuit. Vision Research, 29, 1049–1057. [CrossRef] [PubMed]
Kowler E. (2011). Eye movements: The past 25 years. Vision Research, 51, 1457–1483. [CrossRef] [PubMed]
Kowler E. Anderson E. Dosher B. Blaser E. (1995). The role of attention in the programming of saccades. Vision Research, 35, 1897–1916. [CrossRef] [PubMed]
Kowler E. McKee S. P. (1987). Sensitivity of smooth eye movement to small differences in target velocity. Vision Research, 27, 993–1015. [CrossRef] [PubMed]
Kowler E. Steinman R. M. (1979a). The effect of expectations on slow oculomotor control: I. Periodic target steps. Vision Research, 19, 619–632. [CrossRef]
Kowler E. Steinman R. M. (1979b). The effect of expectations on slow oculomotor control: II. Single target displacements. Vision Research, 19, 633–646. [CrossRef]
Kowler E. Steinman R. M. (1979c). Miniature saccades: Eye movements that do not count. Vision Research, 19, 105–108. [CrossRef]
Krauskopf J. Cornsweet T. N. Riggs L. A. (1960). Analysis of eye movements during monocular and binocular fixation. Journal of the Optical Society of America, 50, 572–578. [CrossRef] [PubMed]
Krauzlis R. J. (2004). Recasting the smooth pursuit eye movement system. Journal of Neurophysiology, 91, 591–603. [CrossRef] [PubMed]
Krauzlis R. J. (2005). The control of voluntary eye movements: New perspectives. Neuroscientist, 11, 124–137. [CrossRef] [PubMed]
Krauzlis R. J. Basso M. A. Wurtz R. H. (1997). Shared motor error for multiple eye movements. Science, 276, 1693–1695. [CrossRef] [PubMed]
Krukowski A. E. Stone L. S. (2005). Expansion of direction space around the cardinal axes revealed by smooth pursuit eye movements. Neuron, 45, 315–323. [CrossRef] [PubMed]
Kusunoki M. Gottlieb J. Goldberg M. E. (2000). The lateral intraparietal area as a salience map: The representation of abrupt onset, stimulus motion, and task relevance. Vision Research, 40, 1459–1468. [CrossRef] [PubMed]
Land M. F. (2006). Eye movements and the control of actions in everyday life. Progress in Retinal and Eye Research, 25, 296–324. [CrossRef] [PubMed]
Land M. F. McLeod P. (2000). From eye movements to actions: How batsmen hit the ball. Nature Neuroscience, 3, 1340–1345. [CrossRef] [PubMed]
Land M. F. Mennie N. Rusted J. (1999). The roles of vision and eye movements in the control of activities of daily living. Perception, 28, 1311–1328. [CrossRef] [PubMed]
Land M. F. Tatler B. W. (2009). Looking and acting: Vision and eye movements in natural behaviour. Oxford, UK: Oxford University Press.
Lau B. Glimcher P. W. (2007). Action and outcome encoding in the primate caudate nucleus. Journal of Neuroscience, 27, 14502–14514. [CrossRef] [PubMed]
Legge G. E. Klitz T. S. Tjan B. S. (1997). Mr Chips: An ideal-observer model of reading. Psychological Review, 104, 524–553. [CrossRef] [PubMed]
Leigh R. J. Kennard C. (2004). Using saccades as a research tool in the clinical neurosciences. Brain, 127, 460–477. [CrossRef] [PubMed]
Leigh R. J. Zee D. S. (1999). The neurology of eye movements. New York: Oxford University Press.
Leon M. I. Shadlen M. N. (1999). Effect of expected reward magnitude on the response of neurons in the dorsolateral prefrontal cortex of the macaque. Neuron, 24, 415–425. [CrossRef] [PubMed]
Li Z. (2002). A saliency map in primary visual cortex. Trends in Cognitive Sciences, 6, 9–16. [CrossRef] [PubMed]
Lisberger S. G. (2010). Visual guidance of smooth-pursuit eye movements: Sensation, action, and what happens in between. Neuron, 66, 477–491. [CrossRef] [PubMed]
Lorenceau J. Shiffrar M. (1992). The influence of terminators on motion integration across space. Vision Research, 32, 263–273. [CrossRef] [PubMed]
Lovejoy L. P. Fowler G. A. Krauzlis R. J. (2009). Spatial allocation of attention during smooth pursuit eye movements. Vision Research, 49, 1275–1285. [CrossRef] [PubMed]
Ludvigh E. Miller J. W. (1958). Study of visual acuity during the ocular pursuit of moving test objects: I. Introduction. Journal of the Optical Society of America, 48, 799–802. [CrossRef] [PubMed]
Madelain L. Krauzlis R. J. (2003). Pursuit of the ineffable: Perceptual and motor reversals during the tracking of apparent motion. Journal of Vision, 3, (11):1, 642–653, http://www.journalofvision.org/content/3/11/1, doi:10.1167/3.11.1. [PubMed] [Article] [CrossRef] [PubMed]
Majaj N. J. Carandini M. Movshon J. A. (2007). Motion integration by neurons in macaque MT is local, not global. Journal of Neuroscience, 27, 366–370. [CrossRef] [PubMed]
Martinez-Conde S. Macknik S. L. Hubel D. H. (2004). The role of fixational eye movements in visual perception. Nature Reviews Neuroscience, 5, 229–240. [CrossRef] [PubMed]
Masson G. S. Rybarczyk Y. Castet E. Mestre D. R. (2000). Temporal dynamics of motion integration for the initiation of tracking eye movements at ultra-short latencies. Visual Neuroscience, 17, 753–767. [CrossRef] [PubMed]
Masson G. S. Stone L. S. (2002). From following edges to pursuing objects. Journal of Neurophysiology, 88, 2869–2873. [CrossRef] [PubMed]
Matsumiya K. Uchikawa K. (2000). Distortion of visual space during pursuit eye movements. Optical Review, 7, 241–248. [CrossRef]
Maunsell J. H. Van Essen D. C. (1983). Functional properties of neurons in middle temporal visual area of the macaque monkey: I. Selectivity for stimulus direction, speed, and orientation. Journal of Neurophysiology, 49, 1127–1147. [PubMed]
Mays L. E. Sparks D. L. (1980). Dissociation of visual and saccade-related responses in superior colliculus neurons. Journal of Neurophysiology, 43, 207–232. [PubMed]
Mazer J. A. Gallant J. L. (2003). Goal-related activity in V4 during free viewing visual search: Evidence for a ventral stream visual salience map. Neuron, 40, 1241–1250. [CrossRef] [PubMed]
Medina J. F. Lisberger S. G. (2007). Variation, signal, and noise in cerebellar sensory-motor processing for smooth-pursuit eye movements. Journal of Neuroscience, 27, 6832–6842. [CrossRef] [PubMed]
Methling D. Wernicke J. (1968). Visual acuity in horizontal tracking movements of the eye. Vision Research, 8, 554–565. [CrossRef] [PubMed]
Milner A. D. Goodale M. A. (2006). The visual brain in action (2nd ed.). Oxford, UK: Oxford University Press.
Milstein D. M. Dorris M. C. (2007). The influence of expected value on saccadic preparation. Journal of Neuroscience, 27, 4810–4818. [CrossRef] [PubMed]
Mitrani L. Dimitrov G. (1982). Retinal location and visual localization during pursuit eye movement. Vision Research, 22, 1047–1051. [CrossRef] [PubMed]
Montagnini A. Spering M. Masson G. S. (2006). Predicting 2D target velocity cannot help 2D motion integration for smooth pursuit initiation. Journal of Neurophysiology, 96, 3545–3550. [CrossRef] [PubMed]
Morvan C. Wexler M. (2009). The nonlinear structure of motion perception during smooth eye movements. Journal of Vision, 9, (7):1, 1–13, http://www.journalofvision.org/content/9/7/1, doi:10.1167/9.7.1. [PubMed] [Article] [CrossRef] [PubMed]
Morvan C. Zhang H. Maloney L. T. (2010). Observers are inconsistent and inaccurate in judging their own visual detection ability at different retinal locations [Abstract]. Journal of Vision, 10, (7):1303, 1303a, http://www.journalofvision.org/content/10/7/1303, doi:10.1167/10.7.1303. [CrossRef]
Mota C. Stuke I. Aach T. Barth E. (2005). Spatial and spectral analysis of occluded motions. Signal Processing—Image Communication, 20, 529–536. [CrossRef]
Movshon J. A. Lisberger S. G. Krauzlis R. J. (1990). Visual cortical signals supporting smooth pursuit eye movements. Cold Spring Harbor Symposia on Quantitative Biology, 55, 707–716. [CrossRef] [PubMed]
Movshon J. A. Newsome W. T. (1992). Neural foundations of visual motion perception. Current Directions in Psychological Science, 1, 35–39. [CrossRef]
Munoz D. P. Everling S. (2004). Look away: The anti-saccade task and the voluntary control of eye movement. Nature Reviews Neuroscience, 5, 218–228. [CrossRef] [PubMed]
Murphy B. J. (1978). Pattern thresholds for moving and stationary gratings during smooth eye movement. Vision Research, 18, 521–530. [CrossRef] [PubMed]
Najemnik J. Geisler W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387–391. [CrossRef] [PubMed]
Najemnik J. Geisler W. S. (2008). Eye movement statistics in humans are consistent with an optimal search strategy. Journal of Vision, 8, (3):4, 1–14, http://www.journalofvision.org/content/8/3/4, doi:10.1167/8.3.4. [PubMed] [Article] [CrossRef] [PubMed]
Navalpakkam V. Koch C. Rangel A. Perona P. (2010). Optimal reward harvesting in complex perceptual environments. Proceedings of the National Academy of Sciences of the United States of America, 107, 5232–5237. [CrossRef] [PubMed]
Newsome W. T. Britten K. H. Salzman C. D. Movshon J. A. (1990). Neuronal mechanisms of motion perception. Cold Spring Harbor Symposia on Quantitative Biology, 55, 697–705. [CrossRef] [PubMed]
Newsome W. T. Pare E. B. (1988). A selective impairment of motion perception following lesions of the middle temporal visual area (MT). Journal of Neuroscience, 8, 2201–2211. [PubMed]
Newsome W. T. Wurtz R. H. Dursteler M. R. Mikami A. (1985). Deficits in visual motion processing following ibotenic acid lesions of the middle temporal visual area of the macaque monkey. Journal of Neuroscience, 5, 825–840. [PubMed]
Niebergall R. Huang L. Martinez-Trujillo J. C. (2010). Similar perceptual costs for dividing attention between retina- and space-centered targets in humans. Journal of Vision, 10, (12):4, 1–14, http://www.journalofvision.org/content/10/12/4, doi:10.1167/10.12.4. [PubMed] [Article] [CrossRef] [PubMed]
Niebur E. Koch C. (1996). Control of selective visual attention: Modeling the “where” pathway. In Touretzky D. S. Mozer M. C. Hasselmo M. E. (Eds.), Advances in Neural Information Processing Systems (vol. 8, pp. 802–808). Cambridge, MA: MIT Press.
Noguchi Y. Shimojo S. Kakigi R. Hoshiyama M. (2007). Spatial contexts can inhibit a mislocalization of visual stimuli during smooth pursuit. Journal of Vision, 7, (13):13, 1–15, http://www.journalofvision.org/content/7/13/13, doi:10.1167/7.13.13. [PubMed] [Article] [CrossRef] [PubMed]
Nothdurft H. (2000). Salience from feature contrast: Additivity across dimensions. Vision Research, 40, 1183–1201. [CrossRef] [PubMed]
Nuthmann A. Henderson J. M. (2010). Object-based attentional selection in scene viewing. Journal of Vision, 10, (8):20, 1–19, http://www.journalofvision.org/content/10/8/20, doi:10.1167/10.8.20. [PubMed] [Article] [CrossRef] [PubMed]
O'Regan J. K. (1992). Solving the “real” mysteries of visual perception: The world as an outside memory. Canadian Journal of Psychology, 46, 461–488. [CrossRef] [PubMed]
Onat S. Libertus K. König P. (2007). Integrating audiovisual information for the control of overt attention. Journal of Vision, 7, (10):11, 1–16, http://www.journalofvision.org/content/7/10/11, doi:10.1167/7.10.11. [PubMed] [Article] [CrossRef] [PubMed]
Orban de Xivry J. J. Bennett S. J. Lefevre P. Barnes G. R. (2006). Evidence for synergy between saccades and smooth pursuit during transient target disappearance. Journal of Neurophysiology, 95, 418–427. [CrossRef] [PubMed]
Orban de Xivry J. J. Coppe S. Lefevre P. Missal M. (2010). Biological motion drives perception and action. Journal of Vision, 10, (2):6, 1–11, http://www.journalofvision.org/content/10/2/6, doi:10.1167/10.2.6. [PubMed] [Article] [CrossRef] [PubMed]
Orban de Xivry J. J. Lefevre P. (2007). Saccades and pursuit: Two outcomes of a single sensorimotor process. The Journal of Physiology, 584, 11–23. [CrossRef] [PubMed]
Orschansky J. (1899). Eine Methode, die Augenbewegungen direct zu untersuchen. Centralblatt für Physiologie, 12, 785–790.
Osborne L. C. Hohl S. S. Bialek W. Lisberger S. G. (2007). Time course of precision in smooth-pursuit eye movements of monkeys. Journal of Neuroscience, 27, 2987–2998. [CrossRef] [PubMed]
Osborne L. C. Lisberger S. G. Bialek W. (2005). A sensory source for motor variation. Nature, 437, 412–416. [CrossRef] [PubMed]
Pack C. C. Born R. T. (2001). Temporal dynamics of a neural solution to the aperture problem in visual area MT of macaque brain. Nature, 409, 1040–1042. [CrossRef] [PubMed]
Peck C. J. Jangraw D. C. Suzuki M. Efem R. Gottlieb J. (2009). Reward modulates attention independently of action value in posterior parietal cortex. Journal of Neuroscience, 29, 11182–11191. [CrossRef] [PubMed]
Peters R. J. Iyer A. Itti L. Koch C. (2005). Components of bottom-up gaze allocation in natural images. Vision Research, 45, 2397–2416. [CrossRef] [PubMed]
Platt M. L. Glimcher P. W. (1999). Neural correlates of decision variables in parietal cortex. Nature, 400, 233–238. [CrossRef] [PubMed]
Priebe N. J. Lisberger S. G. (2004). Estimating target speed from the population response in visual area MT. Journal of Neuroscience, 24, 1907–1916. [CrossRef] [PubMed]
Rasche C. Gegenfurtner K. R. (2009). Precision of speed discrimination and smooth pursuit eye movements. Vision Research, 49, 514–523. [CrossRef] [PubMed]
Rashbass C. (1961). The relationship between saccadic and smooth tracking eye movements. Journal of Physiology, 159, 326–338. [CrossRef] [PubMed]
Rayner K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124, 372–422. [CrossRef] [PubMed]
Renninger L. W. Verghese P. Coughlan J. (2007). Where to look next? Eye movements reduce local uncertainty. Journal of Vision, 7, (3):6, 1–17, http://www.journalofvision.org/content/7/3/6, doi:10.1167/7.3.6. [PubMed] [Article] [CrossRef] [PubMed]
Rensink R. A. (2000). The dynamic representation of scenes. Visual Cognition, 7, 17–42. [CrossRef]
Rensink R. A. (2002). Change detection. Annual Review of Psychology, 53, 245–277. [CrossRef] [PubMed]
Ringach D. L. Hawken M. J. Shapley R. (1996). Binocular eye movements caused by the perception of three-dimensional structure from motion. Vision Research, 36, 1479–1492. [CrossRef] [PubMed]
Robinson D. A. (1963). A method of measuring eye movement using a scleral search coil in a magnetic field. IEEE Transactions on Biomedical Engineering, 10, 137–145. [PubMed]
Robinson D. A. (1965). The mechanics of human smooth pursuit eye movement. The Journal of Physiology, 180, 569–591. [CrossRef] [PubMed]
Robinson D. A. (1972). Eye movements evoked by collicular stimulation in the alert monkey. Vision Research, 12, 1795–1808. [CrossRef] [PubMed]
Robinson D. A. Fuchs A. F. (1969). Eye movements evoked by stimulation of frontal eye fields. Journal of Neurophysiology, 32, 637–648. [PubMed]
Rolfs M. (2009). Microsaccades: Small steps on a long way. Vision Research, 49, 2415–2441. [CrossRef] [PubMed]
Ross J. Morrone M. C. Goldberg M. E. Burr D. C. (2001). Changes in visual perception at the time of saccades. Trends in Neurosciences, 24, 113–121. [CrossRef] [PubMed]
Rothkopf C. A. Ballard D. H. Hayhoe M. M. (2007). Task and context determine where you look. Journal of Vision, 7, (14):16, 1–20, http://www.journalofvision.org/content/7/14/16, doi:10.1167/7.14.16. [PubMed] [Article] [CrossRef] [PubMed]
Rotman G. Brenner E. Smeets J. B. (2002). Spatial but not temporal cueing influences the mislocalisation of a target flashed during smooth pursuit. Perception, 31, 1195–1203. [CrossRef] [PubMed]
Rotman G. Brenner E. Smeets J. B. (2004). Mislocalization of targets flashed during smooth pursuit depends on the change in gaze direction after the flash. Journal of Vision, 4, (7):4, 564–574, http://journalofvision.org/4/7/4, doi:10.1167/4.7.4. [PubMed] [Article] [CrossRef]
Rotman G. Brenner E. Smeets J. B. J. (2005). Flashes are localised as if they were moving with the eyes. Vision Research, 45, 355–364. [CrossRef] [PubMed]
Royden C. S. Banks M. S. Crowell J. A. (1992). The perception of heading during eye movements. Nature, 360, 583–587. [CrossRef] [PubMed]
Salzman C. D. Murasugi C. M. Britten K. H. Newsome W. T. (1992). Microstimulation in visual area MT: Effects on direction discrimination performance. Journal of Neuroscience, 12, 2331–2355. [PubMed]
Schall J. D. Thompson K. G. (1999). Neural selection and control of visually guided eye movements. Annual Review of Neuroscience, 22, 241–259. [CrossRef]
Schoppik D. Nagel K. I. Lisberger S. G. (2008). Cortical mechanisms of smooth eye movements revealed by dynamic covariations of neural and behavioral responses. Neuron, 58, 248–260. [CrossRef] [PubMed]
Schultz W. (2000). Multiple reward signals in the brain. Nature Reviews Neuroscience, 1, 199–207. [CrossRef] [PubMed]
Schultz W. Dayan P. Montague P. R. (1997). A neural substrate of prediction and reward. Science, 275, 1593–1599. [CrossRef] [PubMed]
Schultz W. Tremblay L. Hollerman J. R. (2003). Changes in behavior-related neuronal activity in the striatum during learning. Trends in Neurosciences, 26, 321–328. [CrossRef] [PubMed]
Schumann F. Einhäuser-Treyer W. Vockeroth J. Bartl K. Schneider E. König P. (2008). Salient features in gaze-aligned recordings of human visual input during free exploration of natural environments. Journal of Vision, 8, (14):12, 1–17, http://www.journalofvision.org/content/8/14/12, doi:10.1167/8.14.12. [PubMed] [Article] [CrossRef] [PubMed]
Schütz A. C. Braun D. I. Gegenfurtner K. R. (2007). Contrast sensitivity during the initiation of smooth pursuit eye movements. Vision Research, 47, 2767–2777. [CrossRef] [PubMed]
Schütz A. C. Braun D. I. Gegenfurtner K. R. (2009). Object recognition during foveating eye movements. Vision Research, 49, 2241–2253. [CrossRef] [PubMed]
Schütz A. C. Braun D. I. Kerzel D. Gegenfurtner K. R. (2008). Improved visual sensitivity during smooth pursuit eye movements. Nature Neuroscience, 11, 1211–1216. [CrossRef] [PubMed]
Schütz A. C. Braun D. I. Movshon J. A. Gegenfurtner K. R. (2010). Does the noise matter? Effects of different kinematogram types on smooth pursuit eye movements and perception. Journal of Vision, 10, (13):26, 1–22, http://www.journalofvision.org/content/10/13/26, doi:10.1167/10.13.26. [PubMed] [Article] [CrossRef] [PubMed]
Schütz A. C. Delipetkos E. Braun D. I. Kerzel D. Gegenfurtner K. R. (2007). Temporal contrast sensitivity during smooth pursuit eye movements. Journal of Vision, 7, (13):3, 1–15, http://www.journalofvision.org/content/7/13/3, doi:10.1167/7.13.3. [PubMed] [Article] [CrossRef] [PubMed]
Schütz A. C. Gegenfurtner K. R. (2010). Dynamic integration of saliency and reward information for saccadic eye movements [Abstract]. Journal of Vision, 10, (7):551, 551a, http://www.journalofvision.org/content/10/7/551, doi:10.1167/10.7.551. [CrossRef]
Schütz A. C. Morrone M. C. (2010). Compression of time during smooth pursuit eye movements. Vision Research, 50, 2702–2713. [CrossRef] [PubMed]
Schwartz J. D. Lisberger S. G. (1994). Initial tracking conditions modulate the gain of visuo-motor transmission for smooth pursuit eye movements in monkeys. Visual Neuroscience, 11, 411–424. [CrossRef] [PubMed]
Segraves M. A. Goldberg M. E. (1994). Effect of stimulus position and velocity upon the maintenance of smooth pursuit eye velocity. Vision Research, 34, 2477–2482. [CrossRef] [PubMed]
Shapiro F. (1989). Eye movement desensitization: A new treatment for post-traumatic stress disorder. Journal of Behavior Therapy and Experimental Psychiatry, 20, 211–217. [CrossRef] [PubMed]
Sohn J. W. Lee D. (2006). Effects of reward expectancy on sequential eye movements in monkeys. Neural Networks, 19, 1181–1191. [CrossRef] [PubMed]
Sommer M. A. Wurtz R. H. (2008). Brain circuits for the internal monitoring of movements. Annual Review of Neuroscience, 31, 317–338. [CrossRef]
Souman J. L. Hooge I. T. Wertheim A. H. (2005). Perceived motion direction during smooth pursuit eye movements. Experimental Brain Research, 164, 376–386. [CrossRef] [PubMed]
Souto D. Kerzel D. (2008). Dynamics of attention during the initiation of smooth pursuit eye movements. Journal of Vision, 8, (14):3, 1–16, http://www.journalofvision.org/content/8/14/3, doi:10.1167/8.14.3. [PubMed] [Article] [CrossRef] [PubMed]
Spering M. Gegenfurtner K. R. (2007). Contrast and assimilation in motion perception and smooth pursuit eye movements. Journal of Neurophysiology, 98, 1355–1363. [CrossRef] [PubMed]
Spering M. Gegenfurtner K. R. (2008). Contextual effects on motion perception and smooth pursuit eye movements. Brain Research, 1225, 76–85. [CrossRef] [PubMed]
Spering M. Kerzel D. Braun D. I. Hawken M. J. Gegenfurtner K. R. (2005). Effects of contrast on smooth pursuit eye movements. Journal of Vision, 5, (5):6, 455–465, http://www.journalofvision.org/content/5/5/6, doi:10.1167/5.5.6. [PubMed] [Article] [CrossRef]
Spering M. Montagnini A. (2011). Do we track what we see? Common versus independent processing for motion perception and smooth pursuit eye movements: A review. Vision Research, 51, 836–852. [CrossRef] [PubMed]
Spering M. Schütz A. C. Braun D. I. Gegenfurtner K. R. (2011). Keep your eyes on the ball: Smooth pursuit eye movements enhance prediction of visual motion. Journal of Neurophysiology, 105, 1756–1767. [CrossRef] [PubMed]
Sprague N. Ballard D. Robinson A. (2007). Modeling embodied visual behaviors. ACM Transactions on Applied Perception, 4, 1–23. [CrossRef]
Steinbach M. J. (1976). Pursuing the perceptual rather than the retinal stimulus. Vision Research, 16, 1371–1376. [CrossRef] [PubMed]
Stone L. S. Beutter B. R. Lorenceau J. (2000). Visual motion integration for perception and pursuit. Perception, 29, 771–787. [CrossRef] [PubMed]
Stone L. S. Krauzlis R. J. (2003). Shared motion signals for human perceptual decisions and oculomotor actions. Journal of Vision, 3, (11):7, 725–736, http://www.journalofvision.org/content/3/11/7, doi:10.1167/3.11.7. [PubMed] [Article] [CrossRef]
Stritzke M. Trommershauser J. Gegenfurtner K. R. (2009). Effects of salience and reward information during saccadic decisions under risk. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 26, B1–B13. [CrossRef] [PubMed]
Sugrue L. P. Corrado G. S. Newsome W. T. (2004). Matching behavior and the representation of value in the parietal cortex. Science, 304, 1782–1787. [CrossRef] [PubMed]
Sunaert S. Van Hecke P. Marchal G. Orban G. A. (1999). Motion-responsive regions of the human brain. Experimental Brain Research, 127, 355–370. [CrossRef] [PubMed]
Tailby C. Majaj N. J. Movshon J. A. (2010). Binocular integration of pattern motion signals by MT neurons and by human observers. Journal of Neuroscience, 30, 7344–7349. [CrossRef] [PubMed]
Tanaka M. Lisberger S. G. (2001). Regulation of the gain of visually guided smooth-pursuit eye movements by frontal cortex. Nature, 409, 191–194. [CrossRef] [PubMed]
Tatler B. W. (2009). Current understanding of eye guidance. Visual Cognition, 17, 777–789. [CrossRef]
Tatler B. W. Vincent B. T. (2009). The prominence of behavioural biases in eye guidance. Visual Cognition, 17, 1029–1054. [CrossRef]
Tavassoli A. Ringach D. L. (2009). Dynamics of smooth pursuit maintenance. Journal of Neurophysiology, 102, 110–118. [CrossRef] [PubMed]
Tavassoli A. Ringach D. L. (2010). When your eyes see more than you do. Current Biology, 20, R93–R94. [CrossRef] [PubMed]
Terao M. Watanabe J. Yagi A. Nishida S. (2010). Smooth pursuit eye movements improve temporal resolution for color perception. PLoS ONE, 5, e11214.
't Hart B. M. Vockeroth J. Schumann F. Bartl K. Schneider E. König P. et al. (2009). Gaze allocation in natural stimuli: Comparing free exploration to head-fixed viewing conditions. Visual Cognition, 17, 1132–1158. [CrossRef]
Thier P. Ilg U. J. (2005). The neural basis of smooth-pursuit eye movements. Current Opinion in Neurobiology, 15, 645–652. [CrossRef] [PubMed]
Thompson K. G. Bichot N. P. (2005). A visual salience map in the primate frontal eye field. Progress in Brain Research, 147, 251–262. [PubMed]
Thompson P. (1982). Perceived rate of movement depends on contrast. Vision Research, 22, 377–380. [CrossRef] [PubMed]
Thorpe S. J. Fize D. Marlot C. (1996). Speed of processing in the human visual system. Nature, 381, 520–522. [CrossRef] [PubMed]
Tong J. Aydin M. Bedell H. E. (2007). Direction and extent of perceived motion smear during pursuit eye movement. Vision Research, 47, 1011–1019. [CrossRef] [PubMed]
Tong J. Ramamurthy M. Patel S. S. Vu-Yu L. P. Bedell H. E. (2009). The temporal impulse response function during smooth pursuit. Vision Research, 49, 2835–2842. [CrossRef] [PubMed]
Tong J. Stevenson S. B. Bedell H. E. (2008). Signals of eye-muscle proprioception modulate perceived motion smear. Journal of Vision, 8, (14):7, 1–6, http://www.journalofvision.org/content/8/14/7, doi:10.1167/8.14.7. [PubMed] [Article] [CrossRef] [PubMed]
Torralba A. Oliva A. (2003). Statistics of natural image categories. Network, 14, 391–412. [CrossRef] [PubMed]
Treisman A. M. Gelade G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136. [CrossRef] [PubMed]
Trommershäuser J. Glimcher P. W. Gegenfurtner K. R. (2009). Visual processing, learning and feedback in the primate eye movement system. Trends in Neurosciences, 32, 583–590. [CrossRef] [PubMed]
Trommershäuser J. Maloney L. T. Landy M. S. (2003). Statistical decision theory and the selection of rapid, goal-directed movements. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 20, 1419–1433. [CrossRef] [PubMed]
Trommershäuser J. Maloney L. T. Landy M. S. (2008). Decision making, movement planning and statistical decision theory. Trends in Cognitive Sciences, 12, 291–297. [CrossRef] [PubMed]
Tseng P. H. Carmi R. Cameron I. G. Munoz D. P. Itti L. (2009). Quantifying center bias of observers in free viewing of dynamic natural scenes. Journal of Vision, 9, (7):4, 1–16, http://www.journalofvision.org/content/9/7/4, doi:10.1167/9.7.4. [PubMed] [Article] [CrossRef] [PubMed]
van Beers R. J. Wolpert D. M. Haggard P. (2001). Sensorimotor integration compensates for visual localization errors during smooth pursuit eye movements. Journal of Neurophysiology, 85, 1914–1922. [PubMed]
Van der Stigchel S. (2010). Recent advances in the study of saccade trajectory deviations. Vision Research, 50, 1619–1627. [CrossRef] [PubMed]
Van Donkelaar P. Drew A. S. (2002). The allocation of attention during smooth pursuit eye movements. Progress in Brain Research, 140, 267–277. [PubMed]
Verghese P. (2010). Active search for multiple targets is inefficient [Abstract]. Journal of Vision, 10, (7):1296, 1296a, http://www.journalofvision.org/content/10/7/1296, doi:10.1167/10.7.1296. [CrossRef]
Volkmann F. C. (1962). Vision during voluntary saccadic eye movements. Journal of the Optical Society of America, 52, 571–578. [CrossRef] [PubMed]
Wade N. J. Tatler B. W. (2005). The moving tablet of the eye. Oxford, UK: Oxford University Press.
Wallace J. M. Stone L. S. Masson G. S. (2005). Object motion computation for the initiation of smooth pursuit eye movements in humans. Journal of Neurophysiology, 93, 2279–2293. [CrossRef] [PubMed]
Walther D. Koch C. (2006). Modeling attention to salient proto-objects. Neural Networks, 19, 1395–1407. [CrossRef] [PubMed]
Wang X. Zhang M. Cohen I. S. Goldberg M. E. (2007). The proprioceptive representation of eye position in monkey primary somatosensory cortex. Nature Neuroscience, 10, 640–646. [CrossRef] [PubMed]
Watamaniuk S. N. Heinen S. J. (2007). Storage of an oculomotor motion aftereffect. Vision Research, 47, 466–473. [CrossRef] [PubMed]
Westheimer G. McKee S. P. (1975). Visual acuity in the presence of retinal-image motion. Journal of the Optical Society of America, 65, 847–850. [CrossRef] [PubMed]
Wichmann F. A. Drewes J. Rosas P. Gegenfurtner K. R. (2010). Animal detection in natural scenes: Critical features revisited. Journal of Vision, 10, (4):6, 1–27, http://www.journalofvision.org/content/10/4/6, doi:10.1167/10.4.6. [PubMed] [Article] [CrossRef] [PubMed]
Wurtz R. H. (2008). Neuronal mechanisms of visual stability. Vision Research, 48, 2070–2089. [CrossRef] [PubMed]
Wurtz R. H. Goldberg M. E. (1972). Activity of superior colliculus in behaving monkey. 3. Cells discharging before eye movements. Journal of Neurophysiology, 35, 575–586. [PubMed]
Wyatt H. J. Pola J. (1979). The role of perceived motion in smooth pursuit eye movements. Vision Research, 19, 613–618. [CrossRef] [PubMed]
Xu-Wilson M. Zee D. S. Shadmehr R. (2009). The intrinsic value of visual information affects saccade velocities. Experimental Brain Research, 196, 475–481. [CrossRef] [PubMed]
Yang J. Lisberger S. G. (2009). Relationship between adapted neural population responses in MT and motion adaptation in speed and direction of smooth-pursuit eye movements. Journal of Neurophysiology, 101, 2693–2707. [CrossRef] [PubMed]
Yarbus A. L. (1967). Eye movements and vision. New York: Plenum.
Yasui S. Young L. R. (1975). Perceived visual motion as effective stimulus to pursuit eye movement system. Science, 190, 906–908. [CrossRef] [PubMed]
Zhao Q. Koch C. (2011). Learning a saliency map using fixated locations in natural scenes. Journal of Vision, 11, (3):9, 1–15, http://www.journalofvision.org/content/11/3/9, doi:10.1167/11.3.9. [PubMed] [Article] [CrossRef] [PubMed]
Zohary E. Shadlen M. N. Newsome W. T. (1994). Correlated neuronal discharge rate and its implications for psychophysical performance. Nature, 370, 140–143. [CrossRef] [PubMed]
Figure 1
 
Framework for the control of saccadic eye movements. There are several interacting layers of control that influence saccadic target selection. Figure modified after Fuster (2004).
Figure 2
 
Difference between fixated and non-fixated image patches. (a) Dots represent fixation locations from eye movements of 14 observers. The patches on the right display the areas around all fixated locations. (b) Dots represent fixation locations from another scene (inset). These fixation locations are used to obtain non-fixated image patches (right). The contrast of the fixated image patches seems higher than that of the non-fixated patches, but there are no obvious structural differences. This indicates that high contrast attracts eye movements. Figure reproduced from Kienzle et al. (2009).
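The patch comparison underlying this figure can be made concrete with a few lines of analysis code. The following Python sketch assumes grayscale images as NumPy arrays and fixation coordinates in pixels; the patch size and the use of RMS contrast are illustrative assumptions on our part, not necessarily the exact parameters of Kienzle et al. (2009).

import numpy as np

def rms_contrast(patch):
    # Root-mean-square contrast: standard deviation of luminance
    # divided by mean luminance.
    return patch.std() / patch.mean()

def patch_contrasts(image, fixations, half=24):
    # Cut a square patch around each fixation and compute its contrast.
    contrasts = []
    for x, y in fixations:
        x, y = int(round(x)), int(round(y))
        patch = image[y - half:y + half, x - half:x + half]
        if patch.shape == (2 * half, 2 * half):  # skip fixations at the border
            contrasts.append(rms_contrast(patch))
    return np.array(contrasts)

# Fixated patches use the image's own fixations; control patches reuse
# fixations recorded on a different scene, as in (b):
# fixated = patch_contrasts(image, fixations_this_scene)
# control = patch_contrasts(image, fixations_other_scene)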
Figure 3
 
Scan path coherence for three different movies. Scan path coherence is a measure of agreement between the scan paths of different observers, with high values representing high agreement. In the Ducks_boat movie (red), a duck flies in front of a natural scene (from 5 to 10 s and from 11 to 13 s). In the Roundabout movie (black), several small moving objects are distributed across the whole scene and coherence is low. Much higher coherence is found for the War of the Worlds movie (blue, dashed), a Hollywood movie trailer. The black horizontal line represents the average across all natural movies. Scan paths in natural scenes agree closely only when a single moving object appears. Figure reproduced from Dorr et al. (2010).
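Dorr et al. (2010) derived their coherence measure from how tightly gaze positions cluster across observers at each point in time. As a minimal sketch of this idea (a simple leave-one-out distance measure, not the authors' published metric), coherence can be computed as follows, assuming gaze samples for all observers at common time points.

import numpy as np

def scanpath_coherence(gaze):
    # gaze: array of shape (n_observers, n_timepoints, 2), x/y in pixels.
    # For each observer, measure the distance of their gaze to the mean
    # gaze of all other observers; map mean distance to a score in (0, 1].
    n_obs = gaze.shape[0]
    dists = []
    for i in range(n_obs):
        others = np.delete(gaze, i, axis=0).mean(axis=0)  # (n_time, 2)
        dists.append(np.linalg.norm(gaze[i] - others, axis=1))
    mean_dist = np.mean(dists, axis=0)  # one value per time point
    return 1.0 / (1.0 + mean_dist)      # high values = high agreement

# Example with simulated gaze for 14 observers and 300 time samples:
# rng = np.random.default_rng(0)
# gaze = rng.normal(loc=(512, 384), scale=50.0, size=(14, 300, 2))
# print(scanpath_coherence(gaze).mean())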
Figure 4
 
Scan path of a person making a peanut butter and jelly sandwich. The yellow circles represent fixation locations, with circle size proportional to fixation duration. The red lines connect consecutive fixations. Task-relevant objects are fixated almost exclusively. Figure reproduced from Hayhoe and Ballard (2005).
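The plotting convention of this figure (circle area scaled by fixation duration, lines for the intervening saccades) is straightforward to reproduce; a minimal matplotlib sketch, assuming fixations are given as (x, y, duration in ms) tuples in temporal order:

import matplotlib.pyplot as plt

def plot_scanpath(fixations):
    # fixations: list of (x, y, duration_ms) tuples in temporal order.
    xs, ys, durs = zip(*fixations)
    plt.plot(xs, ys, color="red", linewidth=1)          # saccade lines
    plt.scatter(xs, ys, s=durs, facecolor="yellow",
                edgecolor="black", zorder=3)            # area ~ duration
    plt.gca().invert_yaxis()                            # image coordinates

# plot_scanpath([(120, 80, 250), (300, 90, 400), (310, 220, 180)])
# plt.show()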
Figure 5
 
Temporal dynamics of the solution of the aperture problem. A bar was either orthogonal (red) or tilted (blue and green) relative to its motion direction. Smooth pursuit eye movements and neural responses in area MT were measured. (a) Eye velocity perpendicular to the target motion. (b) Eye velocity parallel to the target motion. (c) The preferred-direction responses of 60 MT neurons show a continuous transition from orientation-dependent to motion-dependent responses, which evolves over about 60 ms and is complete at about 140 ms. Figure modified from Pack and Born (2001).
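Panels (a) and (b) rest on decomposing the measured eye velocity into components perpendicular and parallel to the target motion, a simple vector projection. A sketch of that decomposition (our notation for illustration, not code from Pack and Born):

import numpy as np

def velocity_components(eye_vel, target_dir_deg):
    # eye_vel: array (n_samples, 2) of horizontal/vertical eye velocity.
    # target_dir_deg: direction of target motion in degrees.
    theta = np.deg2rad(target_dir_deg)
    u_par = np.array([np.cos(theta), np.sin(theta)])    # along target motion
    u_perp = np.array([-np.sin(theta), np.cos(theta)])  # orthogonal to it
    return eye_vel @ u_par, eye_vel @ u_perp            # panels (b) and (a)

# For a target moving rightward (0 deg) and eye velocity (8, 2) deg/s:
# velocity_components(np.array([[8.0, 2.0]]), 0)
# -> parallel component 8.0 deg/s, perpendicular component 2.0 deg/s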
Figure 6
 
Weber fractions (discrimination threshold divided by target velocity) for pursuit (red) and perception (blue) as a function of target velocity. Data are redrawn from Kowler and McKee (1987).
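For clarity, the Weber fraction plotted here is simply the just-discriminable velocity difference divided by the target velocity; the numbers below are hypothetical, chosen only to illustrate the computation, and are not the data of Kowler and McKee (1987).

import numpy as np

target_speed = np.array([1.0, 5.0, 10.0, 20.0])   # deg/s (hypothetical)
threshold = np.array([0.08, 0.35, 0.60, 1.20])    # just-discriminable delta-v
weber = threshold / target_speed                  # dimensionless fraction
print(np.round(weber, 3))                         # [0.08 0.07 0.06 0.06]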
Figure 7
 
Chromatic contrast sensitivity. (a) Contrast sensitivity functions for one subject during pursuit (blue) and fixation (red). (b) Contrast sensitivity during pursuit and fixation for 11 subjects. The filled square represents the mean across subjects; the error bar shows the 95% confidence interval of the difference between pursuit and fixation. (c) Chromatic detection rate during pursuit initiation for one subject. Detection rate (blue) and eye velocity (green) are aligned to pursuit onset. The increase in detection rate (black circle) starts about 50 ms before pursuit onset. Data are redrawn from Schütz et al. (2008).
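Contrast sensitivity in panels (a) and (b) is the reciprocal of the contrast detection threshold, and the error bar in (b) is a confidence interval on the paired pursuit-minus-fixation difference. A minimal sketch of that computation, with hypothetical per-subject sensitivities rather than the data of Schütz et al. (2008):

import numpy as np
from scipy import stats

def paired_ci(pursuit, fixation, level=0.95):
    # t-based confidence interval for the paired difference in
    # contrast sensitivity (sensitivity = 1 / contrast threshold).
    diff = np.asarray(pursuit) - np.asarray(fixation)
    n = diff.size
    sem = diff.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(0.5 + level / 2.0, df=n - 1)
    return diff.mean() - t_crit * sem, diff.mean() + t_crit * sem

# Hypothetical sensitivities for 11 subjects:
# pursuit = [62, 58, 71, 66, 54, 60, 69, 63, 57, 65, 59]
# fixation = [55, 52, 66, 60, 50, 57, 61, 58, 53, 60, 54]
# print(paired_ci(pursuit, fixation))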