There is a large literature concerning the ability of the visual system to compensate for the retinal effects of smooth pursuit eye movement. This literature emphasizes perceptual errors that arise when the visual system estimates location, speed, and direction during tracking eye movements, as well as more complex judgments such as depth and heading. Examples include the misperception of flashed locations during pursuit (Brenner, Smeets, & Van den Berg, 2001; Mitrani, Dimitrov, Yakimoff, & Mateef, 1979), the changes in perceived speed that occur when moving objects are tracked (Dichgans, Wist, Diener, & Brandt, 1975; Sumnall, Freeman, & Snowden, 2003), the illusory motion of stationary backgrounds over which the eye movement is made (Freeman & Sumnall, 2002; Mack & Herman, 1978), and the misperception of object direction when a separate target is pursued (Hansen, 1979; Souman, Hooge, & Wertheim, 2005). One general account of these effects starts with the idea that estimates of retinal position or motion are added to estimates of eye position or velocity to transform images into a head-centered frame (Freeman, 2001; Haarmeier, Bunjes, Lindner, Berret, & Thier, 2001; Rotman, Brenner, & Smeets, 2005; Souman & Freeman, 2008; Souman, Hooge, & Wertheim, 2006; Turano & Massof, 2001; Wertheim, 1994). The mistakes exhibited by observers then arise either from different errors associated with the retinal and eye-based inputs or from the combination stage itself. An important implication of this general account is that relatively early in the processing pathway there exist mechanisms tuned to position and velocity in a head-centered frame. Unfortunately, the psychophysical evidence for this is somewhat indirect, being largely based on measures of bias (e.g., matching and nulling).
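To make the combination step concrete, the following is a minimal sketch of a linear version of this account; the gain symbols $r$ and $e$ are introduced here purely for illustration and are not drawn from any one of the models cited above. An estimate of head-centered velocity $\hat{H}$ is formed by summing transduced retinal and eye-velocity signals,
\[
\hat{H} \;=\; \hat{R} + \hat{E}, \qquad \hat{R} = rR, \qquad \hat{E} = eE,
\]
where $R$ and $E$ denote retinal and eye velocity. For a stationary background viewed during pursuit at eye velocity $E$ (so that $R = -E$), this gives
\[
\hat{H} \;=\; (e - r)\,E,
\]
which is nonzero whenever the two gains differ, offering one way to accommodate the illusory background motion noted above; errors in speed or direction judgments can likewise be traced either to the individual gains or to the summation itself.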