Research Article  |   July 2009
The nonlinear structure of motion perception during smooth eye movements
Camille Morvan, Mark Wexler
Journal of Vision July 2009, Vol.9, 1. doi:https://doi.org/10.1167/9.7.1
      Camille Morvan, Mark Wexler; The nonlinear structure of motion perception during smooth eye movements. Journal of Vision 2009;9(7):1. https://doi.org/10.1167/9.7.1.

Abstract

To perceive object motion when the eyes themselves undergo smooth movement, we can either perceive motion directly—by extracting motion relative to a background presumed to be fixed—or through compensation, by correcting retinal motion by information about eye movement. To isolate compensation, we created stimuli in which, while the eye undergoes smooth movement due to inertia, only one object is visible—and the motion of this stimulus is decoupled from that of the eye. Using a wide variety of stimulus speeds and directions, we rule out a linear model of compensation, in which stimulus velocity is estimated as a linear combination of retinal and eye velocities multiplied by a constant gain. In fact, we find that when the stimulus moves in the same direction as the eyes, there is little compensation, but when movement is in the opposite direction, compensation grows in a nonlinear way with speed. We conclude that eye movement is estimated from a combination of extraretinal and retinal signals, the latter based on an assumption of stimulus stationarity. Two simple models, in which the direction of eye movement is computed from the extraretinal signal and the speed from the retinal signal, account well for our results.

Introduction
During smooth pursuit, a stationary background projected on the retina undergoes a shift in the direction opposite to that of the eyes. Despite this pursuit-induced modification of the retinal image, we usually perceive a stable world. For instance, when pursuing a car that moves in front of stationary buildings, an observer generally perceives the car as moving and the buildings as stationary, despite the opposite retinal information. The problem faced by the visual system during pursuit is schematized in Figure 1: if a stimulus moves in the world with velocity v, and the eyes are engaged in smooth motion with velocity e, the retinal projection of the stimulus moves with the vector difference:
r = v − e.
(1)
Hence, the direction of stimulus motion in the world and its direction on the retina are different, and the position and velocity of the stimulus in space must be computed from information on its retinal projection and, possibly, some information about eye movement. 
Figure 1
(A) Directions of the stimulus in the world (v, red) and corresponding retinal image (r, blue) for a given eye movement (e, green). (B) Representation of perfect compensation: the visual system adds the eye velocity to the retinal velocity (cyan vector) and recovers the real direction of the stimulus on screen: the perceived direction (black vector) equals the real direction on the screen. (C) Consequences of undercompensation: if the visual system underestimates the eye velocity, the perceived direction lies between the real direction on screen and the direction on the retina.
Perceiving the real direction of objects during pursuit requires the existence of active brain mechanisms. Two main hypotheses have been put forward to explain veridical motion perception during pursuit: a direct and an indirect theory. Gibson (1966), with the direct theory, proposed that the physical motion of objects can simply be derived from the optic flow: the visual system would subtract out any uniform motion in the flow field, perceiving only relative motion between stimulus and background. This direct, image-based compensation has also been used to explain spatial constancy in the context of micro eye movements (Murakami & Cavanagh, 1998). The indirect theory, put forward by von Helmholtz (1867), has been more influential: it assumes that the visual system has some information about eye movement derived from a copy of the motor command sent to the eye muscles. Spatial stability would involve combining the retinal signal with this extraretinal eye velocity estimate (see also Sperry, 1950; von Holst & Mittelstaedt, 1950). In the domain of motion perception, in geometrical terms, this would consist of adding a vector, parallel to the eye velocity, to the retinal stimulus velocity vector. If the eye velocity were estimated correctly, as shown in Figure 1B, the perceived direction would be the real direction of the stimulus in the world. On the other hand, if the eye velocity were underestimated, as concluded by several studies (Aubert, 1887; Blohm, Missal, & Lefèvre, 2005; Filehne, 1922; Fleischl, 1882; Mack & Herman, 1973), the perceived direction would lie between the retinal direction and the direction in space, as shown in Figure 1C. If eye velocity is under- or overestimated by a constant factor, κ, we obtain the following model:
p = r + κe,
(2)
which is commonly assumed to describe the perception of motion during smooth eye movements. This model is sometimes called linear in the literature, but we will call it the zeroth-order model because κ is a constant. 
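To make the geometry concrete, here is a minimal numeric sketch of Equations 1 and 2 (our illustration, not part of the original study; the velocity values are hypothetical but of the same order as those in the experiment):

```python
import numpy as np

def retinal_velocity(v, e):
    """Equation 1: retinal velocity = world velocity minus eye velocity."""
    return np.asarray(v, float) - np.asarray(e, float)

def perceived_velocity(r, e, kappa):
    """Equation 2 (zeroth-order model): retinal velocity plus a constant
    fraction kappa of the eye velocity."""
    return np.asarray(r, float) + kappa * np.asarray(e, float)

e = np.array([12.7, 0.0])    # eye velocity (deg/s), hypothetical
v = np.array([-20.0, 40.0])  # stimulus velocity in the world (deg/s), hypothetical
r = retinal_velocity(v, e)   # motion of the stimulus on the retina

for kappa in (0.0, 0.5, 1.0):  # no, partial, and perfect compensation
    p = perceived_velocity(r, e, kappa)
    print(kappa, np.degrees(np.arctan2(p[1], p[0])))
# kappa = 1 recovers the direction in the world; kappa = 0 leaves the
# retinal direction; intermediate kappa lies between the two (Figure 1C).
```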
In the original indirect theory, the eye velocity estimate is derived only from extraretinal sources. However, as Brenner and van den Berg (1994) pointed out, relative motion between the pursuit target and the background could also be used to estimate the eye velocity. Such a relative-motion-based estimate would be correct only if a stationary object—stationary in the world—is taken as a reference. In natural conditions, the background, during pursuit, is often composed of large objects that are stationary, and, as indicated by the Duncker illusion, the visual system in fact tends to perceive large objects as stationary. In this illusion, a small stationary dot surrounded by a moving frame will be erroneously perceived as moving in the direction opposite to the motion of the larger frame (Duncker, 1929). In Brenner and van den Berg's (1994) study, ocular pursuit was performed in front of a textured background, and the perceived target velocity was influenced by the relative motion between the target and the background in the way predicted by an assumption of background stationarity. This shows that, when a background is present, the eye velocity estimate derives, at least partly, from visual information. We will discuss this point further below. 
Relative motion raises another issue in the study of perception during pursuit. Pursuit can usually be performed only in the presence of a moving target, but this target precisely provides a reference relative to which stimulus motion could be judged. Gogel (1974) and Johansson (1950) have shown that, if a visual reference is present, observers tend to base their motion perception on relative motion or on configurational changes. In addition, Wallach (1959) has shown that the visual system is more sensitive to relative motion between objects than to isolated object motion. Relative motion between the pursuit target—which is usually close to being stationary on the retina—and the stimulus would thus introduce a strong bias toward a more retinocentric perception of motion. An illustration of this bias, in the case of collinear motion, is found in the experiments of Stoper (1967, 1973), taken up by Mack and Herman (1978). Stoper's (1967) study concerned perceived position: subjects reported whether the second of two successively flashed points appeared to the right or to the left of the first while tracking a moving target. Both the temporal interval and the distance between flashes were varied. With short interstimulus intervals (306 ms), the point of subjective alignment was much closer to retinal than to spatial alignment (76% constancy loss, or κ = 0.24 in terms of the zeroth-order model). However, Mack and Herman (1978) showed that this constancy loss actually resulted from the influence of relative motion on motion perception. In their study, the target was switched off during the presentation of the moving stimulus while the eyes, already engaged in pursuit, continued their smooth movement. They showed that if the target was eliminated during the presentation of the stimulus, the constancy loss was only 26% (κ = 0.74) for a stimulus presentation of 200 ms. They attributed the high constancy loss found by Stoper to contamination by relative position or motion changes between the target and the stimulus.
As already mentioned above, it is commonly believed that the visual system undercompensates during pursuit. Two illusions illustrate this undercompensation: the Filehne illusion, in which a stationary background dot viewed during pursuit is perceived as moving slightly against the pursuit direction (Filehne, 1922; Mack & Herman, 1973), and the Aubert–Fleischl phenomenon, in which a moving dot is perceived as moving more slowly when pursued than when viewed during fixation (Aubert, 1887; Fleischl, 1882). Both phenomena can be explained by an underestimation of the eye velocity. For instance, in the case of the Filehne illusion, the image of the background moves on the retina with a speed equal to that of the eyes and in the opposite direction. If the estimated eye speed is less than the actual speed, only part of the retinal motion is compensated, and the remaining part leads to an illusory backward motion. Most studies of compensation, inspired by this phenomenon, used stimuli that were either stationary or moving collinearly with the eyes (Blohm et al., 2005; De Graaf & Wertheim, 1988; Mack & Herman, 1973; Mack & Herman, 1978; Morvan & Wexler, 2005; Stoper, 1967, 1973; Wallach, Becklen, & Nitzberg, 1985). Few studies have extended this question to stimuli moving noncollinearly to the pursuit target, and those that have done so have yielded conflicting results: either eye velocity was underestimated, as in the collinear case (Mack & Herman, 1973; Souman, Hooge, & Wertheim, 2005; Swanston & Wade, 1988), or there was no compensation at all (Becklen, Wallach, & Nitzberg, 1984; Festinger, Sedgwick, & Holtzman, 1976). The experimental designs differed considerably between studies, making it difficult to identify the crucial factors.
In the present study, we investigated the compensation mechanism with stimuli moving in many different directions with respect to the eye movement. We used a larger range of stimulus angles and speeds than have previously been employed. In our experiment, the eyes moved roughly horizontally, and the horizontal speeds of the stimulus ranged between −4.7 and +4.7 times the average eye speed and the vertical speed between −3.1 and +3.1 times the average eye speed. 
Since in the present study we were interested in the contribution of extraretinal signals to compensation, it was important to eliminate any relative motion signals, which, as mentioned above, create an important bias in motion perception. The first precaution was to perform the experiment in complete darkness to avoid any visual reference; in addition, the edges of the screen were rendered invisible by a filter that blocked stray monitor glow. The second was to avoid the influence of relative motion between the pursuit target and the stimulus by relying on residual smooth eye movements after the pursuit target was extinguished, as did Mack and Herman (1978). Indeed, it has been shown that after the disappearance of the pursuit target, the eyes continue to move in a smooth-pursuit-like fashion for several hundred milliseconds with decreasing speed (Barnes & Wells, 1999; Becker & Fuchs, 1985; Mitrani & Dimitrov, 1978). Our stimulus was presented during the first 100 ms after target extinction, when the eye speed was still sufficiently high. Moreover, since the duration of the pursuit-like period is shorter when the disappearance of the target can be predicted (Stork, Neggers, & Müsseler, 2002), the duration of pursuit before the presentation of the stimulus was variable and hard to predict.
Methods
Visual display and procedure
The experiment was performed in a very dark room. Stimuli were displayed on a Clinton Monoray monitor with fast-decaying, yellow–green DP104 phosphor. The monitor had 0.29-mm dot pitch at a resolution of 1280 × 960 pixels and a vertical refresh rate of 100 Hz, with the visible display size approximately 37 × 29 cm. The center of the monitor was located approximately 57 cm from the subject's eye and oriented frontally. The monitor was covered by two neutral gelatin filters (Lee Filters) in order to block all stray light. As a result, even after many minutes of dark adaptation, subjects could not perceive the borders of the monitor, nor anything else other than the stimulus and some diffuse light from the eye tracker (see below). The subject's head was placed in a chinrest in order to stabilize head position. 
All linear dimensions and speeds will be given in degrees and degrees per second, computed at the tangent point at the center of the monitor at a notional distance of 57.3 cm. Trials began with the appearance of the tracking dot—the target—a Gaussian blob with a width of 0.17°, on the left or right side of the screen (eccentricity 10°). Subjects were instructed to fixate the stationary target and to press a mouse button when ready. This initiated the pursuit target movement rightward or leftward (depending on whether it appeared on the left or right side of the screen, respectively), with its speed increasing from rest to 20°/s at a constant acceleration of 40°/s²; this acceleration phase lasted 0.5 s, during which the target advanced 5°. Upon reaching the speed of 20°/s, the target continued moving at this constant speed. When the target's position reached a randomly chosen value between 5 and 10° on the opposite side of the screen center from where it began (at this stage, it was always moving at the constant speed of 20°/s), the dot changed direction, became brighter, moved at constant velocity for 100 ms, and then disappeared. We refer to the dot in this second phase as the stimulus.
The horizontal and vertical components of the stimulus velocity were chosen from the sets {0, ±10, ±20, ±30, ±60} and {5, 10, 20, 40}°/s, respectively. Horizontal speeds are positive when in the same direction as the target, and vertical speeds are positive upward (note that the stimulus always moved with a positive upward component). This yielded stimulus speeds between 5 and about 72°/s, and directions between about 5° and 175°. 
Following the disappearance of the stimulus, the subject's task was to report the perceived direction of stimulus motion by orienting a line whose starting point corresponded to the beginning of the stimulus trajectory and whose endpoint was controlled by a computer mouse. For the subject's reference, the target's trajectory was displayed during the response phase as a dotted line. When satisfied with his or her answer, the subject pressed a mouse button. 
The experimental design was factorial, with 2 directions of target motion, 9 stimulus horizontal speeds, 4 vertical speeds, and 7 repetitions, for a total of 504 trials per subject, performed in one block in random order. 
Four subjects participated in the experiment: the two authors and two experienced psychophysical observers who were naive as to the purposes of the experiment. All subjects had normal or corrected-to-normal vision and no known neurological deficits. 
Control experiment
It has been shown that in some situations judgments of motion direction are biased (Post & Chaderjian, 1987). In order to calibrate subjects' responses for any such bias, we performed a control experiment in which the parameters were similar to those of the main experiment, but without smooth pursuit: instead of moving at 20°/s, the target remained stationary on the screen. The stimulus had the same range of velocities on the monitor as in the main experiment. During the response phase, subjects were asked to indicate both the direction of the stimulus motion, as in the main experiment, but also the length of its trajectory. The same subjects participated in the control experiment some time after the main experiment. 
Eye movement recording and analysis
Eye movements were monitored with an infrared video head-mounted eye tracker (EyeLink II, SR Research). The eye tracker was operated in 500-Hz pupil-only mode, with (partial) compensation for small head movements by filming stationary infrared markers. In order to cut down on the amount of stray light from these diodes (some of which is visible to the dark-adapted eye), we used specially supplied markers that emit farther into the infrared and are invisible even under dark adaptation. However, there was some stray light from the infrared diodes that illuminate the main cameras of the eye tracker. This could be perceived after several minutes of dark adaptation but was very dim and quite diffuse, offering no sharp points or edges that could serve as visual references.
The eye tracker was calibrated using a 3 × 3 grid, with the tracker subsequently recording from the most accurate eye. A calibration was performed at the start of each block and every 50 trials thereafter. Data were recorded for offline analysis with temporal markers in order to synchronize them with the phases of each trial.
To compute the eye velocity during the 100-ms stimulus phase, the eye position was fitted as a quadratic function of time. The eye speed was defined as the first derivative of the fitted function halfway through the stimulus phase. Two selection conditions were applied offline to the trials, one concerning saccades and blinks and the other concerning pursuit velocity. Trials were discarded if a saccade occurred from 100 ms before to 100 ms after the presentation of the stimulus, or a blink within 25 ms of the stimulus. We used the standard EyeLink saccade filter, which defined saccades as having an eye speed over 30°/s and an acceleration over 9500°/s². Trials were also discarded if the horizontal speed during the stimulus phase was below 5°/s or over 35°/s (in practice, the upper bound was never encountered) or if the vertical eye speed was over 5°/s. Approximately 16% of trials from the main experiment were discarded due to these conditions.
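The velocity estimate and the velocity-based trial selection just described could be implemented as follows (a sketch under our assumptions; the saccade and blink detection performed by the EyeLink software is not reimplemented here):

```python
import numpy as np

def eye_speed_mid_stimulus(t, x):
    """Fit one eye-position component as a quadratic in time and return the
    first derivative of the fit halfway through the 100-ms stimulus phase,
    as described in the text. t: sample times (s); x: positions (deg)."""
    c2, c1, c0 = np.polyfit(t, x, 2)  # x(t) ~ c2*t**2 + c1*t + c0
    t_mid = 0.5 * (t[0] + t[-1])      # midpoint of the stimulus window
    return 2.0 * c2 * t_mid + c1      # dx/dt at t_mid (deg/s)

def keep_trial(h_speed, v_speed):
    """Velocity criteria from the text: horizontal eye speed must lie
    between 5 and 35 deg/s, vertical eye speed below 5 deg/s."""
    return 5.0 <= abs(h_speed) <= 35.0 and abs(v_speed) <= 5.0
```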
In the control experiment, two selection conditions were applied offline to the trials, one concerning saccades and the other concerning blinks. The saccade and blink criteria were the same as in the main experiment. Approximately 10% of trials were discarded in the control experiment due to these conditions. 
Results
Eye movements
In the main experiment, we relied on the subjects' ability to continue their smooth eye movements even after the target changed direction. Having eliminated trials with saccades or blinks, or ones in which the smooth eye movement deviated too far from the velocity of the prior pursuit target, we were left with 84% of the trials. The mean and standard deviation of the horizontal component of the eye movement are shown in Figure 2. The mean speed was 12.7°/s, somewhat lower than the target speed of 20°/s, corresponding to a gain of about 0.64 with respect to that speed. It should be kept in mind, however, that the gain in ordinary smooth pursuit—where the target is visible—is also usually below 1 (for instance, the pursuit gain during the final 100 ms before target disappearance was 0.77); when the target disappears or changes direction and this modification is predictable, the eyes normally decelerate. In our case, the disappearance of the target was not exactly predictable, but the subjects knew it would occur around the end of the trial. The distribution of mean eye velocities is shown in Figure S2 in the auxiliary files. As can be seen in that figure, trajectories deviated somewhat upward, with a mean angle of about 6°. This was probably not so much an immediate reaction to the upward direction of the stimulus (which lasted only 100 ms) as a consequence of the fact that all stimuli had a predictable upward component. There was no detectable effect of the speed of the stimulus on that of the eye (R = 0.006, n.s.). The corresponding distribution of retinal velocities is shown in Figure S2B.
Figure 2
Average eye position projected onto the direction of pursuit target motion (in degrees, as a function of time in seconds), during the 100 ms preceding the stimulus (when the pursuit target was present) and during the subsequent 100 ms of the stimulus phase (when the pursuit target was absent). The black line shows the mean trajectory (over all trials in all four subjects), the gray bars show the standard deviation, and the red line shows the trajectory of the pursuit target when it was present (solid) and when it was absent (dashed). Trials in which pursuit was to the left were flipped before averaging. Trajectories were aligned in space and time so that the origin corresponds to stimulus onset (or pursuit target disappearance).
Control experiment
We start by presenting the results and analysis of the control data, since these will be used to calibrate the results of the main experiment. For each target velocity v, we had a response vector R, obtained by dividing the reported trajectory vector by the duration of the stimulus, 100 ms. We assumed that there was no left–right asymmetry, and so, in order to increase the statistical power of our data, we "folded" responses along the vertical axis: when v_x was negative, we performed the transformations v → (−v_x, v_y) and R → (−R_x, R_y). These data are shown in Figure S3. The results show a distortion between the physical velocities and the velocities reported by the subjects. This could be the result of a perceptual or response bias such as a range effect (Poulton, 1973) or spatial anisotropy (Post & Chaderjian, 1987).
In our analysis of the main results, we will use these data to calibrate subjects' responses. Since we will need data not just at the discrete values of v used in the control experiment but at other points in velocity space, we will need an interpolating function, R(v) ≈ (B_x(v_x, v_y), B_y(v_x, v_y)). We fitted B_x and B_y as third-order polynomials in v_x and v_y, with the plausible constraints that horizontal and vertical motions are perceived as perfectly horizontal and vertical, which removed several terms from the polynomials. The results of these fits, for the data of all four subjects taken together, are shown in Figure S3 alongside the averaged data. For all data taken together, the goodness-of-fit measures for the two components are R_x² = 0.92 and R_y² = 0.91. We could have obtained a better fit by going to higher order, but we did not want to overfit the data. When analyzing individual subjects (see below), we also performed these fits on the individual subjects' data.
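One plausible reading of the constrained fit is sketched below (our illustration: the exact polynomial basis is our assumption, chosen so that B_x vanishes for purely vertical motion, and symmetrically for B_y):

```python
import numpy as np

def design_bx(vx, vy):
    """Third-order terms for B_x that all contain a factor of v_x, so that
    purely vertical motion (v_x = 0) is predicted as perfectly vertical."""
    vx, vy = np.asarray(vx, float), np.asarray(vy, float)
    return np.column_stack([vx, vx**2, vx**3, vx*vy, vx*vy**2, vx**2*vy])

# B_y uses the mirror-image basis (every term containing a factor of v_y).
# Given control-trial velocities (vx, vy) and reported components Rx,
# ordinary least squares yields the interpolating polynomial:
#   coef_x, *_ = np.linalg.lstsq(design_bx(vx, vy), Rx, rcond=None)
#   Bx = lambda v: float(design_bx([v[0]], [v[1]]) @ coef_x)
```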
Main experiment
The direction of eye movement (left or right) was found to have no significant effect on the results. We therefore pooled the data for the two directions, transforming velocities so that a positive horizontal component implies motion in the same direction as the initial pursuit target, and therefore in roughly the same direction as the eye movement. 
To illustrate the main effect before diving into our somewhat complex analysis, Figure 3 shows raw data from four representative conditions. (The complete data for all stimulus velocities can be found in Figure S4.) As can be seen, there is a striking, systematic difference between trials with forward motion—motion on the retina with a component in the same direction as the eye movement—and those with backward motion. The general tendency is as follows: when the stimulus moves opposite to the eyes (left part of Figure 3), retinal motion is strongly compensated for eye movement—so that the perceived direction is the direction of motion on the screen—or even overcompensated. On the other hand, when the stimulus moves in the same direction as the eyes (right part of Figure 3), the response is much closer to the retinal direction. In terms of the classical zeroth-order model, this means that eye movement is estimated correctly or overestimated when the stimulus moves opposite to the eyes: κ, the extraretinal gain, would be close to or greater than 1. On the other hand, this gain decreases toward 0 as the stimulus moves faster and faster in the direction of eye movement. The spatial and retinal angles of motion are plotted in Figure 4 (and given for each stimulus velocity in Figure S5; note that stimuli with the same spatial direction but different speeds have different directions on the retina, as explained in Figure S1).
Figure 3
Examples of perceived motion direction in several conditions (V_y = 40 and V_x = −60, −20, 20, and 60 from left to right). The green arrows indicate mean eye velocity, the red arrows the motion of the target in space, the blue arrows the mean motion of the target on the retina, and the black lines the mean direction of perceived target motion. The black segments in the bottom left corner indicate the scale. Data are averaged over all four subjects, and trials with leftward eye movement are reflected about a vertical axis before averaging; each diagram represents data from about 50 trials, with standard errors on the responses substantially smaller than the differences between the directions of motion on the screen and on the retina.
Figure 4
Response angles from the main and control experiments, and ideal data. All angles are given with respect to the direction of the pursuit target. The responses obtained in the main experiment (left graph) are plotted against retinal angle. The red diagonal represents retinal, uncompensated responses (κ = 0). The middle graph shows ideal responses for κ = 0 (in blue) and κ = 1 (in green) plotted against the retinal angle. The right graph shows control data as a function of retinal and spatial angles, as those are confounded in the control case.
The raw response angles from all trials are presented in Figure 4. The left scatter plot shows response angle as a function of angle on the retina. Angle zero corresponds to the pursuit target direction, so angles 0–90° correspond to motions with a positive forward component on the retina with respect to pursuit, while angles 90–180° correspond to backward motions. With no compensation (κ = 0), the response would simply equal the retinal angle, shown as the red diagonal. The middle graph plots the screen angle as a function of retinal angle for each trial: thus, this graph shows what responses in the left graph would look like in the case of perfect compensation (κ = 1). Finally, the rightmost graph shows data obtained in the control (fixation) condition. The asymmetry of compensation between forward and backward moving stimuli is quite visible here. For motion directions close to 0° (forward motion), responses lie on the diagonal and are thus totally uncompensated. On the other hand, as the angle increases past 90° and approaches 180° (backward motion), the responses systematically sink below the diagonal; comparing with the middle graph, we see that they approach the compensated, or spatial, directions. Although part of this might be due to an oblique-like response bias, which is quantified in the rightmost graph of Figure 4, the magnitude of the effect seems to be beyond anything predicted by a simple response bias. In the following analyses, we will attempt to quantify these impressions.
Could these results be due to something other than an effect of retinal motion on compensation—such as response bias? Looking at the control data in Figure 4, we do see some evidence for a small horizontal repulsion effect: as stimulus angles approach 180°, for example, response angles approach 180° more slowly than the stimulus angles do. However, this small effect is unlikely to account for the much larger deviations from the diagonal in the experimental data as the retinal angle approaches 180°. These visual intuitions will be quantified in our analysis, which takes into account response bias as measured in the control experiment.
To quantify the effect of retinal motion on compensation, we proceeded in two steps. We first estimated the zeroth-order model gain κ independently for each stimulus velocity v, using a least-squares method and calibrating with the fitted control data. In detail, for each trial i corresponding to a given target velocity v in space, we calculated a cost function for a given value of κ by first computing the unbiased prediction of the zeroth-order model, namely r_i + κe_i, where the retinal velocity r_i = v − e_i and e_i is the measured eye velocity on that trial. We then applied the bias polynomial computed from the corresponding control data to obtain the biased response, B(r_i + κe_i), and calculated the direction of this predicted response. Finally, we calculated the sum of squared differences between the predicted and actual response (θ_i) directions, and estimated κ by minimizing this sum of squares:
κ = argmin_κ Σ_i [θ_i − arctan B(r_i + κe_i)]²
(3)
with the difference taken in the angular sense. We also calculated the mean retinal velocity in each condition, r̄(v), in order to express κ as a function of r̄; the variability in r is due to the variable eye velocities on different trials.
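In code, the per-velocity fit of Equation 3 reduces to a one-dimensional minimization (a sketch; the `bias` function stands for the control-data polynomial B, and the search bounds are our choice):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_kappa(r, e, theta, bias):
    """Equation 3: the kappa minimizing the summed squared angular difference
    between predicted and reported directions. r, e: (n, 2) arrays of retinal
    and eye velocities; theta: reported directions (rad); bias: the fitted
    control-data mapping from velocity to biased response velocity."""
    def cost(kappa):
        pred = np.array([bias(ri + kappa * ei) for ri, ei in zip(r, e)])
        ang = np.arctan2(pred[:, 1], pred[:, 0])
        d = np.angle(np.exp(1j * (theta - ang)))  # difference in the angular sense
        return np.sum(d ** 2)
    return minimize_scalar(cost, bounds=(-1.0, 3.0), method="bounded").x
```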
The resulting values of κ as a function of retinal velocity are shown in Figure 5. The above calculations were performed separately for each subject (including the fitted calibration by control data) and for all subjects pooled together (R² = 0.970). The data plotted in this figure are tabulated in Figure S5, which also shows the predictions of the model, allowing the reader to compare the fit to the data.
Figure 5
The values of the zeroth-order model gain, κ, as a function of the x-component of the retinal velocity of the stimulus, and the corresponding second-order fit. Each set of points plots data for a different y-component of the spatial velocity of the stimulus, and each curve plots the result of the second-order fit. The top graph shows results for all subjects pooled together; the four bottom graphs show results for each individual subject. The error bars represent standard errors, calculated by bootstrap: we generated 500 random, independent bootstrap resamples of the data by randomly selecting, with replacement, the same number of trials as in the original data set; for each resample we calculated κ (Equation 3), and the standard deviation of the bootstrap κ provides an estimate of the standard error of the mean κ. The fits were performed using least squares, with Equation 4 for the second-order fit.
The zeroth-order model predicts that κ is uniform, i.e., that the fitted curves should be flat. This is clearly not the case: in all subjects, κ has a strong dependence on r_x, decreasing nonlinearly as r_x increases, then flattening out, and then perhaps rising again. This is a more quantitative measure of the difference between forward and backward motions than was discussed above and shown in Figure 3. In some subjects, there also seems to be an effect of r_y. Standard errors for the κ estimates are shown in Figure S6.
In order to statistically quantify the deviation from the zeroth-order model—the dependence of κ on r—we fitted κ(r) as a second-order polynomial:
κ(r_x, r_y) ≈ a_0 + a_1,0 r_x + a_0,1 r_y + a_2,0 r_x² + a_1,1 r_x r_y + a_0,2 r_y²
(4)
using least squares. (The two-step procedure (Equations 3 and 4) may seem needlessly complicated: why not perform the fit in one step? The reason is the inherent nonlinearity of fitting the zeroth-order model to direction data (Equation 3). Nonlinear fits with multiple parameters (the a_i,j) on even moderately noisy data are rather unstable and best avoided. For robustness, we therefore confined the nonlinear fit to a single parameter, κ, for each value of v, and then performed a linear fit (Equation 4) over the multiple parameters.)
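The second, linear step is then ordinary least squares on the per-condition κ values (a sketch consistent with Equation 4):

```python
import numpy as np

def fit_second_order(rx, ry, kappa):
    """Equation 4: least-squares fit of the coefficients of the second-order
    polynomial kappa(r_x, r_y). rx, ry: mean retinal velocity components per
    condition; kappa: the per-condition estimates from Equation 3."""
    X = np.column_stack([np.ones_like(rx), rx, ry, rx**2, rx*ry, ry**2])
    coef, *_ = np.linalg.lstsq(X, kappa, rcond=None)
    return coef  # [a_0, a_10, a_01, a_20, a_11, a_02]
```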
We then calculated confidence intervals on the a_i,j by bootstrap (Efron & Tibshirani, 1994), in order to determine whether they are statistically different from zero. To perform this analysis, we generated random, independent bootstrap resamples of the data, each with the same number of trials as the actual data, by selecting trials randomly with replacement. For each bootstrap resample, we calculated κ(r) (Equation 3) and then performed the fit to calculate the a_i,j (Equation 4). The standard deviations of the bootstrap values of a_i,j provide estimates of the standard errors of the values calculated on the actual data; for further details of the bootstrap technique, see Efron and Tibshirani (1994).
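The resampling loop itself is straightforward (a sketch, where `refit` is a hypothetical stand-in for the full two-step procedure of Equations 3 and 4 applied to one resampled data set):

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen arbitrarily

def bootstrap_se(trials, refit, n_boot=1000):
    """Bootstrap standard errors for the a_ij: resample trials with
    replacement, redo both fitting steps, and take the standard deviation
    of each coefficient across resamples (Efron & Tibshirani, 1994)."""
    coefs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(trials), size=len(trials))
        coefs.append(refit([trials[i] for i in idx]))
    return np.std(np.asarray(coefs), axis=0)
```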
The results of the second-order model are given in Table 1 and plotted in Figure 5. In addition to the constant term, all subjects have a significant effect of r_x on κ, either through r_x² alone (note the small between-subject variation in this coefficient) or through both r_x² and r_x. In addition, one of the authors has a significant r_x r_y interaction. Therefore, in all subjects we can exclude the zeroth-order model of compensation for eye movements in motion perception, which predicts uniform κ and in which only the constant term would differ from zero. In terms of goodness of fit (presented in the second column of Table 1), the R² value indicates that the second-order fit accounts for 97% of the variance in the data. All four subjects show a significant dependence of κ on r_x, the component of retinal velocity roughly parallel to the eye movement. The significantly positive r_x² coefficient (in all subjects) shows that compensation is greater for faster motion in the direction of eye movement. In the pooled data, as well as in two individual subjects, we find a significantly negative r_x coefficient, which indicates an asymmetry in compensation: backward stimulus motion (with respect to eye movement) is compensated more than forward motion.
Table 1
Dependence of κ on the retinal velocity: the fitted values of the polynomial coefficients (see Equation 4), as well as R² values for the polynomial fit. The colors code the results of the bootstrap test (with 1000 bootstrap resamples in each case) for significant difference from zero: pink means p < 0.05, red means p < 0.01. Nonzero coefficients, other than the constant, indicate deviation from the zeroth-order model.
Subj. | R²   | Const. | r_x     | r_y     | r_x²    | r_x r_y   | r_y²
CB    | 0.92 | 0.28   | −0.018  | 0.016   | 0.00028 | 0.00022   | −0.000025
CM    | 0.78 | 0.48   | 0.0014  | −0.0093 | 0.00031 | −0.000018 | −0.000017
MV    | 0.76 | 0.75   | −0.0045 | −0.029  | 0.00036 | −0.00013  | 0.00046
MW    | 0.93 | 0.33   | −0.012  | −0.010  | 0.00025 | 0.00022   | 0.00014
All   | 0.97 | 0.40   | −0.0087 | −0.0045 | 0.00033 | 0.000067  | 0.000022
In order to exclude any spurious effects of our calibration through the control data, we repeated the above analysis without any calibration. We found that all subjects still had a significant dependence of κ on r_x.
We also fitted the dependence of κ on r with a first-order polynomial. The fitting procedure was the same as for the second-order model, the only difference being the absence of the quadratic terms. The R² of the first-order model is well below that of the second-order model: 0.564 compared to 0.967. In the first-order model, too, there is a significant effect of r_x on κ (coefficient: −0.0157, significantly negative by bootstrap test, p < 0.01), as well as a significant effect of r_y (coefficient: −0.0043, significantly negative by bootstrap test, p < 0.01) that is lost when fitting the second-order model. The first-order model thus accounts for 57% of the variance in κ (variance that the zeroth-order model, with its uniform κ, cannot capture by definition), and the second-order model for 97%. Together, these two models show that κ depends significantly on r, and therefore that the zeroth-order model is not sufficient to describe compensation.
We also performed a fit of the zeroth-order model on all trials, all categories confounded; in this case we find κ = 0.33 (R² = 0.942). The R² is very high because most of the variance is actually captured by the retinal velocity term in the model. Indeed, the perceived direction correlates highly with the retinal direction, probably because of the very brief presentation of the stimulus, since presentation duration has been shown to affect compensation (Souman et al., 2005). The better goodness of fit of the second-order model is shown first by a slightly higher R² (0.970), and second by a smaller angular error (the difference between the response angle predicted by the model and the actual response angle): 9° for the single fit to all trials, as opposed to 5.9° for the per-category fit.
Because the velocity of the stimulus directly influences its retinal eccentricity, it is also important to estimate the eccentricity of the stimulus in the course of a trial. The eyes and the target were not aligned when the dot changed direction, because the pursuit gain was less than 1: the mean retinal error was 1.13° (SD 1.04°) in the horizontal and −1.77° (SD 1.6°) in the vertical dimension. During the stimulus phase, the eyes moved on average by 1.27°, which means that, by the end of the stimulus, the eyes had caught up with their initial lag. Therefore, at the end of the stimulus phase, the eccentricities of stimuli with the same speeds were similar on average: the correlation between the horizontal eccentricity (he) and the horizontal retinal speed (hrs) is r = 0.96 (r² = 0.91), and the equation of the fitted line is he = 1.01 + 0.10 hrs, where the intercept results from the initial retinal error and the slope roughly equals the duration of the stimulus, because we regress positions on speeds. Since eccentricity thus depends linearly on retinal velocity, if subjects were judging motion from the final retinal position of the stimulus, there would be no difference between backward and forward moving stimuli; eccentricity is therefore unlikely to account for our effect.
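The regression reported above amounts to the following check (a sketch with hypothetical array inputs):

```python
import numpy as np

def eccentricity_regression(hrs, he):
    """Regress final horizontal eccentricity he (deg) on horizontal retinal
    speed hrs (deg/s). The text reports he = 1.01 + 0.10*hrs with r = 0.96:
    the slope is roughly the 0.1-s stimulus duration, and the intercept is
    the initial retinal error."""
    slope, intercept = np.polyfit(hrs, he, 1)
    r = np.corrcoef(hrs, he)[0, 1]
    return intercept, slope, r
```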
Discussion
Our results show that stimulus motion on the retina has an influence on compensation during smooth eye movement, and therefore the classical model of compensation, which predicts a constant compensation gain, is insufficient to describe motion perception during smooth eye movement. 
Although subjects may have misestimated the extent of target motion due to Fröhlich-like effects, our measure was immune to such effects as it only concerned the direction of motion. 
For a wide range of stimulus angles and speeds, we computed the compensation gain κ, defined through Equation 2 as the ratio of the estimated eye speed to the real eye speed. In the range of stimulus speeds that has usually been used (around the same speed as that of the eyes), we found values of κ less than 1, showing an underestimation of eye motion, in rough agreement with previous studies (Becklen et al., 1984; De Graaf & Wertheim, 1988; Festinger et al., 1976; Mack & Herman, 1973; Mack & Herman, 1978; Morvan & Wexler, 2005; Souman et al., 2005; Stoper, 1967, 1973; Swanston & Wade, 1988; Wallach et al., 1985). On the other hand, the compensation gain found for atypical stimulus velocities was far outside the range of κ classically reported. This was highly unexpected, given that the compensation gain is commonly assumed to be uniform. On the contrary, our results reveal the varying nature of κ: we found, in every subject, a significant effect of the retinal velocity along the pursuit axis (r_x) on κ, either as a first-order or a second-order polynomial function of r_x.
The linear dependence, with a negative coefficient, implies an asymmetry: backward stimulus motion (with respect to eye movement) is compensated more than forward motion. The quadratic dependence, with a positive coefficient, implies that faster stimulus motion (in the direction of eye movement) is compensated more than slower motion. In any case, we can exclude a uniform compensation gain as a function of stimulus velocity. 
We have thus shown that in the perception of stimulus motion, eye movement is not estimated from the extraretinal signal alone—otherwise there would be no effect of r_x on κ. The other source of information about eye movement is stimulus motion itself. On the assumption of background stationarity, uniform motion on the retina is compatible with equal-and-opposite motion of the eye. Thus, if stimulus velocity on the retina is r, and since there is no other background, according to this retinal information the eye velocity ought to be e_est = −r. However, this retinal information cannot be used in isolation. If it were, subjects would never be able to perceive motion during the stimulus phase, in which only one object appeared on the retina: any retinal slip would be canceled by the equal-and-opposite estimated eye velocity, and perceived stimulus motion would always be null. Our subjects, however, did perceive motion in the stimulus phase. Therefore, other information must have been involved in the estimation of eye velocity.
Another source of information about stimulus velocity might have been relative motion, between the stimulus and either the pursuit target or the background. If this were used, the visual system might bypass estimating eye velocity altogether, and (mis)take relative velocity for absolute velocity, as in the Duncker illusion. However, relative motion can be ruled out in our experiment, as there was no pursuit target during the stimulus phase and no visible background that could be used as a reference. The only other information available for estimating eye movement is the extraretinal signal.
Since we have shown that both retinal and extraretinal signals must contribute to the estimation of eye movement, we now turn to the question of how they may be combined. One simple way is linear combination. Suppose the brain has an extraretinal signal e and a retinal signal −r, and from these computes an estimated eye velocity e_est by
e_est = (1 − μ)e − μr,
(5)
where the weight μ, which ranges between 0 and 1, might be determined in a Bayesian optimal manner as suggested by Landy, Maloney, Johnston, & Young (1995). Although this type of combination is perfectly plausible, unfortunately we can learn nothing further about it from our data, since we only have direction-of-motion responses. To understand why, consider what happens when we use Equation 5 to estimate eye velocity: 
p = (1 − μ)r + (1 − μ)e.
(6)
Since in our data we only have the direction of perceived motion, θ_p = arctan(p_y/p_x), which is independent of μ, we are unable to calculate μ from our results.
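Spelling out the substitution behind Equation 6: inserting the estimate of Equation 5 into the compensation rule of Equation 2 (with e_est in place of κe) gives

```latex
\mathbf{p} = \mathbf{r} + \mathbf{e}_{\mathrm{est}}
           = \mathbf{r} + (1-\mu)\,\mathbf{e} - \mu\,\mathbf{r}
           = (1-\mu)\left(\mathbf{r} + \mathbf{e}\right),
```

so for any μ < 1, p is a positive scalar multiple of r + e, and its direction carries no information about μ.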
Another way that retinal and extraretinal signals may be combined is the following. In the context of saccades, there is more noise in the oculomotor system in the direction parallel to eye movements than in the perpendicular direction (van Opstal & van Gisbergen, 1989), and this difference has been implicated in models of spatial constancy phenomena (Niemeier, Crawford, & Tweed, 2007). In the context of smooth eye movement, Festinger et al. (1976) presented data that they interpreted as indicating that the perceptual system knows next to nothing from extraretinal signals about the speed of eye movements but does know their approximate direction. Let us therefore start with a rather bold hypothesis: in estimating eye velocity, extraretinal information is used for direction and retinal information for amplitude. Given the uncertainty in the amplitude of an eye movement corresponding to a given motor command, this is probably a fairly optimal strategy (the uncertainty in direction probably being lower than in amplitude; Krukowski, Pirog, Beutter, Brooks, & Stone, 2003; Schwartz & Lisberger, 1994). Formally, our model computes eye velocity as follows:
M1: e_est = |r| ê,
(7)
where ê is a unit vector in the direction of eye movement. To examine the consequences of model M1's rule for combining retinal and extraretinal signals, we calculated the corresponding response direction from Equations 7 and 2 for each of our actual trials and repeated the nonlinear fit of Equation 3 to calculate κ as a function of stimulus retinal velocity r. The results, when we plug κ = 0.4 into Equation 2 in order to calculate perceived velocity on each trial (a value close to 0.33, the one obtained when fitting all data to the zeroth-order model), are shown in Figure 7. Model M1 gives a decent fit to the actual κ surfaces that we calculated and that are shown in Figure 6. The one feature of model M1's prediction that does not match the data is a rise in κ that is too fast as r_x becomes large (Figure 7).
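Model M1's eye velocity estimate, and the resulting perceived direction, can be sketched as follows (our illustration; κ = 0.4 as in the text):

```python
import numpy as np

def e_est_m1(r, e):
    """Model M1 (Equation 7): direction of the estimate from the extraretinal
    signal e, speed from the retinal signal r."""
    return np.linalg.norm(r) * e / np.linalg.norm(e)

def predicted_direction(r, e_est, kappa=0.4):
    """Perceived direction (deg) from Equation 2, with the model's estimate
    e_est in place of the eye velocity."""
    p = r + kappa * e_est
    return np.degrees(np.arctan2(p[1], p[0]))
```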
Figure 6
The values of the linear model gain, κ, as a function of the x- and y-components of the retinal velocity of the stimulus (r_x, r_y). Each surface plots the κ values fitted for each combination of (r_x, r_y). Inset curves represent averages over values of r_y, showing κ as a function of r_x. Results for all subjects pooled together and for separate subjects.
Figure 7
Compensation gain κ as a function of retinal velocity r: prediction of model M1 (Equation 7) applied to our actual conditions. Compare to Figure 6.
The problem with model M1 is that when the extraretinal and retinal estimates of eye velocity—e and −r, respectively—point in opposite directions, it predicts a somewhat nonsensical result. For instance, whether v = 0.5e or v = 1.5e (where v is the velocity of the stimulus in space), it predicts the same result, namely e_est = 0.5e. We can therefore try a more refined but still extremely simple model:
M2: e_est = |r| ê if e · r < 0;  e_est = 0 if e · r > 0.
(8)
Model M2 assumes the same estimated eye velocity as the previous model when the extraretinal and retinal signals agree in direction; when they are in opposite directions, it simply assumes that the eyes are still. The results predicted by model M2 applied to our conditions are shown in Figure 8. In contrast to M1's predictions, here κ is too flat for r_x > 0, which suggests that our second assumption goes in the right direction but is a bit too drastic. These toy models are not meant to be the final word in the explanation of our effects: a more systematic treatment, probably in the Bayesian framework, is in order. Nevertheless, models M1 and M2 are encouraging because of their simplicity, physiological and computational plausibility, and decent fit to our data with only one parameter. Although the second-order model presented above provides a reasonable fit to the data, other models—for instance, a piecewise first-order model—may do as well or better. Our point is that the zeroth-order model is not sufficient to account for the nonuniform nature of compensation, which depends on retinal velocity.
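Model M2 changes only the eye velocity estimate; it plugs into the same `predicted_direction` sketch given above for M1 (again our illustration):

```python
import numpy as np

def e_est_m2(r, e):
    """Model M2 (Equation 8): same as M1 when the retinal estimate (-r) and
    the extraretinal signal agree in direction (e . r < 0); when they
    conflict (e . r > 0), assume the eyes are still."""
    e = np.asarray(e, float)
    if np.dot(e, r) < 0:
        return np.linalg.norm(r) * e / np.linalg.norm(e)
    return np.zeros_like(e)
```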
Figure 8
Compensation gain κ as a function of retinal velocity r: prediction of model M2 (Equation 8) applied to our actual conditions. Compare to Figure 6.
In estimating eye velocity, extraretinal and retinal signals each have advantages and disadvantages. An important advantage of the extraretinal signal, which is presumably based on an efference copy of the eye movement command, is that it is available earlier than the visual reafference required for the retinal signal. A disadvantage of the extraretinal signal is that it is an imprecise way of estimating eye velocity, given that the same motor command can result in different eye movements depending on the state of the oculomotor periphery. The main advantage of the retinal signal is its robustness: there is almost always a large, stationary background available. The trouble arises when there is none, when the visual scene contains many objects moving in different directions, or when the observer is performing a complex movement. Another problem with the retinal signal is that the gain of background motion perception may vary with its visual properties (Freeman & Banks, 1998). Combining the extraretinal and retinal signals, as we have demonstrated here, may yield a more robust eye movement estimate—and therefore a more accurate perception of object motion.
A few previous studies have come to conclusions similar to ours, either showing an influence of retinal motion on compensation or proposing that the extraretinal signal provides information on the direction of pursuit. The latter was initially proposed by Festinger et al. (1976), who found little compensation for smooth pursuit and concluded that the extraretinal information available to the visual system is rather general, containing only information about direction and the start of the movement. Later, Krukowski et al. (2003) provided experimental support for the idea that pursuit direction is indeed available: they compared the accuracy of motion-direction judgments during fixation and during pursuit and found that direction discrimination thresholds are similar in the two cases, for both short (200 ms) and longer (800 ms) stimulus presentations.
Several authors have investigated the influence of visual signals on compensation. In particular, two studies (Brenner & van den Berg, 1994; Turano & Heidenreich, 1999) have shown that compensation during pursuit can be affected by a moving background. Both studies found that, with respect to the compensation obtained for a stationary background, compensation is higher when pursuit is performed in front of a textured background that moves backward and smaller when the background moves forward. More precisely, they found that motion perception was influenced by background motion as long as this motion was compatible with the real direction of pursuit; when the conflict between the background and eye directions becomes too strong, the influence of background motion vanishes. This effect is similar to the floor effect we observed when the stimulus motion increases in the pursuit direction and, as we proposed, may show the dominance of the extraretinal signal over the retinal one in encoding motion direction. There is a clear similarity between our results and those of Brenner and van den Berg (1994) and Turano and Heidenreich (1999); however, there was an important difference between our study and theirs: in the older studies, the tracking target was always visible. As mentioned above, the tracking target introduces a bias: the background motion may be judged relative to the tracking target movement. Furthermore, the relative motion between the background and the target can be used to estimate the eye velocity. Hence, in those studies, the effect of the relative motion between the target and the background is confounded with the effect of the retinal slip of the background. The fact that we obtained similar results despite this important difference may indicate that the relevant influence in their experiments was the retinal slip and not only the relative motion. However, two further differences between the older studies and ours make a precise comparison difficult: a background was present in those experiments, whereas in ours there was only the stimulus, which could itself have been taken for a background; and in the older studies the stimulus had a longer duration.
As mentioned above, experiments on motion perception during pursuit have led to conflicting results. Since these studies had rather heterogeneous designs, identifying the crucial factors behind the discrepancies is difficult (Becklen et al., 1984; Blohm et al., 2005; De Graaf & Wertheim, 1988; Festinger et al., 1976; Mack & Herman, 1973; Mack & Herman, 1978; Morvan & Wexler, 2005; Souman et al., 2005; Stoper, 1967, 1973; Swanston & Wade, 1988; Wallach et al., 1985). Stimulus velocities (and the ratio between stimulus and eye speeds) vary from one study to the next, so part of the discrepancies in the results can be explained by different retinal signals. For instance, Souman et al. (2005), whose pursuit target moved at 10°/s and whose stimuli moved at 3°/s or 8°/s, found different values of the compensation gain κ for the two stimulus speeds, with higher κ for the slower stimuli, but nevertheless concluded that the zeroth-order model with uniform compensation gain adequately describes motion perception during pursuit. The difference between the two stimulus speeds may well be due to different retinal signals. 
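In operational terms, the zeroth-order model reads as follows. This is a minimal sketch in Python (our own illustration, with hypothetical numerical values), based on the constant-gain rule discussed throughout: perceived velocity is the retinal velocity plus the eye velocity scaled by a uniform gain κ.

```python
import numpy as np

def perceived_velocity(r, e, kappa):
    """Zeroth-order compensation: perceived velocity is the retinal
    velocity plus the eye velocity scaled by a constant gain kappa.
    kappa = 1 recovers the true spatial velocity (r + e = v, Equation 1);
    kappa = 0 leaves the percept purely retinal (uncompensated)."""
    return np.asarray(r, float) + kappa * np.asarray(e, float)

# Illustration: rightward pursuit at 10 deg/s over a stimulus that is
# stationary in space, so its retinal velocity is r = v - e = (-10, 0) deg/s.
e = np.array([10.0, 0.0])
r = np.array([-10.0, 0.0])
print(perceived_velocity(r, e, 1.0))  # [0. 0.]  perceived as stationary
print(perceived_velocity(r, e, 0.6))  # [-4. 0.] partial compensation:
                                      # illusory backward motion
```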
In our opinion, the main strength of the present study is in its simplicity. Our stimulus consists of one visible object, whose motion is decoupled from the concurrent smooth motion of the eyes. This design shields our results from any spurious influence of relative motion, either between the stimulus and the background, or between the stimulus and the pursuit target. Another strength of our method is that we study a wide range of motions on the retina, which pays off in demonstrating a clear pattern in the dependence of the compensation gain on retinal velocity. 
To conclude, our results indicate that in the perception of visual motion, compensation for smooth eye movement is higher for stimuli that move backward rather than forward with respect to the eye movement, and that for backward-moving stimuli compensation grows with stimulus speed. We interpret this effect as a sign that the estimation of eye velocity is based on a combination of extraretinal and retinal signals, the latter resting on a hypothesis of object stationarity. We propose that the extraretinal and retinal signals are combined in a particular way: the final estimate takes its direction from the extraretinal signal and its amplitude from the retinal signal. We have shown that two very simple models based on this idea reproduce our results quite well with just one parameter. It would be desirable to expand these into full-blown models. The Bayesian framework would be ideal for this, since the problem involves the combination of multi-dimensional signals that are more or less reliable in different dimensions (Ernst & Bülthoff, 2004; Landy et al., 1995). Such a model could then be tested by strengthening one of the signals (for example, the retinal signal could probably be strengthened by making the stimulus larger, and therefore a more likely "background") or by weakening one of them: to weaken the retinal signal, one might add visual noise; to weaken the extraretinal signal, one might adapt the subject with smooth pursuit accompanied by backgrounds moving in random directions (Haarmeier & Thier, 1996). 
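Models M1 and M2 themselves (Equations 7 and 8) appear earlier in the article and are not reproduced in this excerpt. The parameter-free sketch below (function names and values ours) illustrates only their shared structure, direction from the extraretinal signal and speed from the retinal signal, without the models' single fitted parameter.

```python
import numpy as np

def estimated_eye_velocity(r, e_extraretinal):
    """Combination rule proposed in the text: the eye-velocity estimate
    takes its direction from the extraretinal signal and its speed from
    the retinal signal, the latter under the stationarity hypothesis
    (a lone stimulus is treated as a fixed background, so its retinal
    speed |r| is read as the eye speed)."""
    r = np.asarray(r, float)
    e = np.asarray(e_extraretinal, float)
    return np.linalg.norm(r) * e / np.linalg.norm(e)

def perceived_velocity(r, e_extraretinal):
    """Compensated percept: retinal velocity plus estimated eye velocity."""
    return np.asarray(r, float) + estimated_eye_velocity(r, e_extraretinal)

# Rightward pursuit at 10 deg/s (hypothetical extraretinal estimate):
e = np.array([10.0, 0.0])
# A stimulus stationary in space (r = -e) is perceived as stationary:
print(perceived_velocity([-10.0, 0.0], e))   # [0. 0.]
# A stimulus moving backward and upward gives |r| > |e|, so this naive
# rule overestimates eye speed and overcompensates:
print(perceived_velocity([-10.0, 20.0], e))  # approx. [12.36 20.]
```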
Supplementary Materials
Figure S1. Example showing how two stimuli moving in the same direction but with different speeds have different directions on the retina. The eye velocity is shown in green, two spatial velocities in black, and their corresponding retinal velocities in blue (the solid black vector corresponds to the solid blue one, and the dashed black vector to the dashed blue one). The two spatial velocities have the same direction but different speeds; after subtracting the eye velocity, the corresponding retinal velocities have different directions. 
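As a numerical companion to Figure S1 (values ours, chosen only for illustration), subtracting the same eye velocity from two spatial velocities that share a direction but differ in speed yields retinal velocities with different directions:

```python
import numpy as np

e = np.array([10.0, 0.0])            # eye velocity: rightward, 10 deg/s
for v in (np.array([5.0, 5.0]),      # same spatial direction (45 deg)...
          np.array([20.0, 20.0])):   # ...but different speeds
    r = v - e                        # retinal velocity (Equation 1)
    angle = float(np.degrees(np.arctan2(r[1], r[0])))
    print(r, round(angle, 1))
# [-5.  5.] 135.0   and   [10. 20.] 63.4:
# equal spatial directions, unequal retinal directions.
```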
Figure S2. (A) The distribution of horizontal and vertical components of eye velocity, assuming rightward pursuit, during the stimulus phase, for all subjects (in °/s). Trials corresponding to light blue points were eliminated from the analysis. This smooth eye movement follows the direction and roughly the speed of the pursuit target, even though the target had actually changed direction and speed during this phase of the trial. (B) The distribution of horizontal and vertical components of stimulus velocity on the retina for all subjects during the stimulus phase, in °/s. The distribution of horizontal retinal speeds is somewhat spread out, owing to the variability in horizontal eye speed. The vertical retinal speeds are clustered because the vertical eye speed was close to zero and less variable. 
Figure S3. Control data and analysis. Each black dot represents a stimulus velocity condition in °/s. The corresponding red dot represents the mean response, averaged over 4 subjects. The green dot represents the results of our polynomial fit to these data. All velocities were folded about a vertical axis. 
Figure S4. Perceived motion direction for each stimulus velocity condition. The green arrows indicate mean eye velocity, the red arrows the motion of the target in space, the blue arrows the mean motion of the target on the retina, and the black lines the mean direction of perceived target motion. The black segments on the bottom left corner indicate the scale. Data are averaged over the four subjects, and trials with leftward eye movement are reflected about a vertical axis before averaging; each diagram represents data from about 50 trials, with standard errors on the responses substantially smaller than the differences between the directions of motion on the screen and on the retina. In trials where target motion is backward, in the opposite direction from that of eye movement, subjects tend to perceive motion in space—i.e., motion on the retina compensated for eye movement. In trials where target motion is forward, on the other hand, subjects tend to perceive uncompensated, retinal motion. 
Figure S5. Main experiment data, zeroth-order fit coefficient, and zeroth-order model predictions, with all subjects pooled together. Meaning of the columns: SCR, screen; EYE, eye; RET, retinal; RESP, response; (x, y), mean Cartesian components of the given vector; a, mean angle in degrees; (sd), standard deviation; (se), standard error calculated using the bootstrap. DIFF scr/ret tests the difference between the angle on the screen and the angle on the retina (t-test, Bonferroni corrected); a red star indicates that the difference is significantly different from zero. KAPPA is κ estimated by fitting the zeroth-order model (Equation 3) to the data by category, and RESPpred is the response angle predicted by the zeroth-order model. 
Figure S6. The values of the zeroth-order model gain, κ, as a function of the x-component of the retinal velocity of the stimulus. Each curve plots data for different y-components of the spatial velocity of the stimulus. Results for all subjects pooled together. The error bars represent standard error, calculated using a bootstrap. To perform the bootstrap analysis, we generated 500 random, independent bootstrap resamples of the data, by randomly selecting, with replacement, the same number of trials as in the original data set. For each bootstrap resample, we calculated κ (Equation 3), with the standard deviation of the bootstrap κ providing an estimate for the standard error of the mean κ. 
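For concreteness, the bootstrap procedure just described can be sketched as follows (Python; `fit_kappa` is a placeholder for the zeroth-order fit of Equation 3, which is not reproduced in this excerpt):

```python
import numpy as np

def bootstrap_se(trials, fit_kappa, n_resamples=500, seed=0):
    """Standard error of kappa by the bootstrap, as described in the
    captions of Figures 5 and S6: draw n_resamples resamples of the same
    size as the data, with replacement; refit kappa on each; and report
    the standard deviation of the resampled estimates."""
    rng = np.random.default_rng(seed)
    trials = np.asarray(trials)
    n = len(trials)
    kappas = [fit_kappa(trials[rng.integers(0, n, size=n)])
              for _ in range(n_resamples)]
    return np.std(kappas)
```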
Acknowledgments
We wish to thank Francis Colas, Jacques Droulez, and Tom Freeman for useful suggestions. This work was supported by the European integrated project BACS. 
Commercial relationships: none. 
Corresponding author: Camille Morvan. 
Email: camille.morvan@gmail.com. 
Address: Maloney lab–2nd floor, office 275, Department of Psychology & Center for Neural Science, New York University, 6 Washington Place, New York, NY 10003, USA. 
References
Aubert, H. (1887). Die Bewegungsempfindung. Pflügers Archiv, 40, 459–480.
Barnes, G. R., & Wells, S. G. (1999). Modelling prediction in ocular pursuit: The importance of short-term storage. In T. Mergner, W. Becker, & H. Deubel (Eds.), Current oculomotor research: Physiological and psychological aspects (pp. 97–107). New York: Plenum Press.
Becker, W., & Fuchs, A. F. (1985). Prediction in the oculomotor system: Smooth pursuit during transient disappearance of a visual target. Experimental Brain Research, 57, 562–575.
Becklen, R., Wallach, H., & Nitzberg, D. (1984). A limitation of position constancy. Journal of Experimental Psychology: Human Perception and Performance, 10, 713–723.
Blohm, G., Missal, M., & Lefèvre, P. (2005). Processing of retinal and extraretinal signals for memory-guided saccades during smooth pursuit. Journal of Neurophysiology, 93, 1510–1522.
Brenner, E., & van den Berg, A. V. (1994). Judging object velocity during smooth pursuit eye movements. Experimental Brain Research, 99, 316–324.
De Graaf, B., & Wertheim, A. H. (1988). The perception of object motion during smooth pursuit eye movements: Adjacency is not a factor contributing to the Filehne illusion. Vision Research, 28, 497–502.
Duncker, K. (1929). Über induzierte Bewegung. Psychologische Forschung, 12, 180–259.
Efron, B., & Tibshirani, R. J. (1994). An introduction to the bootstrap. New York: Chapman & Hall/CRC.
Ernst, M. O., & Bülthoff, H. H. (2004). Merging the senses into a robust percept. Trends in Cognitive Sciences, 8, 162–169.
Festinger, L., Sedgwick, H. A., & Holtzman, J. D. (1976). Visual perception during smooth pursuit eye movements. Vision Research, 16, 1377–1386.
Filehne, W. (1922). Über das optische Wahrnehmen von Bewegungen. Zeitschrift für Sinnesphysiologie, 53, 134–145.
Fleischl, E. V. (1882). Physiologisch-optische Notizen, 2. Mitteilung. Sitzung Wiener Bereich der Akademie der Wissenschaften, 3, 7–25.
Freeman, T. C., & Banks, M. S. (1998). Perceived head-centric speed is affected by both extra-retinal and retinal errors. Vision Research, 38, 941–945.
Gibson, J. J. (1966). The senses considered as perceptual systems. London: George Allen and Unwin.
Gogel, W. C. (1974). Relative motion and the adjacency principle. Quarterly Journal of Experimental Psychology, 26, 425–437.
Haarmeier, T., & Thier, P. (1996). Modification of the Filehne illusion by conditioning visual stimuli. Vision Research, 36, 741–750.
Johansson, G. (1950). Configurations in event perception. Uppsala, Sweden: Almqvist and Wiksell.
Krukowski, A. E., Pirog, K. A., Beutter, B. R., Brooks, K. R., & Stone, L. S. (2003). Human discrimination of visual direction of motion with and without smooth pursuit eye movements. Journal of Vision, 3(11):16, 831–840, http://journalofvision.org/3/11/16/, doi:10.1167/3.11.16.
Landy, M. S., Maloney, L. T., Johnston, E. B., & Young, M. (1995). Measurement and modeling of depth cue combination: In defense of weak fusion. Vision Research, 35, 389–412.
Mack, A., & Herman, E. (1973). Position constancy during pursuit eye movement: An investigation of the Filehne illusion. Quarterly Journal of Experimental Psychology, 25, 71–84.
Mack, A., & Herman, E. (1978). The loss of position constancy during pursuit eye movements. Vision Research, 18, 55–62.
Mitrani, L., & Dimitrov, G. (1978). Pursuit eye movements of a disappearing moving target. Vision Research, 18, 537–539.
Morvan, C., & Wexler, M. (2005). Reference frames in early motion detection. Journal of Vision, 5(2):4, 131–138, http://journalofvision.org/5/2/4/, doi:10.1167/5.2.4.
Murakami, I., & Cavanagh, P. (1998). A jitter after-effect reveals motion-based stabilization of vision. Nature, 395, 798–801.
Niemeier, M., Crawford, J. D., & Tweed, D. B. (2007). Optimal inference explains dimension-specific contractions of spatial perception. Experimental Brain Research, 179, 313–323.
Post, R. B., & Chaderjian, M. (1987). Perceived path of oblique motion: Horizontal–vertical and stimulus-orientation effects. Perception, 16, 23–28.
Poulton, E. C. (1973). Unwanted range effects from using within-subject experimental designs. Psychological Bulletin, 80, 113–121.
Schwartz, J. D., & Lisberger, S. G. (1994). Initial tracking conditions modulate the gain of visuo-motor transmission for smooth pursuit eye movements in monkeys. Visual Neuroscience, 11, 411–424.
Souman, J. L., Hooge, I. T., & Wertheim, A. H. (2005). Perceived motion direction during smooth pursuit eye movements. Experimental Brain Research, 164, 376–386.
Sperry, R. W. (1950). Neural basis of the spontaneous optokinetic response produced by visual inversion. Journal of Comparative and Physiological Psychology, 43, 482–489.
Stoper, A. (1967). Vision during pursuit eye movements: The role of oculomotor information.
Stoper, A. (1973). Apparent motion of stimuli presented stroboscopically during pursuit eye movements. Perception & Psychophysics, 7, 201–211.
Stork, S., Neggers, S. F., & Müsseler, J. (2002). Intentionally-evoked modulations of smooth pursuit eye movements. Human Movement Science, 21, 335–348.
Swanston, M. T., & Wade, N. J. (1988). The perception of visual motion during movements of the eyes and of the head. Perception & Psychophysics, 43, 559–566.
Turano, K. A., & Heidenreich, S. M. (1999). Eye movements affect the perceived speed of visual motion. Vision Research, 39, 1177–1187.
van Opstal, A. J., & van Gisbergen, J. A. (1989). Scatter in the metrics of saccades and properties of the collicular motor map. Vision Research, 29, 1183–1196.
von Helmholtz, H. (1867). Handbuch der physiologischen Optik. Hamburg, Germany: Voss.
von Holst, E., & Mittelstaedt, H. (1950). Das Reafferenzprinzip. Naturwissenschaften, 37, 464–476.
Wallach, H. (1959). The perception of motion. Scientific American, 201, 56–60.
Wallach, H., Becklen, R., & Nitzberg, D. (1985). The perception of motion during colinear eye movements. Perception & Psychophysics, 38, 18–22.
Figure 2
 
Average eye position projected onto the direction of pursuit target motion—during the stimulus phase (in degrees as a function of time in seconds), during the 100 ms preceding the stimulus (when the pursuit target was present), and during the subsequent 100 ms of the stimulus phase (when the pursuit target was absent). The black line shows the mean trajectory (for all trials in all four subjects), the gray bars show the standard deviation, and the red line shows the trajectory of the pursuit target when it was present (full) and when it was absent (dashed). Trials in which pursuit was to the left were flipped before averaging. Trajectories were aligned in space and time so that the origin corresponded to stimulus onset (or pursuit target disappearance).
Figure 3
 
Examples of perceived motion direction in several conditions (Vy = 40 and Vx = −60, −20, 20, and 60, from left to right). The green arrows indicate mean eye velocity, the red arrows indicate the motion of the target in space, the blue arrows indicate the mean motion of the target on the retina, and the black lines indicate the mean direction of perceived target motion. The black segments on the bottom left corner indicate the scale. Data are averaged for all four subjects, and trials with leftward eye movement are reflected about a vertical axis before averaging; each diagram represents data from about 50 trials, with standard errors on the responses substantially smaller than the differences between the directions of motion on the screen and on the retina.
Figure 4
 
Response angles from the main and control experiments, and ideal data. All angles are given with respect to the direction of the pursuit target. The responses obtained in the main experiment (left graph) are plotted against retinal angle. The red diagonal represents retinal, uncompensated responses (κ = 0). The middle graph shows ideal responses for κ = 0 (in blue) and κ = 1 (in green) plotted against the retinal angle. The right graph shows control data as a function of retinal and spatial angles, as those are confounded in the control case.
Figure 5
 
The values of the zeroth-order model gain, κ, as a function of the x-component of the retinal velocity of the stimulus, and the corresponding second-order fit. Each set of points plots data for a different y-component of the spatial velocity of the stimulus, and each curve plots the result of the second-order fit. The top graph shows results for all subjects pooled together, and the four bottom graphs display results for each individual subject. The error bars represent standard error, calculated using a bootstrap. To perform the bootstrap analysis, we generated 500 random, independent bootstrap resamples of the data, by randomly selecting, with replacement, the same number of trials as in the original data set. For each bootstrap resample, we calculated κ (Equation 3), with the standard deviation of the bootstrap κ providing an estimate for the standard error of the mean κ. The fits were performed using least mean squares; we used Equation 4 for the second-order fit.
Figure 6
 
The values of the linear model gain, κ, as a function of the x- and y-components of the retinal velocity of the stimulus (rx, ry). Each surface plots the κ values fitted for each combination of (rx, ry). Inset curves represent averages over values of ry, showing κ as a function of rx. Results for all subjects pooled together and for separate subjects.
Figure 7
 
Compensation gain κ as a function of retinal velocity r: prediction of model M1 (Equation 7) applied to our actual conditions. Compare to Figure 6.
Figure 8
 
Compensation gain κ as a function of retinal velocity r: prediction of model M2 (Equation 8) applied to our actual conditions. Compare to Figure 6.
Table 1
 
Dependence of κ on the retinal velocity: the fitted values of the polynomial coefficients (see Equation 4), as well as R² values for the polynomial fit. The colors code the results of the bootstrap test (with 1000 bootstrap resamples in each case) for significant difference from zero: pink means p < 0.05, red means p < 0.01. Nonzero coefficients, other than the constant, indicate deviation from the zeroth-order model.
Subj. | R²   | Const. | rx      | ry      | rx²     | rx·ry     | ry²
CB    | 0.92 | 0.28   | −0.018  |  0.016  | 0.00028 |  0.00022  | −0.000025
CM    | 0.78 | 0.48   |  0.0014 | −0.0093 | 0.00031 | −0.000018 | −0.000017
MV    | 0.76 | 0.75   | −0.0045 | −0.029  | 0.00036 | −0.00013  |  0.00046
MW    | 0.93 | 0.33   | −0.012  | −0.010  | 0.00025 |  0.00022  |  0.00014
All   | 0.97 | 0.40   | −0.0087 | −0.0045 | 0.00033 |  0.000067 |  0.000022
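Assuming Equation 4 is the full second-order polynomial whose terms the column headers name, the fitted gain surface can be evaluated directly from the table; the sketch below (function name ours) uses the pooled-subject coefficients from the "All" row.

```python
def kappa_all(rx, ry):
    """Second-order fit of the compensation gain (Equation 4), using
    the pooled-subject coefficients from the 'All' row of Table 1.
    rx, ry: retinal velocity components in deg/s."""
    return (0.40
            - 0.0087 * rx
            - 0.0045 * ry
            + 0.00033 * rx**2
            + 0.000067 * rx * ry
            + 0.000022 * ry**2)

# e.g. backward retinal motion at rx = -40 deg/s, ry = 0:
print(round(kappa_all(-40.0, 0.0), 2))  # 1.28: strong compensation
print(round(kappa_all(0.0, 0.0), 2))    # 0.40: the constant term
```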