Research Article | November 2010

Does the noise matter? Effects of different kinematogram types on smooth pursuit eye movements and perception

Alexander C. Schütz, Doris I. Braun, J. Anthony Movshon, Karl R. Gegenfurtner

Journal of Vision 2010;10(13):26. https://doi.org/10.1167/10.13.26
Abstract

We investigated how the human visual system and the pursuit system react to visual motion noise. We presented three different types of random-dot kinematograms at five different coherence levels. For transparent motion, the signal and noise labels on each dot were preserved throughout each trial, and noise dots moved with the same speed as the signal dots but in fixed random directions. For white noise motion, every 20 ms the signal and noise labels were randomly assigned to each dot and noise dots appeared at random positions. For Brownian motion, signal and noise labels were also randomly assigned, but the noise dots moved at the signal speed in a direction that varied randomly from moment to moment. Neither pursuit latency nor early eye acceleration differed among the different types of kinematograms. Late acceleration, pursuit gain, and perceived speed all depended on kinematogram type, with good agreement between pursuit gain and perceived speed. For transparent motion, pursuit gain and perceived speed were independent of coherence level. For white and Brownian motions, pursuit gain and perceived speed increased with coherence but were higher for white than for Brownian motion. This suggests that under our conditions, the pursuit system integrates across all directions of motion but not across all speeds.

Introduction
Primates use smooth pursuit eye movements to stabilize the image of a moving object of interest on the retina. Depending on viewing distance and the object's size, the extent of its retinal image can vary widely. Depending on the object's background and the presence of additional moving objects in the vicinity, there might be many different motion signals on the retina. Because of this heterogeneity, the pursuit system must be flexible in the spatial scale of target selection, segmenting and integrating motion information according to the demands of the moment (for review, see Braddick, 1993). Consider segmentation: pursuit eye movements can track small objects across a textured stationary background, although with a reduced initial acceleration (Keller & Khan, 1986; Kimmig, Miles, & Schwarz, 1992; Niemann & Hoffmann, 1997). The pursuit system can also select and pursue one of several moving targets (Ferrera & Lisberger, 1997), although the initial pursuit response is biased by the distracter targets (Lisberger & Ferrera, 1997; Masson & Stone, 2002; Spering, Gegenfurtner, & Kerzel, 2006; Wallace, Stone, & Masson, 2005). Consider integration: tracking performance improves with the extent of moving random-dot stimuli (Heinen & Watamaniuk, 1998; Watamaniuk & Heinen, 1999). This spatial integration is clearly advantageous for motion analysis but might also impair performance by integrating motion signals from irrelevant context stimuli (see Spering & Gegenfurtner, 2008 for a review). In most of these studies, the pursuit target and the distracters were separate objects (Lisberger & Ferrera, 1997; Spering et al., 2006), sometimes even spatially displaced (Miura, Kobayashi, & Kawano, 2009; Spering & Gegenfurtner, 2007a, 2007b).
The different phases of smooth pursuit eye movements are useful behavioral tools for studying motion perception over time. Cortical motion analysis starts with direction-selective neurons in the primary visual cortex. Local velocity measurements can differ depending on the contour orientation of the moving object and on the receptive field size and spatial position of the motion detector. Because of their small receptive fields, V1 neurons have a limited view of a moving object and encode only the component of motion perpendicular to a contour, which may differ from the true motion of the whole object. At the next stage of visual motion analysis, area MT in the superior temporal sulcus, which has larger receptive fields, is a good candidate site for motion integration (Lisberger & Movshon, 1999). Pursuit initiation is tightly linked with the activity and properties of direction-selective cells in area MT (Lisberger & Movshon, 1999), and lesions in MT impair smooth pursuit initiation for stimuli presented in the corresponding visual field (Newsome, Wurtz, Dursteler, & Mikami, 1985). The control of pursuit during the steady state, when the moving target is stabilized on the retina, cannot be accounted for by MT neurons, because they are silent in the absence of retinal motion (Newsome, Wurtz, & Komatsu, 1988). Target motion, irrespective of retinal stabilization, is represented in area MST (Ilg, Schumann, & Thier, 2004; Inaba, Shinomoto, Yamane, Takemura, & Kawano, 2007; Ono & Mustari, 2006).
The question of how the human visual system pools motion signals across different speeds, directions, and spatial positions has received some attention. This pooling process seems to be quite flexible. For instance, it has been shown that the balance between spatial segmentation and integration can be adapted to meet the task requirements for perception (Burr, Baldassi, Morrone, & Verghese, 2009) and that form cues can contribute to that selection process (Maruya, Amano, & Nishida, 2010). Contextual motion changes the perceived direction of bistable motion (Baker & Graf, 2010). Furthermore, several solutions to resolve global motion seem to exist in parallel and compete against each other (Bowns & Alais, 2006). Interestingly, mechanism-based decoding algorithms provide a better explanation of perception than simple stimulus-based statistics (Webb, Ledgeway, & McGraw, 2007). 
Random-dot displays can contain varying strengths of motion signal and mix signal and noise in the same part of visual space. Because of these advantages, random-dot stimuli are often used in psychophysical and neurophysiological studies. Here we investigated how pursuit is affected by motion noise. We used three different forms of noise (Pilly & Seitz, 2009; Scase, Braddick, & Raymond, 1996) to explore the way that the pursuit system integrates direction and speed information and to investigate the ability of the smooth pursuit system to integrate signal and discard noise. In addition, we measured the perceived speed for the kinematogram types to relate the pursuit results to motion perception. 
Methods
Subjects
The authors ACS and DIB and three naive subjects participated in these experiments. The naive subjects were students of the Justus Liebig University and were paid for participation. All subjects were experienced with eye movement experiments. 
Equipment
Subjects were seated in a dimly lit room facing a 21-inch SONY GDM-F520 CRT monitor driven by an Nvidia Quadro NVS 290 graphics board with a refresh rate of 100 Hz non-interlaced. At a viewing distance of 47 cm, the active screen area subtended 45 deg horizontally and 36 deg vertically on the subject's retina. With a spatial resolution of 1280 × 1024 pixels, this results in 28 pixels/deg. The subject's head was fixed in place using a chin rest and the display was viewed binocularly.
Eye movement recording and analysis
Eye position signals were recorded with a head-mounted, video-based eye tracker (EyeLink 1000; SR Research, Osgoode, Ontario, Canada) and were sampled at 2000 Hz. Stimulus display and data collection were controlled by a PC. By digital differentiation of eye position signals over time, we obtained eye velocity signals. The eye position and velocity signals were filtered by a Butterworth filter with cut-off frequencies of 30 and 20 Hz, respectively. 
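As an illustration, this filtering and differentiation step might look as follows in Python (a minimal sketch, not the authors' code; the filter order and the use of zero-phase filtering are assumptions, since the paper reports only the sampling rate and the cut-off frequencies):

```python
# Minimal sketch: eye velocity from eye position sampled at 2000 Hz, with
# Butterworth low-pass filtering at 30 Hz (position) and 20 Hz (velocity).
# Filter order and zero-phase filtering are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 2000.0  # sampling rate (Hz)

def butter_lowpass(trace, cutoff_hz, order=2):
    """Zero-phase Butterworth low-pass filter."""
    b, a = butter(order, cutoff_hz / (FS / 2.0))
    return filtfilt(b, a, trace)

def eye_velocity(position_deg):
    """Eye position (deg) -> filtered eye velocity (deg/s)."""
    pos = butter_lowpass(position_deg, cutoff_hz=30.0)  # position at 30 Hz
    vel = np.gradient(pos) * FS                         # central differences
    return butter_lowpass(vel, cutoff_hz=20.0)          # velocity at 20 Hz
```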
We used the EyeLink saccade detection algorithm to set saccade onsets and offsets for offline analysis. This algorithm uses a velocity threshold of 22 deg/s, to which the average velocity over the last 40 ms is added (often negligible before the first catch-up saccade), and an acceleration threshold of 3,800 deg/s². Saccades were removed from the velocity traces by linear interpolation. We calculated the latency of saccades and the frequency of forward and backward saccades per trial.
To determine smooth pursuit onset, we used a two-step procedure: first, we determined the onset of the average velocity traces and, second, the onset of the individual velocity traces. To determine the onset of the average traces, we aligned all velocity traces to motion onset and averaged them separately for each coherence level. Then, for each average trace, we fitted regression lines over 80-ms windows starting at every sample between 0 and 500 ms after motion onset. The best fitting regression line with a slope between 10 and 200 deg/s² was selected, and its intersection with the x-axis defined the onset of the average trace (Schütz, Braun, & Gegenfurtner, 2007). To determine the latency of individual traces, we shifted each individual trace along the time axis to minimize its deviation from the corresponding average trace (Osborne, Lisberger, & Bialek, 2005). The onset of an individual trace was obtained by adding this shift to the onset of the average trace. In the final step, all traces were aligned to their individual pursuit onset. We included only trials with a pursuit latency between 50 and 250 ms (85% of trials).
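The onset-detection step for an average trace can be sketched as follows (a minimal illustration, not the authors' code; "best fitting" is implemented here as the smallest sum of squared residuals, which the paper does not specify):

```python
# Minimal sketch of the regression-line procedure for the onset of an
# average velocity trace (2000-Hz sampling assumed).
import numpy as np

FS = 2000.0                 # sampling rate (Hz)
WIN = int(0.080 * FS)       # 80-ms regression window

def average_trace_onset(mean_velocity):
    """Return pursuit onset (s relative to motion onset) of an average trace."""
    best = None
    for start in range(int(0.5 * FS)):                # windows from 0 to 500 ms
        v = mean_velocity[start:start + WIN]
        if len(v) < WIN:
            break
        t = np.arange(start, start + WIN) / FS
        slope, intercept = np.polyfit(t, v, 1)
        if not (10.0 <= slope <= 200.0):              # accept 10-200 deg/s^2 only
            continue
        resid = np.sum((v - (slope * t + intercept)) ** 2)
        if best is None or resid < best[0]:
            best = (resid, -intercept / slope)        # x-axis crossing = onset
    return None if best is None else best[1]
```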
We analyzed eye acceleration separately in three time bins relative to pursuit onset: 0–50 ms, 50–100 ms, and 100–150 ms. Eye acceleration was signed with respect to the direction of the eye movement, i.e., acceleration opposite to the target direction was counted as negative. We calculated steady-state pursuit gain as the ratio of the average eye velocity 500 to 600 ms after pursuit onset to the signal speed.
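For concreteness, the acceleration bins and the steady-state gain could be computed along these lines (a sketch assuming the 2000-Hz sampling rate; the function and variable names are ours):

```python
# Minimal sketch: signed acceleration in the three analysis bins and the
# steady-state gain, from a velocity trace aligned to pursuit onset
# (index 0 = pursuit onset).
import numpy as np

FS = 2000.0

def acceleration_and_gain(velocity, direction, signal_speed=10.0):
    """direction is +1 or -1 for rightward/leftward target motion."""
    accel = np.gradient(velocity) * FS * direction        # deg/s^2, signed
    bins = [(0.000, 0.050), (0.050, 0.100), (0.100, 0.150)]
    accel_bins = [accel[int(t0 * FS):int(t1 * FS)].mean() for t0, t1 in bins]
    gain = (velocity[int(0.5 * FS):int(0.6 * FS)].mean() * direction
            / signal_speed)                               # 500-600 ms window
    return accel_bins, gain
```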
Visual stimuli
All stimuli were presented on a black background with a luminance of 0.04 cd/m². Our random-dot kinematograms appeared within a circular aperture of 10 deg radius. Individual dots were displayed in white (87 cd/m²) and had a size of 0.14 × 0.14 deg. If not otherwise stated, the dot density was 2 dots/deg² and the signal motion speed was 10 deg/s. The coherence was varied over five levels (20, 40, 60, 80, and 100%), corresponding to five vector average speeds (2, 4, 6, 8, and 10 deg/s). We tested three different kinematogram types (Figures 1 and 2). These differed in the way that noise was added; at 100% coherence, all three kinematogram types were identical (Scase et al., 1996).
Figure 1. Movie of the three kinematogram types at different coherence levels.
Figure 2. Space–time plots for the three kinematogram types (rows) at two coherence levels (columns). Space represents one horizontal line of pixels in the display. Stimulus dots are plotted in white; the dashed yellow line represents the signal speed; the dashed cyan line represents the vector average speed; and the dashed magenta line represents the measured pursuit speed. (A, B) Transparent motion. (C, D) White motion. (E, F) Brownian motion. (A, C, E) Eighty percent coherence. (B, D, F) Twenty percent coherence.
Transparent motion
In this condition, signal and noise dots formed two distinct populations. Dot lifetime was 200 ms for both signal and noise dots. Signal dots moved in the signal direction and were randomly repositioned at the end of their lifetime. Noise dots were assigned a random direction and kept this direction until the end of their lifetime. When the lifetime of a noise dot ended, it was repositioned and assigned a new random motion direction. We call this stimulus transparent motion because subjects usually perceive it as two superimposed transparent surfaces (Snowden & Verstraten, 1999). 
White motion
Here signal and noise labels were randomly reassigned to all dots every 20 ms. Noise dots moved every second frame with a random speed in a random direction. Signal dots moved every frame with the signal speed and direction. We reduced the dot density to 1 dot/deg² because the apparent dot density was higher in this condition, especially at low coherence levels. We call this condition white motion because all noise directions and speeds are equally likely. It is a variation on the early random-dot stimulus introduced by Morgan and Ward (1980) and has been used extensively to study the properties of motion-sensitive neurons (Britten, Newsome, Shadlen, Celebrini, & Movshon, 1996; Britten, Shadlen, Newsome, & Movshon, 1992, 1993; Salzman, Murasugi, Britten, & Newsome, 1992).
Brownian motion
Here signal and noise labels were also randomly reassigned to all dots, in this case every frame. Signal dots moved every frame with the signal speed and direction. Noise dots moved every frame with the signal speed in a random direction. In contrast to the white motion, this stimulus contains only one speed, and the noise is purely directional.
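The essential difference between the white and Brownian noise schemes can be illustrated with a single frame update (a simplified sketch, not the stimulus code used in the experiments; the frame rate follows the Methods, but the random-speed range for white noise and the per-update label reassignment are our assumptions, and the fixed labels and 200-ms lifetimes needed for transparent motion are omitted):

```python
# Simplified sketch: one position update for the white and Brownian schemes.
import numpy as np

DT = 0.01                               # 100-Hz frame duration (s)
SIGNAL_V = np.array([10.0, 0.0])        # 10 deg/s rightward signal

def step(xy, kind, coherence, rng):
    """Advance dot positions xy (N x 2, in deg) by one frame."""
    n = len(xy)
    signal = rng.random(n) < coherence / 100.0      # relabel signal vs. noise
    theta = rng.uniform(0.0, 2.0 * np.pi, n)        # random noise directions
    unit = np.column_stack((np.cos(theta), np.sin(theta)))
    if kind == "brownian":
        noise_v = 10.0 * unit                                 # signal speed, random direction
    elif kind == "white":
        noise_v = rng.uniform(0.0, 20.0, n)[:, None] * unit   # random speed and direction
    else:
        raise ValueError("transparent motion needs per-dot state (not shown)")
    v = np.where(signal[:, None], SIGNAL_V, noise_v)
    return xy + v * DT
```

Here `rng` would be a `numpy.random.default_rng()` instance; transparent motion would additionally require storing each dot's label, noise direction, and 200-ms lifetime across frames, which this sketch omits.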
Experimental procedure
At the beginning of each trial, a white bull's-eye with an outer radius of 0.3 deg and an inner radius of 0.075 deg appeared at the screen center. The subjects had to fixate the bull's-eye and press a button to start the trial, at which time the EyeLink 1000 system performed a fixation check. If the fixation check succeeded, the initial bull's-eye disappeared and the random-dot kinematogram appeared. Motion started as soon as the dots appeared. The coherent motion was always horizontal, randomly leftward or rightward on each trial. The random-dot kinematogram was presented for 1,000 ms. The subject was asked to track the motion of the stimulus. Each subject performed at least 1,600 trials in total.
Theoretical speed prediction
We assume that the vector average gain ($G_{VA}$) is computed by a weighted sum of the signal speed ($S_S$) and the noise speed ($S_N$). The relative weight is determined by the coherence $C$ (in percent):
$$G_{VA} = \frac{S_S \cdot C + S_N \cdot (100 - C)}{S_S \cdot \left(C + (100 - C)\right)}. \tag{1}$$
In all our stimuli, the noise was balanced across all directions. Hence, the combination of all noise vectors leads to a noise speed ($S_N$) of zero. As a result, the noise term vanishes and the signal speed cancels between numerator and denominator, leaving:
$$G_{VA} = \frac{C}{C + (100 - C)}. \tag{2}$$
For the five coherence levels of 20, 40, 60, 80, and 100% and a signal speed of 10 deg/s, this corresponds to vector average speeds of 2, 4, 6, 8, and 10 deg/s. 
Equation 2 describes the extreme case in which signal and noise are integrated completely. However, it might be that the signal can be partially or completely segmented from the noise, which would presumably reduce the influence of the noise. To account for that possibility, we introduced a free parameter α, which specifies the amount of effective noise:
$$G = \frac{C}{C + (100 - C) \cdot \alpha}. \tag{3}$$
An α of unity indicates full integration of noise; an α of zero indicates complete segmentation. Note the non-linear effect of α on the gain. 
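As a worked example, Equation 3 can be evaluated, and the noise efficacy fitted to measured gains, roughly as follows (a sketch with scipy; the "measured" gains are illustrative numbers, not data from the paper):

```python
# Worked example of Equation 3 and a least-squares fit of the noise efficacy.
import numpy as np
from scipy.optimize import curve_fit

def predicted_gain(coherence, alpha):
    """Equation 3: gain from coherence (in %) and noise efficacy alpha."""
    return coherence / (coherence + (100.0 - coherence) * alpha)

coherences = np.array([20.0, 40.0, 60.0, 80.0, 100.0])

print(predicted_gain(coherences, 1.0))   # full integration -> vector average (0.2 ... 1.0)
print(predicted_gain(coherences, 0.0))   # complete segmentation -> gain of 1 throughout

# Fitting alpha to hypothetical measured gains (illustrative values only):
measured = np.array([0.52, 0.72, 0.85, 0.93, 0.97])
alpha_hat, _ = curve_fit(predicted_gain, coherences, measured,
                         p0=[0.5], bounds=(0.0, 1.0))
```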
Statistical analysis
To test the effect of coherence and kinematogram type on the eye movements, we calculated repeated-measures analyses of variance (ANOVAs). We did not include the 100% coherence condition, because there were no differences between the kinematogram types in this condition.
Control experiment with varying speed at 100% coherence
Here we measured smooth pursuit responses at 100% coherence with five different speeds (2, 4, 6, 8, and 10 deg/s). These five speeds correspond to the vector average speeds of the five coherence levels in the other experimental conditions. With these measurements, we wanted to estimate which effects in the main experiment were caused by the reduction of vector average speed rather than by the reduction of coherence per se. As the three kinematogram types did not differ at 100% coherence, we tested only one motion condition. Each subject performed at least 400 trials in this experiment.
Control experiment to measure perceived speed
Here we measured the perceived speed of coherent motion at five coherence levels (20, 40, 60, 80, and 100%) during central fixation. We presented two different stimuli in two intervals and asked subjects to judge which interval contained the faster motion. In the standard interval, coherent motion with a speed of 10 deg/s was presented at one of the five coherence levels. In the test interval, 100% coherent motion was presented with a speed that was adjusted by an adaptive staircase procedure (Levitt, 1971). The duration of each interval was randomized between 400 ms and 500 ms to make the distance traveled by the dots uninformative about stimulus speed. The order of standard and test intervals was randomized, but the motion direction was always the same in both intervals. To facilitate fixation in the stimulus center, we presented a red bull's-eye throughout the trial. The bull's-eye was surrounded by an annulus of 0.9 deg diameter in which no motion dots appeared. Additionally, we reduced the dot density to 1 dot/deg² for all kinematogram types. Each subject performed at least 1,600 trials in total.
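One possible adaptive rule for the test-interval speed is sketched below (the paper cites Levitt, 1971, but does not report the specific rule or step size, so both are assumptions here):

```python
# Sketch of a simple 1-up/1-down staircase converging on the point of
# subjective equality; rule and step size are assumptions.
def next_test_speed(current, test_judged_faster, step=0.5, lo=0.5, hi=20.0):
    """Lower the test speed after 'test faster' responses, raise it otherwise."""
    new = current - step if test_judged_faster else current + step
    return min(max(new, lo), hi)
```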
Control experiment at zero coherence
Here we investigated the ability to maintain smooth pursuit when the coherence level fell to zero after an initial period of coherent motion. We presented five different levels of coherent left- or rightward motion (20, 40, 60, 80, and 100%) for four different durations (50, 70, 100, and 150 ms). Then motion coherence was set to 0% and the trial ended after 1 s. Subjects were asked to pursue the motion and to indicate the perceived direction of the initial motion via button press after each trial. Three of the five subjects participated in this experiment and performed 6,400 trials each. We only analyzed trials with correct psychophysical judgments (92%). As it was difficult to determine the pursuit onset reliably in some conditions, we aligned eye movement traces to motion onset and measured pursuit gain from 600 to 700 ms after motion onset. We analyzed three additional parameters to quantify the decay of the eye movements: (i) the peak eye velocity from 0 to 400 ms after motion onset, (ii) the time this peak velocity was reached, and (iii) the exponential decay of the eye velocity in a time interval of 0 to 500 ms from the time of peak velocity using  
$$v(t) = a \cdot e^{-t/b} + c, \tag{4}$$
where a, b, and c are fitted free parameters, t denotes the time from peak velocity, and v is the eye velocity. Only trials with a decay in the fit ( a > 0) and a time constant b smaller than 500 ms were included in this analysis (62% for white and 51% for Brownian motion). 
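The decay fit of Equation 4 could be implemented as follows (a sketch using scipy, not the authors' code; the initial parameter guesses are assumptions):

```python
# Sketch: fit the exponential decay of Equation 4 to the 500 ms of eye
# velocity following the peak, and apply the inclusion rule from the text.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, b, c):
    """Equation 4: v(t) = a * exp(-t / b) + c, with t in seconds."""
    return a * np.exp(-t / b) + c

def fit_decay(velocity_after_peak, fs=2000.0):
    """Return (a, b, c) or None if the trial does not meet the inclusion rule."""
    t = np.arange(len(velocity_after_peak)) / fs
    try:
        (a, b, c), _ = curve_fit(decay, t, velocity_after_peak,
                                 p0=[velocity_after_peak[0], 0.1, 0.0],
                                 maxfev=5000)
    except RuntimeError:
        return None
    # include only fits with a decay (a > 0) and a time constant below 500 ms
    return (a, b, c) if (a > 0.0 and b < 0.5) else None
```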
Results
Figure 2 shows space–time plots of the different types of random-dot kinematograms, with space representing one horizontal line of the stimulus and time representing the whole stimulus duration. Lines with a negative slope represent signal motion, lines with a positive slope represent noise that moves exactly opposite to the signal motion, and dots or short line segments represent noise that crosses the horizontal line at another angle. At high coherence, all kinematogram types are similar, showing long line segments with a negative slope, which represent the signal. However, they differ strongly at lower coherence. In the transparent motion, there are still a few line segments with negative slope, representing the signal distinct from the noise. In the white motion, it is hard to distinguish signal from noise because of the very short lifetimes at low coherence levels. In the Brownian motion display, the slope of the signal, which corresponds to the speed (Adelson & Bergen, 1985; Fahle & Poggio, 1981; Heeger, 1987), appears to become shallower at low coherence levels.
Another way to look at the kinematogram types is to compute the spatiotemporal frequency distributions of the stimuli. We applied a 3D Fourier analysis to horizontal and vertical space and time. In Figure 3, we show only the energy in the layer with zero vertical frequency component. Like the space–time plots, the motion energy shows no clear differences between the kinematogram types at high coherence. The signal motion is represented by large energy in a spatiotemporal frequency band, which is clearly distinguishable from other components. This is analogous to the long line segments with negative slope in the space–time plots. In the lower coherence conditions, motion energy is, in general, more widely spread. However, this is different across the three kinematogram types. Whereas the signal motion viewed in the frequency domain is still clearly distinct in the transparent motion, in the white and Brownian motions, it is no longer distinguishable by casual inspection at 20% coherence. In the white motion, motion energy is scattered across a large range of spatial and temporal frequencies. In the Brownian motion, energy seems to concentrate at low spatial and temporal frequencies, which is analogous to the reduced slope of the lines in the space–time plots.
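The spatiotemporal energy shown in Figure 3 can be approximated with a straightforward FFT of the stimulus movie (a sketch; the array layout, units, and normalization are our choices, not taken from the paper):

```python
# Sketch: 3D FFT over x, y, and time, keeping the layer with zero vertical
# frequency, as in Figure 3. Pixel pitch and frame rate follow the Methods.
import numpy as np

def spatiotemporal_energy(movie, px_per_deg=28.0, frame_rate=100.0):
    """movie: (frames, ny, nx) luminance array.
    Returns the energy slice at zero vertical frequency plus the temporal
    (cycles/s) and horizontal spatial (cycles/deg) frequency axes."""
    spectrum = np.fft.fftshift(np.fft.fftn(movie))
    energy = np.abs(spectrum) ** 2
    zero_fy = movie.shape[1] // 2                     # index of zero vertical frequency
    ft = np.fft.fftshift(np.fft.fftfreq(movie.shape[0], d=1.0 / frame_rate))
    fx = np.fft.fftshift(np.fft.fftfreq(movie.shape[2], d=1.0 / px_per_deg))
    return energy[:, zero_fy, :], ft, fx
```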
Figure 3. Spatiotemporal frequency content of the three kinematogram types (rows) at two coherence levels (columns) computed by Fourier analysis of space–time plots like those in Figure 2. Stimulus energy is plotted in white; the dashed yellow line represents the signal speed; the dashed cyan line represents the vector average speed; and the dashed magenta line represents the measured pursuit speed. (A, B) Transparent motion. (C, D) White motion. (E, F) Brownian motion. (A, C, E) Eighty percent coherence. (B, D, F) Twenty percent coherence.
 
In the following, we analyzed whether these differences affect pursuit of the motion signal and its perceived speed. Visual inspection of average eye movement traces (Figures 4 and 5) shows clearly that eye acceleration depended on the coherence level for all types of random-dot kinematograms. There were also clear differences in the steady-state pursuit gain among the three kinematogram types. In the following sections, we analyze different aspects of pursuit: latency, acceleration, gain, and catch-up saccades.
Figure 4. Velocity traces averaged across five subjects for the three kinematogram types at the (A–C) five coherence levels and (D) speeds. (A) Transparent motion. (B) White motion. (C) Brownian motion. (D) Experiment with varying speed at 100% coherence. The velocity traces are aligned to pursuit onset. The colors denote different coherence levels in (A)–(C) and different signal speeds in (D). The dashed horizontal lines mark the speed of the signal motion.
Figure 5. The same average velocity traces shown in Figure 4 for all types of kinematograms are here grouped by coherence and speed. (A) Twenty percent coherence and 2 deg/s at 100% coherence. (B) Forty percent coherence and 4 deg/s at 100% coherence. (C) Sixty percent coherence and 6 deg/s at 100% coherence. (D) One hundred percent coherence and 10 deg/s at 100% coherence. As in Figure 4, all velocity traces are aligned to pursuit onset. The colors denote the different kinematogram types. The dashed horizontal lines mark the speed of the signal motion.
 
Latency
Pursuit latency varied inversely with coherence (Figure 6). The shortest latency of around 110 ms was reached with 100% coherence and the longest latency of around 150 ms with 20% coherence. To test for statistical differences, we computed a repeated-measures ANOVA on pursuit latency with coherence level and kinematogram type as factors. We obtained a significant main effect of coherence (F(3,12) = 14.546, P = 0.003) but no significant main effect of kinematogram type (F(2,8) = 2.454, P = 0.180). The interaction between coherence and kinematogram type was also not significant (F(6,24) = 1.117, P = 0.381). This indicates that there was a general inverse relationship between coherence and latency, which was similar for all kinematogram types. Consistent with our results, it has been shown that pursuit latency is inversely related to the number of signal dots (Heinen & Watamaniuk, 1998). Because an increase in coherence always increases the number of signal dots, this also applies to our case. There was a non-significant trend for longer latencies in the white motion condition. This might be explained by the lower absolute number of signal dots in that stimulus, for which we used a lower dot density.
Figure 6. Pursuit latency as a function of coherence and signal speed. The symbols show the average across subjects; error bars denote the standard error of the mean. The different symbols and colors represent the three kinematogram types (blue) and the experiment with varying speed at 100% coherence (red).
 
There are two possible interpretations for the inverse relationship between pursuit latency and motion coherence: First, it might simply take more time to decode the target direction in stimuli with a low signal-to-noise ratio. However, this interpretation is rather unlikely because even our lowest coherence level of 20% was well above the typical psychophysical thresholds of around 6% (Scase et al., 1996). Second, latency might be inversely related to the speed of the stimulus. For instance, reaction times to motion onsets depend on perceived speed (Burr, Fiorentini, & Morrone, 1998). In the same way, pursuit latencies decrease with stimulus speed (Carl & Gellman, 1987; Fuchs, 1967; Movshon, Lisberger, & Krauzlis, 1990). Latencies of ocular following depend on temporal frequency (Miles, Kawano, & Optican, 1986) or a combination of temporal frequency and stimulus speed (Gellman, Carl, & Miles, 1990). Consistent with the idea that MT determines the initial pursuit response (Lisberger, Morris, & Tychsen, 1987), the response latency of MT neurons is also inversely related to stimulus speed (Lisberger & Movshon, 1999; Movshon et al., 1990). 
We performed a control experiment to test the idea that latency increases because of the reduced vector average speed. Here we measured, in the same subjects, smooth pursuit responses to 100% coherent motion at five different speeds (2, 4, 6, 8, and 10 deg/s), which correspond to the vector average speeds at the five coherence levels of the main experiment. In this case, we observed the same relationship between speed and latency as we did between coherence level and latency (Figure 6, red curve). This result argues in favor of our second interpretation, that the latency differences are caused by the reduction of vector average speed.
Acceleration
Initial eye acceleration was strongly influenced by coherence (Figure 7). We analyzed eye acceleration in three time intervals of 50 ms relative to pursuit onset. In the first interval (0–50 ms), acceleration increased with coherence level for all three kinematogram types, showing no clear differences among the types. In the second interval (50–100 ms), acceleration was, in general, higher than in the first interval, and the influence of coherence level was also stronger. In the third interval (100–150 ms), acceleration began to decay. The transparent motion showed less variation with coherence than the other kinematogram types, mainly because of exceptionally high accelerations at low coherence levels in the transparent motion.
Figure 7. Eye acceleration during three consecutive time intervals of 50 ms after pursuit onset. Conventions are the same as in Figure 6. (A) Time interval from 0 to 50 ms after pursuit onset. (B) Time interval from 50 to 100 ms. (C) Time interval from 100 to 150 ms.
 
For statistical analysis, we calculated a repeated-measures ANOVA with the factors coherence level, kinematogram type, and time interval. We observed a significant main effect for coherence level ( F(3,12) = 36.989, P < 0.001), which confirms that acceleration depended on coherence. We also found a significant main effect for time interval ( F(2,8) = 63.744, P < 0.001), which proves that acceleration was not constant over time. The main effect for kinematogram type was not significant ( F(2,8) = 0.628, P = 0.511), suggesting that there was no general acceleration difference among the kinematogram types. The interaction between coherence level and kinematogram type was not significant ( F(6,24) = 2.406, P = 0.153), showing that the effects of the coherence level were similar in all kinematogram types. The interaction between kinematogram type and time interval was significant ( F(4,16) = 8.403, P = 0.011). This indicates that the effect of the kinematogram type was different across the time intervals of the acceleration phase. The interaction between coherence level and time interval was not significant ( F(6,24) = 2.039, P = 0.183), suggesting that the effect of the coherence level did not change across the time intervals. The three-way interaction was also not significant ( F(12,48) = 0.507, P = 0.716). 
The acceleration profile in the control experiment with 100% coherence and different speeds was similar to the data in the main experiment, suggesting that the eye acceleration was primarily determined by the vector average speed of the stimulus, except for the late acceleration in the transparent motion, which showed less dependency on coherence or vector average speed. 
Steady-state gain
One of the most striking differences among the three kinematogram types was the difference in steady-state pursuit gain (Figure 8A). For each kinematogram type, we fitted Equation 3 to estimate the amount of noise integration across coherence levels. In the transparent motion, the average pursuit gain varied between 0.93 and 0.96 across all coherence levels. The noise efficacy was 0.02, close to the minimum, indicating that subjects were able to separate signal from noise dots and to pursue exclusively the signal dots at an appropriate speed. The other extreme case was the Brownian motion. Here the average pursuit gain was 0.20 for 20% coherence and 0.99 for 100% coherence, and the noise efficacy was at the maximum of 1.00. Hence, subjects integrated across all stimulus dots to generate an average of all present motion signals. A similar result has been shown for the integration of directional noise for pursuit gain (Watamaniuk & Heinen, 1999). The relation between coherence and pursuit gain measured for the white motion fell between the other two cases. Here pursuit gain was 0.52 and 0.97 for 20% and 100% coherence, respectively. The noise efficacy was at an intermediate level of 0.28, indicating partial noise integration. A repeated-measures ANOVA on pursuit gain with the factors coherence level and kinematogram type confirmed these results. We obtained a significant main effect of coherence (F(3,12) = 233.046, P < 0.001) as well as a significant main effect of kinematogram type (F(2,8) = 267.849, P < 0.001). The interaction between coherence and kinematogram type was also significant (F(6,24) = 63.120, P < 0.001), confirming that the effects of coherence differed among the kinematogram types.
Figure 8. Pursuit gain and perceived speed for the three kinematogram types. (A) Pursuit gain as a function of coherence level and signal speed. (B) Perceptual gain as a function of coherence level. The symbols show the average across subjects; error bars denote the standard error of the mean. The different symbols and colors represent the three kinematogram types (blue) and the experiment with varying speed at 100% coherence (red). The blue lines are obtained by fitting the noise efficacy from Equation 3 to the data. The red line is obtained by a linear regression. (C) Noise efficacy. The open symbols show individual subjects' data; the filled symbols show the efficacy for the pooled data; the diagonal line marks points with identical efficacies in pursuit and perception. We plot the noise efficacy on a log–log plot because of the non-linear effect of noise efficacy on pursuit and perceptual gain.
 
As expected, the pursuit gain was around unity for all speeds in the varying-speed control experiment. Interestingly, pursuit gain was slightly larger than unity for low speeds but smaller for high speeds. Probably the subjects built up a cognitive expectation of the speed range, which caused a tendency toward the average speed (Kowler & McKee, 1987).
To learn whether perceived speed and smooth pursuit gain behave similarly in these displays, we measured in a separate experiment the perceived speed of the three kinematograms at the different coherence levels (Figure 8B). We estimated the noise efficacy in the same way as for the pursuit gain. To facilitate comparison with pursuit, we report perceptual gains as the ratio of perceived speed to physical signal speed. The results were very similar: in the transparent motion, the perceptual gain was near unity, and the noise efficacy was below 0.01. Hence, perceived speed was completely independent of the noise for transparent motion. In the Brownian motion, the perceptual gain ranged between 0.28 for 20% coherence and 1.03 for 100% coherence. The noise efficacy of 0.73 was still fairly close to unity. Hence, subjects integrated across all noise directions to estimate the stimulus speed, similar to the pursuit gain. This result corresponds nicely with psychophysical reports about the perceived speed of this stimulus type (Benton & Curran, 2009; Freeman & Sumnall, 2002). In the white motion, perceptual gain was 0.59 and 0.99 for 20% and 100% coherence, respectively. The noise efficacy of 0.16 was again between those of the transparent and Brownian motions. As can be seen from Figure 8C, there is good general agreement between the noise efficacies for pursuit and perception on an individual basis, except that for one subject in the white and Brownian motions, noise efficacy was lower for perception than for pursuit. As for the pursuit gain, we computed a repeated-measures ANOVA with the factors coherence level and kinematogram type. The results of the ANOVA were basically the same as for the pursuit gain: we obtained a significant main effect of coherence (F(3,12) = 67.177, P < 0.001) as well as a significant main effect of kinematogram type (F(2,8) = 23.366, P = 0.001). The interaction between coherence and kinematogram type was also significant (F(6,24) = 8.382, P = 0.017).
Pursuit eye movements typically oscillate with a frequency of 3 to 7 Hz around the target velocity (Goldreich, Krauzlis, & Lisberger, 1992; Robinson, 1965). As can be seen from the average eye velocity traces (Figures 4 and 5), this was also the case in our data. We observed oscillations in a range from 6 to 12 Hz. The frequency of the oscillations was positively related to pursuit speed but showed no dependency on the kinematogram type. 
Saccades
Smooth pursuit of low gain is typically interrupted by catch-up saccades in the motion direction (forward saccades), which reduce position errors. During initiation, position errors arise inevitably because of the pursuit latency, so that one might expect a large number of saccades soon after motion onset. However, this reasoning pertains only to single-target tracking, where a particular position is defined by the tracking target. Our stimuli do not contain position cues, so we should not expect the typical saccade pattern. On the other hand, it might also be that our stimuli elicit optokinetic nystagmus (OKN) and not “true” pursuit. In that case, we should observe more saccades against the motion direction of the signal dots (backward saccades), which might occur well after pursuit initiation. Such reverse saccades might also occur when the eye approaches the edge of the display. To investigate these two possibilities, we analyzed the latency of the first saccade, as well as the frequency of backward and forward saccades (Figure 9).
Figure 9. Saccades during smooth pursuit. (A) Average latency of the first saccade. (B) Frequency of forward saccades. (C) Frequency of backward saccades. Conventions are the same as in Figure 6.
 
The average latency of the first saccades for all kinematogram types was 0.53 s (SEM = 0.02). This is later than typical catch-up saccades that occur during pursuit initiation. We computed a repeated-measures ANOVA on the latency of the first saccade with the factors coherence level and kinematogram type. The main effects of coherence (F(3,9) = 0.513, P = 0.565) and kinematogram type (F(2,6) = 0.551, P = 0.519) were not significant. The interaction between coherence and kinematogram type was also not significant (F(6,18) = 0.880, P = 0.458). So neither the amount nor the type of noise in the kinematograms seemed to influence the saccade latencies. The latencies of the first saccades were very similar in the control experiment with varying speed at 100% coherence.
The average forward saccade frequency was 18% (SEM = 5%). This is further evidence that the typical catch-up saccade pattern described for single-target tracking did not apply to our conditions. Under single-target tracking conditions in a ramp paradigm, we would expect one or more catch-up saccades on every trial. The average backward saccade frequency was 34% (SEM = 8%). We analyzed the number of forward and backward saccades separately in relation to the pursuit speed. The number of forward saccades did not depend on pursuit speed in any of the kinematogram types. The number of backward saccades, however, clearly varied with pursuit speed, with more backward saccades in conditions with higher pursuit speed. The relationship between pursuit speed and the number of backward saccades was similar for all three kinematogram types and the control experiment with varying speed at 100% coherence. This indicates that the actual pursuit speed was the crucial factor and not the coherence level or the pursuit gain. These backward saccades might have resulted from the properties of our display: the radius of the kinematogram aperture was 10 deg and we presented 10 deg/s motion for 1 s. This means that the eyes should never have reached the aperture border before the end of the trial. However, subjects made backward saccades on average 0.58 s (SEM = 0.03) after motion onset, which indicates that the subjects may have tried to avoid moving their eyes near the edge of the aperture.
Pursuit at 0% coherence
In a separate experiment, we tested how well pursuit could be maintained for the different kinematograms after the coherence level dropped to 0% following an initial period of coherent motion. As we found such pronounced differences among the kinematogram types for the steady-state pursuit gain, we wondered whether they would also differ in this respect.
The eye movement traces in Figure 10 (now aligned on motion onset) show large differences between kinematogram types but also between coherence levels. At 20% initial motion coherence ( Figures 10B, 10D, and 10F), pursuit traces showed no peculiarities in the transparent motion. However, in the white and especially in the Brownian motion, there was not much pursuit at all. This picture was quite different for the 100% initial motion coherence ( Figures 10A, 10C, and 10E). In the white and Brownian motions, eye velocity increased initially but leveled off to the same steady-state velocity as with 20% initial coherence. In the transparent motion, there was the same transient and rapid rise of eye velocity as in the white and Brownian motions. However, this was followed by a reduced acceleration until the same steady-state velocity was reached as with 20% initial coherence.
Figure 10. Average eye velocity traces for the three kinematogram types (rows) at two initial coherence levels (columns) when the initial motion coherence drops to 0%. (A, B) Transparent motion. (C, D) White motion. (E, F) Brownian motion. (A, C, E) Eighty percent initial coherence. (B, D, F) Twenty percent initial coherence. Unlike in Figures 4 and 5, the average eye movement traces are aligned to motion onset. The different colors represent the four different initial motion durations. The dashed horizontal line marks the speed of the initial signal motion. The dashed vertical lines mark the four offset times of coherent motion.
 
Steady-state gain at 0% coherence
As we already mentioned, there were extreme differences in the pursuit gain: In the transparent motion, pursuit gain was on average 0.80 ( SEM = 0.10). The white motion resulted in an average gain of 0.11 ( SEM = 0.05). In the Brownian motion, pursuit gain was lowest with an average value of 0.01 ( SEM < 0.01). To test this statistically, we computed a repeated-measures ANOVA with the factors coherence level, initial motion duration, and kinematogram type. We found a significant main effect for kinematogram type ( F(2,4) = 31.980, P = 0.030) but not for initial motion duration ( F(3,6) = 3.441, P = 0.195) and coherence level ( F(4,8) = 1.676, P = 0.314). This indicates that the pursuit gain after the drop of coherence to 0% depended on the kinematogram type but not on the coherence or the duration of the initial motion. There was no significant two-way interaction between kinematogram type and motion duration ( F(6,12) = 4.180, P = 0.155). There was also no significant interaction between motion duration and coherence ( F(12,24) = 1.816, P = 0.291) and coherence and kinematogram type ( F(8,16) = 1.138, P = 0.403). The three-way interaction was also not significant ( F(24,48) = 1.312, P = 0.369). These results basically show that the kinematogram types differ in the maintenance of pursuit at zero coherence in the same way as in normal pursuit gain. 
Acceleration in the transparent motion at 0% coherence
To quantify the two-stage acceleration profile in the transparent motion, we analyzed pursuit acceleration in three time intervals: the first interval, 100 to 200 ms after motion onset, should be driven by the initial motion coherence. The second interval is an intermediate interval from 300 to 400 ms. The third interval, 500 to 600 ms after motion onset, should be too late to be influenced by the initial motion coherence ( Figure 11).
Figure 11. Eye acceleration for transparent motion in the 0% coherence experiment. (A) Time interval from 100 to 200 ms after motion onset. (B) Time interval from 300 to 400 ms after motion onset. (C) Time interval from 500 to 600 ms after motion onset. The symbols show the average across subjects; error bars denote the standard error of the mean. The different symbols and colors represent the four different initial motion durations.
 
Indeed, acceleration in the first interval varied with initial coherence, with stronger effects for longer initial motion duration. In the second interval, acceleration seemed to be independent of initial motion duration but inversely related to initial motion coherence. The third interval showed low acceleration values, independent of coherence or duration of initial motion. We calculated a repeated-measures ANOVA with the factors initial motion duration, initial motion coherence, and time interval. There was no significant main effect for initial motion duration ( F(3,6) = 1.001, P = 0.438), initial coherence ( F(4,8) = 1.133, P = 0.401), or time interval ( F(1,2) = 0.977, P = 0.434). There was no significant interaction between initial motion duration and initial coherence ( F(12,24) = 0.563, P = 0.605) and initial motion duration and time interval ( F(6,12) = 3.323, P = 0.206), but the interaction between initial coherence and time interval was significant ( F(8,16) = 12.753, P = 0.048). The three-way interaction was not significant ( F(24,48) = 2.086, P = 0.240). 
These results indicate that acceleration depended only on initial motion coherence and that this dependency changed across the three time intervals. While acceleration depended positively on initial motion coherence in the first time interval, this relationship was inverted in the second time interval and absent in the last time interval. Hence, there were two distinct acceleration periods for transparent motion, a first one driven by the initial motion coherence and a second one that was independent of the initial motion coherence. 
Deceleration in the white and Brownian motions at 0% coherence
In order to quantify the eye movement behavior in the white and Brownian motions, we analyzed (i) the peak pursuit gain during the first 400 ms after motion onset, (ii) the time when the peak gain was reached, and (iii) the exponential decay of eye velocity in the 500 ms following the peak pursuit gain ( Figure 12).
Figure 12. Eye movement parameters for white and Brownian motions in the 0% coherence experiment. (A–C) White motion. (D–F) Brownian motion. (A, D) Peak eye gain. (B, E) Latency of peak gain. (C, F) Time constant of eye velocity decay. Conventions are the same as in Figure 11.
 
The peak eye gain in the first 400 ms after motion onset varied between 0.14 and 0.75. We calculated a repeated-measures ANOVA with the factors initial motion duration, initial motion coherence, and kinematogram type (white and Brownian motions). The main effects of initial coherence ( F(4,8) = 37.043, P = 0.019) and initial motion duration ( F(3,6) = 108.020, P < 0.001) were significant, but the main effect of kinematogram type was not significant ( F(1,2) = 3.008, P = 0.225). Hence, the peak gain increased significantly with coherence and duration of the initial motion. There were also significant two-way interactions between coherence and motion duration ( F(12,24) = 20.294, P = 0.012) and motion duration and kinematogram type ( F(3,6) = 10.194, P = 0.043). This suggests that the increase with coherence was significantly stronger for longer initial motion duration. The two-way interaction between coherence and noise type ( F(4,8) = 2.800, P = 0.235) and the three-way interaction ( F(12,24) = 2.191, P = 0.253) were not significant. In sum, these results show that the peak gain depended mainly on the coherence and duration of the initial motion. 
The latency of the peak pursuit gain varied between 178.1 and 344.2 ms. Consistent with the manipulation, the peak latency increased significantly with initial motion duration (F(3,6) = 17.696, P = 0.037). The main effects of initial coherence (F(4,8) = 0.650, P = 0.549) and kinematogram type (F(1,2) = 6.716, P = 0.122) were not significant. All two-way interactions as well as the three-way interaction were also not significant. Hence, the time when the peak pursuit gain was reached depended only on the duration of the initial motion, and neither on its coherence nor on the kinematogram type. Across initial coherence and kinematogram type, the peak pursuit gain was reached after 238.2, 242.7, 251.0, and 269.2 ms for initial motion durations of 50, 70, 100, and 150 ms, respectively. A linear regression of the time of peak pursuit gain on initial motion duration yields an intercept of 221.3 ms (P < 0.001) and a slope of 0.31 (P = 0.007).
Finally, we fitted an exponential decay function to each eye velocity trace. The average time constant of the exponential decay of eye velocity was 95.48 ms (SEM = 2.51) and 84.12 ms (SEM = 3.30) in the white and Brownian motions, respectively. These values are very similar to the 100-ms time constant reported for a vanishing target (Pola & Wyatt, 1997). We calculated a repeated-measures ANOVA with the factors initial motion duration, initial coherence, and kinematogram type. None of the main effects or interactions was significant. Hence, the decay rate was similar for white and Brownian motions and independent of the duration and coherence of the initial motion.
Saccades at 0% coherence
One possibility to achieve a high pursuit gain in the transparent motion at a coherence level of 0% is to select one noise dot that accidentally moves in the correct direction. The analysis in the main experiment showed that subjects did not pursue individual dots, because they did not make saccades frequently. However, it might be that the subjects chose a different strategy in the zero coherence condition. In this case, the saccade pattern should also be different. The overall number of saccades per trial was on average 0.19 ( SEM = 0.10) in the transparent motion, 0.43 ( SEM = 0.22) in the white motion, and 0.53 ( SEM = 0.42) in the Brownian motion. Hence, the saccade number was inversely related to the pursuit gain, which rules out the possibility that subjects were tracking individual noise dots in the transparent motion. To test the influence of the other factors, we calculated a repeated-measures ANOVA on the saccade number with the factors coherence level, initial motion duration, and kinematogram type. We found no significant main effect for initial motion duration ( F(3,6) = 6.455, P = 0.123). The main effects for initial coherence ( F(4,8) = 2.808, P = 0.222) and kinematogram type were also not significant ( F(2,4) = 0.497, P = 0.559). The two-way interactions between initial motion duration and initial coherence ( F(12,24) = 1.159, P = 0.399), initial motion duration and kinematogram type ( F(6,12) = 0.583, P = 0.594) and between coherence level and kinematogram type ( F(8,16) = 3.780, P = 0.174) were not significant. The three-way interaction was also not significant ( F(24,48) = 1.170, P = 0.397). These results show that the saccade frequency was very similar to the main experiment with coherence values above 0%. Hence, the subjects did not track individual dots, even at 0% coherence. 
In sum, we found two different patterns of pursuit when initial coherence dropped to zero. In the white and Brownian motions, the eyes accelerated up to a peak velocity that depended on the coherence and duration of the initial motion. Afterward, the eye velocity decayed as one would expect if the stimulus had disappeared. In the transparent motion, we observed two distinct acceleration periods: acceleration in the first period depended on the initial coherence, while acceleration in the later period was independent of initial coherence.
Discussion
We tested how three different types of motion noise influence different aspects of smooth pursuit eye movements and speed perception. As expected, we found an inverse relationship between pursuit latency and coherence. Latency measurements for stimuli of 100% coherence moving at different speeds were similarly related to speed, which indicates that the coherence effect on latency might be due to the changes in the vector average speed. In general, we did not find any systematic differences in pursuit latency among the kinematogram types. 
Eye acceleration increased with coherence level. In the first 100 ms, there were no clear differences among the three kinematogram types. After 100 ms, acceleration to transparent motion became less dependent on coherence and was higher at low coherence than in the white and Brownian motions. This might reflect an early segmentation of signal from noise in the transparent motion. 
The clearest differences among the kinematogram types were observed for pursuit gain. Whereas pursuit gain was independent of the coherence level for the transparent motion, it followed the vector average almost perfectly in the Brownian motion. This means that the pursuit system completely discounts the noise in the transparent motion, while it integrates across all motion directions in the Brownian motion. The latter finding is consistent with the findings of Watamaniuk and Heinen (1999), who showed that pursuit speed corresponds to the mean speed in displays with directional noise. The pursuit gain in the white motion also depended on coherence level but showed only partial integration of the noise.
Perceived speeds for the different kinematogram types showed a similar pattern to the pursuit gains. For transparent motion, perceived speed was independent of coherence. For Brownian motion, perceived speed followed the vector average, which corresponds to previous reports with similar stimuli (Benton & Curran, 2009; Freeman & Sumnall, 2002). For white motion, we found speed matches in between the transparent and Brownian motions. The overall pattern of results suggests that visual motion signals for pursuit and for perception arise from the same or very similar visual mechanisms. 
Comparison with psychophysical studies
Most previous studies comparing different kinematogram types measured direction thresholds as a function of coherence or other stimulus parameters (Pilly & Seitz, 2009; Scase et al., 1996; Watamaniuk, Sekuler, & Williams, 1989; Williams & Sekuler, 1984). For these measurements, there are only small differences among the kinematogram types. Williams and Sekuler (1984) compared two kinematogram types that roughly correspond to our Brownian and transparent motions and found no difference in direction discrimination thresholds. Direction discrimination for stimuli with limited ranges of directions has also been measured (Watamaniuk et al., 1989); that study likewise found no difference between a random-walk and a fixed-path condition. In a more complete survey, Scase et al. (1996) tested six kinematogram types, combining three noise types (random position, random walk, and random direction) with two signal selection rules (same and different). A “same” selection rule means that signal and noise labels remain constant throughout the trial, as in our transparent motion; a “different” selection rule means that signal and noise labels are randomly reassigned on each step, as in our white and Brownian motions. Although Scase et al. did not find large differences among the noise types, coherence thresholds were slightly lower when the same dots carried the signal throughout the trial than when signal dots were randomly reassigned. More recently, the accuracy of direction estimation for different kinematogram types was measured (Pilly & Seitz, 2009). That study also tested white and Brownian motions and found the most accurate estimates for Brownian motion; however, performance for all types varied with contrast, aperture size, and spatial and temporal displacements. In general, the kinematogram types differ less in their direction thresholds than in pursuit gain or perceived speed. 
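The distinction between the “same” and “different” selection rules, and among the three noise types, can be made concrete with a per-frame update rule. The sketch below is our own schematic reading of these definitions; the function name, step size, and field size are assumptions for illustration and do not reproduce the stimulus code used in the experiments.

```python
# Sketch: one frame update for the three kinematogram types.
# `pos` is an (n, 2) array of dot positions; step and field size are illustrative.
import numpy as np

def update_dots(pos, kind, coherence, step=0.2, field=20.0, rng=None,
                noise_dirs=None):
    """noise_dirs: fixed per-trial directions for the noise dots (transparent only)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(pos)
    new = pos.copy()
    if kind == "transparent":
        # 'same' rule: signal/noise labels are fixed for the whole trial;
        # noise dots keep their own fixed random directions (noise_dirs).
        n_signal = int(round(coherence * n))
        new[:n_signal, 0] += step                       # signal dots: rightward
        new[n_signal:, 0] += step * np.cos(noise_dirs)
        new[n_signal:, 1] += step * np.sin(noise_dirs)
    else:
        # 'different' rule: labels are randomly reassigned on every frame.
        signal = rng.random(n) < coherence
        new[signal, 0] += step
        if kind == "white":
            # noise dots are replotted at random positions
            new[~signal] = rng.uniform(-field / 2, field / 2, (np.sum(~signal), 2))
        elif kind == "brownian":
            # noise dots take a step of signal length in a random direction
            ang = rng.uniform(0, 2 * np.pi, np.sum(~signal))
            new[~signal, 0] += step * np.cos(ang)
            new[~signal, 1] += step * np.sin(ang)
    return new

# Example use (transparent motion, 50% coherence):
# rng = np.random.default_rng(0)
# pos = rng.uniform(-10, 10, (200, 2))
# dirs = rng.uniform(0, 2 * np.pi, 100)   # fixed directions for the 100 noise dots
# pos = update_dots(pos, "transparent", 0.5, rng=rng, noise_dirs=dirs)
```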
Neural basis of motion perception and smooth pursuit
There is considerable evidence that area MT plays a crucial role in local motion perception. Lesions in area MT impair motion perception (Newsome & Pare, 1988), and microstimulation in MT changes the perceived motion direction (Salzman et al., 1992). Joint analyses of psychophysical judgments and neural activity revealed a modest correlation between the two, which also argues for a functional role of MT in motion perception (Britten et al., 1996, 1992). MT responses vary mostly linearly with the motion coherence of a white motion kinematogram (Britten et al., 1993), similar to our measurements of perceived speed across coherence levels. However, there are also studies suggesting that MT is not the final neural stage of motion perception but rather one of several processing steps. For instance, MT and MST neurons are not activated by theta motion or by moving sound sources, although both stimuli cause a vivid sense of motion (Ilg & Churan, 2004). fMRI studies (Culham, He, Dukelow, & Verstraten, 2001; Sunaert, Van Hecke, Marchal, & Orban, 1999) reveal many areas that are activated by visual motion, suggesting that the neural substrate for motion perception is distributed across a number of brain regions. 
Area MT is necessary for the normal initiation of smooth pursuit, as shown by lesion studies (Dursteler & Wurtz, 1988). However, MT neurons encode only retinal motion and are silenced when the target is stabilized on the retina by smooth pursuit (Newsome et al., 1988), which means that area MT alone cannot be responsible for the maintenance of pursuit in steady state. In contrast to area MT, some neurons in area MST encode target motion in external space coordinates and remain active even when the target is stabilized on the retina by pursuit (Chukoskie & Movshon, 2009; Ilg et al., 2004; Ilg & Thier, 2003; Inaba et al., 2007; Ono & Mustari, 2006). Similar to area MT, activity in area MST depends, on average, linearly on the coherence of a white motion kinematogram (Heuer & Britten, 2007), which is consistent with our finding of an approximately linear relationship between coherence and pursuit gain in the white and Brownian motion conditions. 
Motion transparency, attention, and eye movements
When two separate motions with different directions are presented at the same spatial location, subjects typically perceive two transparent moving surfaces. This is somewhat similar to the appearance of our transparent motion condition. The segmentation of the different surfaces is strongly influenced by attention: if one direction is cued, it can be detected among six distracter directions, whereas without a cue it can be detected among only three (Felisberti & Zanker, 2005). Several studies have investigated the influence of motion transparency on eye movements, primarily on the optokinetic nystagmus (OKN), with results similar to ours. For two oppositely moving random-dot patterns, pursuit gain and slow-phase OKN gain are slightly reduced (Niemann, Ilg, & Hoffmann, 1994). While the initial period of OKN follows the average speed (Mestre & Masson, 1997) or the average direction (Maruyama, Kobayashi, Katsura, & Kuriki, 2003) of the stimulus motions, steady-state OKN can be modulated by attention, so that the eyes follow the speed (Mestre & Masson, 1997) or direction (Watanabe, 1999) of motion in the attended depth plane. 
Our findings for pursuit of transparent motion are quite similar to the results obtained for OKN. We showed that the initial pursuit response is governed by the vector average of all stimulus motions, whereas the steady-state pursuit gain is only slightly reduced by the noise in the stimulus. Like the steady-state pursuit gain, the perceived speed of the transparent motion was constant across coherence levels. This indicates that an attentional selection of the signal motion, similar to that during OKN, takes place during smooth pursuit and perception. This selection process evidently operates on a slower time scale, because it does not affect the latency or the initial acceleration of pursuit. 
Pursuit in the absence of a visual motion signal
We also investigated the ability to maintain pursuit after the motion coherence dropped to zero. We found two different patterns in the pursuit eye movements: in the white and Brownian motions, normal pursuit initiation was followed by a decay of eye velocity, as one would expect if the stimulus had disappeared. In the white motion, pursuit gain reached maximum values of about 0.1, and in the Brownian motion, there was no residual pursuit at all. In the transparent motion, the normal pursuit acceleration, which was driven by the initial coherence, was followed by a period of reduced acceleration and a final acceleration phase that was independent of the initial coherence. The final pursuit gain for transparent motion was around 0.8. We showed that the high pursuit gain in the transparent motion was not accompanied by an increase in the number of saccades per trial, which argues against the interpretation that subjects were tracking individual noise dots that happened to move in the former signal direction. Since several noise dots move in that direction at any moment, tracking a single dot might not even be necessary; the tracking might be driven simply by covert attentional selection of the noise dots moving in the former signal direction. We think that the same segmentation process that enables a near-perfect pursuit gain irrespective of coherence level in the main experiment is also the key to maintaining pursuit at 0% coherence. 
Eye movement types: Pursuit vs. optokinetic nystagmus vs. ocular following
Smooth pursuit is only one of three major oculomotor responses to moving stimuli; the others are OKN and ocular following. Ocular following is a fast response to a sudden movement of a large field and is typically distinguished from pursuit by stimulus size and by latency. The typical pursuit stimulus is a single, small moving target. In contrast, large fields of random dots or sine-wave gratings, covering more than 75 deg, are used to study ocular following (Gellman et al., 1990; Miles et al., 1986). Our stimulus spanned 20 deg, which is probably too small to elicit ocular following responses. Ocular following and pursuit also differ in their latencies. Typical ocular following latencies of around 50 ms in monkeys (Miles et al., 1986) and approximately 75 ms in humans (Gellman et al., 1990) are much shorter than pursuit latencies of around 100 ms in monkeys (Lisberger & Westbrook, 1985) and 150 ms in humans (Braun et al., 2008). Even our shortest latencies were above 100 ms, which suggests that we did not measure ocular following. 
The distinction between pursuit and OKN is less clear. OKN is a pattern of slow following eye movements in the direction of motion (slow phases) interrupted by fast movements in the opposite direction (fast phases). There are at least two sub-types of OKN (Ter Braak, 1936): stare nystagmus consists of short, low-gain slow phases and high-frequency, small-amplitude fast phases, whereas look nystagmus consists of long, high-gain slow phases and low-frequency, large-amplitude fast phases. Stare nystagmus is typically observed when subjects try to stare straight ahead and has fast-phase frequencies of about 3 Hz. The frequency of saccades in our data was much lower, so we can exclude stare nystagmus. 
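As an illustration of how such a fast-phase rate can be quantified, the sketch below applies a simple velocity-threshold detector to an eye-position trace. The threshold and sampling rate are assumptions for illustration; this is not the saccade-detection algorithm used in the study.

```python
# Sketch: estimating the rate of fast eye movements (saccades / fast phases)
# from an eye-position trace with a velocity-threshold detector.
import numpy as np

def fast_phase_rate(eye_pos_deg, fs=1000.0, vel_threshold=30.0):
    """Return the number of velocity-threshold crossings per second.

    eye_pos_deg: 1-D array of eye position in degrees, sampled at fs Hz.
    vel_threshold: velocity criterion in deg/s (illustrative value).
    """
    velocity = np.gradient(eye_pos_deg) * fs             # deg/s
    above = np.abs(velocity) > vel_threshold
    # Count onsets (threshold crossings) rather than samples above threshold.
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1)
    duration_s = len(eye_pos_deg) / fs
    return len(onsets) / duration_s

# A rate near 3 Hz would point toward stare nystagmus; much lower rates,
# as reported here, are consistent with smooth pursuit.
```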
Look nystagmus is typically observed when subjects attend to the target motion. We found a strong positive relationship between the frequency of backward saccades and pursuit speed, which indicates that backward saccades occurred only when the eyes approached the edge of the stimulus aperture. Hence, we would not interpret these backward saccades as fast phases of a look nystagmus. Furthermore, it is debatable whether there is a true distinction between look nystagmus and smooth pursuit. An fMRI study revealed that look nystagmus evoked activity in cortical oculomotor areas, such as the frontal eye fields (FEFs) and the supplementary eye fields (SEFs), similar to the activity evoked by pursuit, whereas stare nystagmus failed to activate these areas (Konen, Kleiser, Seitz, & Bremmer, 2005). In addition, look OKN and pursuit elicit the same improvement in chromatic contrast sensitivity (Schütz, Braun, & Gegenfurtner, 2009; Schütz, Braun, Kerzel, & Gegenfurtner, 2008), also indicating a shared neural substrate. Developmental studies provide further evidence for an overlap of OKN and pursuit: both types of eye movements emerge at the same age (Rosander & von Hofsten, 2002), and their emergence coincides with the maturation of motion direction sensitivity (von Hofsten, 2004). In conclusion, the distinction between look nystagmus and smooth pursuit appears difficult to draw in humans. 
Conclusion
We measured smooth pursuit responses and perceived speed for three different types of random-dot kinematograms. Pursuit latency and early acceleration varied with coherence but were the same for all kinematogram types. There were marked differences among the kinematogram types in steady-state pursuit gain and in perceived speed. Perceived speed and pursuit gain showed good agreement across all conditions, which suggests that both are driven by the same neural machinery. 
Acknowledgments
ACS, DIB, and KRG were supported by the DFG Forschergruppe FOR 560 “Perception and action.” JAM was supported by a grant from the NIH (EY 04440). We thank Miriam Spering and Harold Bedell for helpful discussion and Sarah Jill Wagner for help with data collection. 
Commercial relationships: none. 
Corresponding author: Alexander C. Schütz. 
Address: Abteilung Allgemeine Psychologie, Justus-Liebig-Universität, Otto-Behaghel-Str. 10F, 35394 Giessen, Germany. 
References
Adelson E. H. Bergen J. R. (1985). Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2, 284–299.
Baker D. H. Graf E. W. (2010). Extrinsic factors in the perception of bistable motion stimuli. Vision Research, 50, 1257–1265.
Benton C. P. Curran W. (2009). The dependence of perceived speed upon signal intensity. Vision Research, 49, 284–286.
Bowns L. Alais D. (2006). Large shifts in perceived motion direction reveal multiple global motion solutions. Vision Research, 46, 1170–1177.
Braddick O. (1993). Segmentation versus integration in visual motion processing. Trends in Neurosciences, 16, 263–268.
Braun D. I. Mennie N. Rasche C. Schütz A. C. Hawken M. J. Gegenfurtner K. R. (2008). Smooth pursuit eye movements to isoluminant targets. Journal of Neurophysiology, 100, 1287–1300.
Britten K. H. Newsome W. T. Shadlen M. N. Celebrini S. Movshon J. A. (1996). A relationship between behavioral choice and the visual responses of neurons in macaque MT. Visual Neuroscience, 13, 87–100.
Britten K. H. Shadlen M. N. Newsome W. T. Movshon J. A. (1992). The analysis of visual motion: A comparison of neuronal and psychophysical performance. Journal of Neuroscience, 12, 4745–4765.
Britten K. H. Shadlen M. N. Newsome W. T. Movshon J. A. (1993). Responses of neurons in macaque MT to stochastic motion signals. Visual Neuroscience, 10, 1157–1169.
Burr D. C. Baldassi S. Morrone M. C. Verghese P. (2009). Pooling and segmenting motion signals. Vision Research, 49, 1065–1072.
Burr D. C. Fiorentini A. Morrone C. (1998). Reaction time to motion onset of luminance and chromatic gratings is determined by perceived speed. Vision Research, 38, 3681–3690.
Carl J. R. Gellman R. S. (1987). Human smooth pursuit: Stimulus-dependent responses. Journal of Neurophysiology, 57, 1446–1463.
Chukoskie L. Movshon J. A. (2009). Modulation of visual signals in macaque MT and MST neurons during pursuit eye movement. Journal of Neurophysiology, 102, 3225–3233.
Culham J. He S. Dukelow S. Verstraten F. A. (2001). Visual motion and the human brain: What has neuroimaging told us? Acta Psychologica, 107, 69–94.
Dursteler M. R. Wurtz R. H. (1988). Pursuit and optokinetic deficits following chemical lesions of cortical areas MT and MST. Journal of Neurophysiology, 60, 940–965.
Fahle M. Poggio T. (1981). Visual hyperacuity: Spatiotemporal interpolation in human vision. Proceedings of the Royal Society of London B: Biological Sciences, 213, 451–477.
Felisberti F. M. Zanker J. M. (2005). Attention modulates perception of transparent motion. Vision Research, 45, 2587–2599.
Ferrera V. P. Lisberger S. G. (1997). The effect of a moving distractor on the initiation of smooth-pursuit eye movements. Visual Neuroscience, 14, 323–338.
Freeman T. C. Sumnall J. H. (2002). Motion versus position in the perception of head-centred movement. Perception, 31, 603–615.
Fuchs A. F. (1967). Saccadic and smooth pursuit eye movements in the monkey. The Journal of Physiology, 191, 609–631.
Gellman R. S. Carl J. R. Miles F. A. (1990). Short latency ocular-following responses in man. Visual Neuroscience, 5, 107–122.
Goldreich D. Krauzlis R. J. Lisberger S. G. (1992). Effect of changing feedback delay on spontaneous oscillations in smooth pursuit eye movements of monkeys. Journal of Neurophysiology, 67, 625–638.
Heeger D. J. (1987). Model for the extraction of image flow. Journal of the Optical Society of America A, 4, 1455–1471.
Heinen S. J. Watamaniuk S. N. (1998). Spatial integration in human smooth pursuit. Vision Research, 38, 3785–3794.
Heuer H. W. Britten K. H. (2007). Linear responses to stochastic motion signals in area MST. Journal of Neurophysiology, 98, 1115–1124.
Ilg U. J. Churan J. (2004). Motion perception without explicit activity in areas MT and MST. Journal of Neurophysiology, 92, 1512–1523.
Ilg U. J. Schumann S. Thier P. (2004). Posterior parietal cortex neurons encode target motion in world-centered coordinates. Neuron, 43, 145–151.
Ilg U. J. Thier P. (2003). Visual tracking neurons in primate area MST are activated by smooth-pursuit eye movements of an “imaginary” target. Journal of Neurophysiology, 90, 1489–1502.
Inaba N. Shinomoto S. Yamane S. Takemura A. Kawano K. (2007). MST neurons code for visual motion in space independent of pursuit eye movements. Journal of Neurophysiology, 97, 3473–3483.
Keller E. L. Khan N. S. (1986). Smooth-pursuit initiation in the presence of a textured background in monkey. Vision Research, 26, 943–955.
Kimmig H. G. Miles F. A. Schwarz U. (1992). Effects of stationary textured backgrounds on the initiation of pursuit eye movements in monkeys. Journal of Neurophysiology, 68, 2147–2164.
Konen C. S. Kleiser R. Seitz R. J. Bremmer F. (2005). An fMRI study of optokinetic nystagmus and smooth-pursuit eye movements in humans. Experimental Brain Research, 165, 203–216.
Kowler E. McKee S. P. (1987). Sensitivity of smooth eye movement to small differences in target velocity. Vision Research, 27, 993–1015.
Levitt H. (1971). Transformed up–down methods in psychoacoustics. Journal of the Acoustical Society of America, 49, 467–477.
Lisberger S. G. Ferrera V. P. (1997). Vector averaging for smooth pursuit eye movements initiated by two moving targets in monkeys. Journal of Neuroscience, 17, 7490–7502.
Lisberger S. G. Morris E. J. Tychsen L. (1987). Visual motion processing and sensory-motor integration for smooth pursuit eye movements. Annual Review of Neuroscience, 10, 97–129.
Lisberger S. G. Movshon J. A. (1999). Visual motion analysis for pursuit eye movements in area MT of macaque monkeys. Journal of Neuroscience, 19, 2224–2246.
Lisberger S. G. Westbrook L. E. (1985). Properties of visual inputs that initiate horizontal smooth pursuit eye movements in monkeys. Journal of Neuroscience, 5, 1662–1673.
Maruya K. Amano K. Nishida S. (2010). Conditional spatial-frequency selective pooling of one-dimensional motion signals into global two-dimensional motion. Vision Research, 50, 1054–1064.
Maruyama M. Kobayashi T. Katsura T. Kuriki S. (2003). Early behavior of optokinetic responses elicited by transparent motion stimuli during depth-based attention. Experimental Brain Research, 151, 411–419.
Masson G. S. Stone L. S. (2002). From following edges to pursuing objects. Journal of Neurophysiology, 88, 2869–2873.
Mestre D. R. Masson G. S. (1997). Ocular responses to motion parallax stimuli: The role of perceptual and attentional factors. Vision Research, 37, 1627–1641.
Miles F. A. Kawano K. Optican L. M. (1986). Short-latency ocular following responses of monkey. I. Dependence on temporospatial properties of visual input. Journal of Neurophysiology, 56, 1321–1354.
Miura K. Kobayashi Y. Kawano K. (2009). Ocular responses to brief motion of textured backgrounds during smooth pursuit in humans. Journal of Neurophysiology, 102, 1736–1747.
Morgan M. J. Ward R. (1980). Conditions for motion flow in dynamic visual noise. Vision Research, 20, 431–435.
Movshon J. A. Lisberger S. G. Krauzlis R. J. (1990). Visual cortical signals supporting smooth pursuit eye movements. Cold Spring Harbor Symposia on Quantitative Biology, 55, 707–716.
Newsome W. T. Pare E. B. (1988). A selective impairment of motion perception following lesions of the middle temporal visual area (MT). Journal of Neuroscience, 8, 2201–2211.
Newsome W. T. Wurtz R. H. Dursteler M. R. Mikami A. (1985). Deficits in visual motion processing following ibotenic acid lesions of the middle temporal visual area of the macaque monkey. Journal of Neuroscience, 5, 825–840.
Newsome W. T. Wurtz R. H. Komatsu H. (1988). Relation of cortical areas MT and MST to pursuit eye movements. II. Differentiation of retinal from extraretinal inputs. Journal of Neurophysiology, 60, 604–620.
Niemann T. Hoffmann K. P. (1997). The influence of stationary and moving textured backgrounds on smooth-pursuit initiation and steady state pursuit in humans. Experimental Brain Research, 115, 531–540.
Niemann T. Ilg U. J. Hoffmann K. P. (1994). Eye movements elicited by transparent stimuli. Experimental Brain Research, 98, 314–322.
Ono S. Mustari M. J. (2006). Extraretinal signals in MSTd neurons related to volitional smooth pursuit. Journal of Neurophysiology, 96, 2819–2825.
Osborne L. C. Lisberger S. G. Bialek W. (2005). A sensory source for motor variation. Nature, 437, 412–416.
Pilly P. K. Seitz A. R. (2009). What a difference a parameter makes: A psychophysical comparison of random dot motion algorithms. Vision Research, 49, 1599–1612.
Pola J. Wyatt H. J. (1997). Offset dynamics of human smooth pursuit eye movements: Effects of target presence and subject attention. Vision Research, 37, 2579–2595.
Robinson D. A. (1965). The mechanics of human smooth pursuit eye movement. The Journal of Physiology, 180, 569–591.
Rosander K. von Hofsten C. (2002). Development of gaze tracking of small and large objects. Experimental Brain Research, 146, 257–264.
Salzman C. D. Murasugi C. M. Britten K. H. Newsome W. T. (1992). Microstimulation in visual area MT: Effects on direction discrimination performance. Journal of Neuroscience, 12, 2331–2355.
Scase M. O. Braddick O. J. Raymond J. E. (1996). What is noise for the motion system? Vision Research, 36, 2579–2586.
Schütz A. C. Braun D. I. Gegenfurtner K. R. (2007). Contrast sensitivity during the initiation of smooth pursuit eye movements. Vision Research, 47, 2767–2777.
Schütz A. C. Braun D. I. Gegenfurtner K. R. (2009). Chromatic contrast sensitivity during optokinetic nystagmus, visually enhanced vestibulo-ocular reflex and smooth pursuit eye movements. Journal of Neurophysiology, 101, 2317–2327.
Schütz A. C. Braun D. I. Kerzel D. Gegenfurtner K. R. (2008). Improved visual sensitivity during smooth pursuit eye movements. Nature Neuroscience, 11, 1211–1216.
Snowden R. J. Verstraten F. A. (1999). Motion transparency: Making models of motion perception transparent. Trends in Cognitive Sciences, 3, 369–377.
Spering M. Gegenfurtner K. R. (2007a). Contextual effects on smooth-pursuit eye movements. Journal of Neurophysiology, 97, 1353–1367.
Spering M. Gegenfurtner K. R. (2007b). Contrast and assimilation in motion perception and smooth pursuit eye movements. Journal of Neurophysiology, 98, 1355–1363.
Spering M. Gegenfurtner K. R. (2008). Contextual effects on motion perception and smooth pursuit eye movements. Brain Research, 1225, 76–85.
Spering M. Gegenfurtner K. R. Kerzel D. (2006). Distractor interference during smooth pursuit eye movements. Journal of Experimental Psychology: Human Perception and Performance, 32, 1136–1154.
Sunaert S. Van Hecke P. Marchal G. Orban G. A. (1999). Motion-responsive regions of the human brain. Experimental Brain Research, 127, 355–370.
Ter Braak J. W. G. (1936). Untersuchungen über optokinetischen Nystagmus. Archives Néerlandaises de Physiologie de L'homme et des Animaux, 21, 309–376.
von Hofsten C. (2004). An action perspective on motor development. Trends in Cognitive Sciences, 8, 266–272.
Wallace J. M. Stone L. S. Masson G. S. (2005). Object motion computation for the initiation of smooth pursuit eye movements in humans. Journal of Neurophysiology, 93, 2279–2293.
Watamaniuk S. N. Heinen S. J. (1999). Human smooth pursuit direction discrimination. Vision Research, 39, 59–70.
Watamaniuk S. N. Sekuler R. Williams D. W. (1989). Direction perception in complex dynamic displays: The integration of direction information. Vision Research, 29, 47–59.
Watanabe K. (1999). Optokinetic nystagmus with spontaneous reversal of transparent motion perception. Experimental Brain Research, 129, 156–160.
Webb B. S. Ledgeway T. McGraw P. V. (2007). Cortical pooling algorithms for judging global motion direction. Proceedings of the National Academy of Sciences of the United States of America, 104, 3532–3537.
Williams D. W. Sekuler R. (1984). Coherent global motion percepts from stochastic local motions. Vision Research, 24, 55–62.
Figure 1. Movie of the three kinematogram types at different coherence levels.
Figure 2. Space–time plots for the three kinematogram types (rows) at two coherence levels (columns). Space represents one horizontal line of pixels in the display. Stimulus dots are plotted in white; the dashed yellow line represents the signal speed; the dashed cyan line represents the vector average speed; and the dashed magenta line represents the measured pursuit speed. (A, B) Transparent motion. (C, D) White motion. (E, F) Brownian motion. (A, C, E) Eighty percent coherence. (B, D, F) Twenty percent coherence.