Article  |   May 2011
Compensation for equiluminant color motion during smooth pursuit eye movement
Masahiko Terao, Ikuya Murakami
Journal of Vision May 2011, Vol. 11, 12. https://doi.org/10.1167/11.6.12

Citation: Masahiko Terao, Ikuya Murakami; Compensation for equiluminant color motion during smooth pursuit eye movement. Journal of Vision 2011;11(6):12. https://doi.org/10.1167/11.6.12.

Abstract

Motion perception is compromised at equiluminance. Because previous investigations have been carried out primarily under fixation conditions, it remains unknown whether and how equiluminant color motion comes into play in the velocity compensation for retinal image motion due to smooth pursuit eye movement. We measured the retinal image velocity required to reach subjective stationarity for a horizontally drifting sinusoidal grating in the presence of horizontal smooth pursuit. The grating was defined by luminance or chromatic modulation. Compared with the subjective stationarity of luminance motion, that of color motion was shifted toward environmental stationarity, that is, farther from retinal stationarity, indicating that a slowing of color motion occurred before retinal motion was integrated with a biological estimate of eye velocity during pursuit. The gain in the estimate of eye velocity per se was unchanged irrespective of whether the stimulus was defined by luminance or by color. Indeed, this shift was quantitatively accounted for by the subjective reduction in the speed of color motion measured during fixation. From these results, we conclude that the motion deterioration at equiluminance takes place prior to the velocity comparison.

Introduction
Retinal motion signals must eventually interact with eye movement signals because retinal image motion originates not only from the object motion but also from the eye movement of an observer. For example, smooth pursuit eye movement introduces an extra backward retinal sweep of stationary background patterns (for simplicity, we write as if a leftward pursuit makes a rightward retinal sweep, ignoring the optical reversal that occurs through the lens system). Denoting the head-centered (environmental) velocity of a moving object, its retinal velocity, and the observer's eye velocity as V H, V R, and V E, respectively, the following relationship exists: V H = V R + V E. It is important to determine what portion of the retinal image velocity is elicited by the observer's bodily movements and what portion is elicited by external object motion. It has been established that retinal image motion defined by luminance information is partially compensated for by a biological estimate of ongoing eye movement during smooth pursuit. The most common explanation for this compensation is that the visual system integrates two velocities, one from visual inputs on the retina and the other from extraretinal information such as an efference copy of oculomotor commands, at a higher level of motion processing (von Holst, 1954; for a review, see Goldstein, 2007). Thus, this velocity integration can be written as
V̂ H = V̂ R + V̂ E, where V̂ H, V̂ R, and V̂ E indicate the perceived head-centered velocity of a moving object, the biologically estimated retinal image motion velocity, and the biologically estimated eye velocity, respectively. We assume that the internal estimation of velocity is simply proportional to the actual velocity: V̂ E/V E = g E (const.); the ratio of estimated to actual velocities is, hereafter, termed the estimation gain. If V̂ R and V̂ E were both obtained with the estimation gain of unity, a head-centrically (environmentally) stationary object would be perceptually stationary in the visual world as well. However, biological estimates are not accurate. It is known that V̂ E/V E < 1 in several experimental situations with poor visual environments. This underestimation of pursuit velocity has been shown to result in an environmentally stationary object appearing to move slightly against the direction of pursuit (the Filehne illusion; Filehne, 1922; Freeman & Banks, 1998; Mack & Herman, 1973, 1978; Wertheim, 1987) and a tracked object appearing to move more slowly than when viewed during fixation (the Aubert–Fleischl phenomenon; Aubert, 1886; Dichgans, Wist, Diener, & Brandt, 1975; Fleischl, 1882; Freeman & Banks, 1998; Wertheim & Van Gelder, 1990).
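To make the arithmetic of this account concrete, the following short Python sketch (not part of the original study; the gain values are illustrative assumptions) applies the relationship above to an environmentally stationary background during leftward pursuit and reproduces the direction of the Filehne illusion.

    # Minimal sketch of the velocity-integration account (illustrative gains only).
    # Convention as in the text: leftward velocities are negative.

    def perceived_head_velocity(v_h, v_e, g_r=1.0, g_e=0.7):
        """Perceived head-centered velocity: g_r * V_R + g_e * V_E, with V_R = V_H - V_E."""
        v_r = v_h - v_e                 # retinal velocity of the stimulus
        return g_r * v_r + g_e * v_e    # integrate estimated retinal and eye velocities

    v_e = -8.0    # leftward smooth pursuit at 8 deg/s

    print(perceived_head_velocity(0.0, v_e))     # +2.4 deg/s: a stationary background appears to
                                                  # drift rightward, against the pursuit (Filehne illusion)
    print(perceived_head_velocity(-2.4, v_e))    # 0.0: drifting with the pursuit at 2.4 deg/s makes the
                                                  # background subjectively stationary (the PSS)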
In a natural image, motion signals contain both color and luminance information. The relationship between motion processing and color information has long been an intriguing issue. Because color motion has perceptual characteristics that differ from those of luminance-based motion and because color motion may rely on a different neural mechanism, it is possible that the brain compensates for the color motion originating from pursuit in a way that differs from how it compensates for luminance-based motion. Early neurophysiological and anatomical studies suggested the existence of separate neural pathways for motion and color processing (Livingstone & Hubel, 1988; Maunsell & Newsome, 1987; Zeki, 1974). However, we can perceive motion when we observe the motion of an equiluminant pattern, where motion is defined by chromatic modulation alone. A widely held view is that the visual system has a distinct color motion mechanism, which is different from the luminance-based motion mechanism (Cavanagh & Favreau, 1985; Cropper & Derrington, 1996; Derrington & Badcock, 1985; Dougherty, Press, & Wandell, 1999; Mullen & Baker, 1985; Ruppertsberg, Wuerger, & Bertamini, 2003; Seidemann, Poirson, Wandell, & Newsome, 1999; Wandell et al., 1999; for a review, see Cropper & Wuerger, 2005; Gegenfurtner & Hawken, 1996a). The experimental fact is that equiluminant motion exhibits different perceptual characteristics compared with luminance-based motion. The detection threshold for color motion is lower than that for luminance motion in response to peripherally presented stimuli (e.g., Cavanagh & Anstis, 1991; Derrington & Henning, 1993) and higher than that for luminance motion in response to foveally presented stimuli (Derrington & Henning, 1993; Gegenfurtner & Hawken, 1995; Stromeyer, Kronauer, Ryu, Chaparro, & Eskew, 1995). The perceived speed of color motion is slower than that of luminance motion in response to supra-threshold stimuli at slow speeds (e.g., Cavanagh, Tyler, & Favreau, 1984). Such perceptual deterioration of color motion occurs primarily in response to peripheral stimuli that are briefly presented at low temporal frequencies (see Cropper & Wuerger, 2005 for a review; Gegenfurtner & Hawken, 1996a), which were the conditions used in the present study. 
Because most previous psychophysical experiments have focused on perception of color motion while fixation is maintained, it is difficult to infer what happens to color motion during smooth pursuit. A recent study revealed that smooth pursuit of an equiluminant stimulus is not accompanied by an analogous subjective slowing (Braun et al., 2008) and that steady-state pursuit speed did not significantly differ in response to color- versus luminance-defined targets, although pursuit initiation was delayed and perceived speed was slower in response to color motion observed under conditions of fixation. This study demonstrated that the absence of luminance information does not influence
V̂ E per se when a target was perfectly tracked (i.e., V R = 0). However, whether and how the aforementioned velocity integration process is affected by color motion when it occurs independently of a pursuit target remains unclear. Investigation of how the visual system compensates for color motion during pursuit should provide a better understanding of the frame of reference used by the visual system to convert retina-centered information into a head-centered representation.
Experiment 1
We employed a widely used psychophysical method (e.g., Ehrenstein, Mateeff, & Hohnsbein, 1990; Freeman & Banks, 1998; Mack & Herman, 1973, 1978; Turano & Massof, 2001; Wertheim, 1987; Wertheim & Van Gelder, 1990) that measures the retinal image velocity required to reach the point of subjective stationarity (PSS). The PSS can be considered as the point at which
V̂ R and V̂ E are balanced (i.e., V̂ R = −V̂ E). Figure 1A shows a schematic example of PSS for luminance-based motion. Assuming V̂ E/V E < 1, the PSS is found at an environmental velocity (V H) moving in the same direction as smooth pursuit. This corresponds to a retinal velocity (V R) in the direction opposite to that of pursuit but at a speed slower than eye speed. Therefore, the ratio [V R at PSS]/−V E yields the eye velocity estimation gain, V̂ E/V E.
Figure 1
 
Predictions. (A) Schematic prediction of the relationship between perceived velocity and stimulus velocity at the point of subjective stationarity (PSS). In this figure, the eye movement is leftward. In this particular example, the eye velocity estimation gain was set at 0.64 (from Freeman, 2001), and then the PSS was located at 36% of the pursuit speed to the left of head-centered (environmental) stationarity and at 64% of the pursuit speed to the right of retinal stationarity. (B) Schematic predictions for color motion. If the speed reduction in color motion occurred after velocity integration, the PSS for color motion would be the same as that for luminance-based motion (gray curve), with the only difference being in the slope of the psychometric function (red curve). In contrast, if the reduction occurred before velocity integration, the PSS would shift farther to the right (blue and purple curves), with the ratio [V R at PSS for luminance motion]/[V R at PSS for color motion] indicating the ratio of speed reduction of color motion.
As described in the Introduction section, color motion appears slower than luminance motion (e.g., Cavanagh et al., 1984). We measured the PSS during pursuit for luminance motion and color motion to see whether an orderly relationship existed among the estimation gains of V R for luminance motion, V R for color motion, and V E. In the Introduction section, we briefly argued that the visual system estimates head-centered velocity with the equation
V̂ H = V̂ R + V̂ E. Assuming that [V̂ E] is the same and that the summation property of the equation [+] is the same under all conditions, two possible factors remain to explain the perceptual slowing of color motion. One is a color-dependent alteration of V̂ H after velocity integration, and the other is a color-dependent alteration of V̂ R before velocity integration. Figure 1B shows our predictions. If the perceived reduction in the speed of color motion occurred after the velocity integration (V̂ H), the PSS would be the same as that of the luminance motion, and the subjective reduction in speed would take the form of a shallower slope of the psychometric function (Figure 1B, red). If the reduction of color motion speed occurred before velocity integration (V̂ R), a faster retinal velocity would be required to cancel V̂ E, shifting the PSS for color motion farther from the PSS for luminance motion (Figure 1B, blue); in other words, the Filehne illusion would be apparently reduced. Finally, if a sufficiently severe speed reduction occurred before velocity integration such that the estimation gain for retinal color motion was less than V̂ E/V E, the PSS would be found at a positive head-centered velocity (i.e., the Filehne illusion would be reversed, as is sometimes found in domains other than chromatic modulation; e.g., Ehrenstein et al., 1990; Freeman & Banks, 1998; Turano & Massof, 2001; Wertheim, 1987; Figure 1B, purple). To test these possibilities, we measured the PSS for two speeds of pursuit (8 and 2 deg/s). These speeds were used because the subjective speed reduction in equiluminant motion is known to be greater at slower physical speeds. Because we used a grating at a low enough spatial frequency (0.12 cpd), the temporal frequencies of the retinal images of the environmentally stationary grating were 0.96 and 0.24 Hz for the pursuit speeds of 8 and 2 deg/s, respectively. These spatial and temporal frequencies are sufficiently low to lead to the subjective slowing of equiluminant stimuli (e.g., Cavanagh et al., 1984; Gegenfurtner & Hawken, 1995).
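The predictions in Figure 1B can be expressed compactly. The sketch below (illustrative only; the retinal-velocity gains are hypothetical values, with the eye velocity estimation gain of 0.64 taken from the Freeman, 2001 example above) solves the balance condition for the head-centered velocity at the PSS and also checks the retinal temporal frequencies quoted for the two pursuit speeds.

    # Sketch of the Figure 1B predictions (illustrative gain values, not fitted data).

    def pss_head_velocity(v_e, g_e, g_r):
        """Head-centered velocity V_H at which g_r * (V_H - V_E) + g_e * V_E = 0."""
        return v_e * (g_r - g_e) / g_r

    v_e, g_e = -8.0, 0.64    # leftward pursuit; eye-velocity gain from Freeman (2001)

    # Hypothesis 1: slowing after integration -> retinal gain unchanged (g_r = 1),
    # so the PSS is the same as for luminance-based motion.
    print(pss_head_velocity(v_e, g_e, g_r=1.0))    # -2.88 deg/s (standard Filehne illusion)

    # Hypothesis 2: slowing before integration -> g_r < 1 shifts the PSS toward, or even
    # past, head-centered stationarity (inverse Filehne illusion when g_r < g_e).
    print(pss_head_velocity(v_e, g_e, g_r=0.7))    # -0.69 deg/s (reduced Filehne illusion)
    print(pss_head_velocity(v_e, g_e, g_r=0.5))    # +2.24 deg/s (inverse Filehne illusion)

    # Retinal temporal frequency of an environmentally stationary grating during pursuit:
    spatial_freq = 0.12                             # cycles/deg
    for speed in (8.0, 2.0):
        print(spatial_freq * speed)                 # 0.96 Hz and 0.24 Hz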
Methods
Observers
Observers were one of the authors (M.T.) and three volunteers who were unaware of the purpose of the experiments. All had normal or corrected-to-normal vision. All participants gave written informed consent prior to the experiments. The experiments and the consent form were approved by the University of Tokyo Research Ethics Committee. 
Apparatus
The visual stimuli were presented on the screen of a 120-Hz CRT monitor (Mitsubishi, RDF223H) driven by a computer (Apple Mac Pro Quad-core, 2 × 2.8 GHz) through a graphic card (NVIDIA Quadro FX 5600). The spatial resolution of the monitor was 800 × 600 pixels, with each pixel subtending 2 arcmin at the viewing distance of 86 cm. Each phosphor output of the CRT was linearized by a look-up table to give a 10-bit intensity resolution. The observer sat in a dimly illuminated room with his or her head fixed on a chin rest and viewed the display binocularly. A keyboard was placed in front of the observer to register responses. The horizontal and vertical gaze positions of both eyes were monitored at 500 Hz with a video-based eye tracking system (EyeLink 1000, SR Research). 
Stimuli
Stimuli were generated using a programming environment (MATLAB, The MathWorks) with the software library Psychtoolbox (Brainard, 1997; Pelli, 1997). A white disk (16 arcmin) with the maximum luminance was used as the pursuit target. A vertically oriented sinusoidal grating with a spatial frequency of 0.12 cpd was presented 10 deg below the pursuit target (Figure 2). The grating subtended 26.6 deg in width and 10 deg in height and drifted within a stationary spatial Gaussian window of contrast modulation. The full width at half maximum (FWHM) of the spatial window was 19.6 deg horizontally and 5.6 deg vertically. The background was a uniform gray field (28.7 cd/m2). The grating drifted at various retinal velocities according to the method of constant stimuli. 
Figure 2
 
Schematic illustration of the spatial configuration. A grating of S–(L + M) motion is shown as an example. The grating drifted within a stationary spatial Gaussian window of contrast modulation. The horizontal and vertical extents of the spatial Gaussian window are shown.
Achromatic and chromatic modulations
The sinusoids were achromatic and chromatic modulations along the cardinal axes of the DKL color space (Derrington, Krauskopf, & Lennie, 1984; Krauskopf, Williams, & Heeley, 1982; MacLeod & Boynton, 1979). The chromatic modulations of the gratings were chosen to isolate the three putative subcortical mechanisms, i.e., a luminance mechanism (“Lum”), an L–M mechanism, and an S–(L + M) mechanism (Derrington et al., 1984). The appearance of these gratings was achromatic, reddish–greenish, and lime–violet, respectively. The maximum luminance contrast (i.e., the contrast at the center of the grating) was 0.17. Cone contrasts were determined with respect to white (CIE (1931) xy coordinates, [x, y] = [0.334, 0.334]) and were calculated by assuming the cone fundamentals of the standard observer (Smith & Pokorny, 1975). Along the L–M axis, the maximum L- and M-cone contrasts were 12% and 14.7%, respectively. Expressed in CIE (1931) xy coordinates, the chromatic modulation along the L–M axis varied from [0.41, 0.29] to [0.23, 0.38]. Along the S–(L + M) axis, the maximum S-cone contrast was 81.9%. Expressed in CIE (1931) xy coordinates, the chromatic modulation along the S–(L + M) axis varied from [0.41, 0.49] to [0.29, 0.25]. The subjective equiluminance of each chromatic modulation had been adjusted for each observer by flicker photometry. These luminance and color contrasts were chosen to establish roughly equivalent saliency for the achromatic and chromatic gratings, which had been determined by the first author through pilot experiments. 
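The cone contrasts above follow the standard definition of contrast with respect to the white point. The snippet below illustrates that definition with hypothetical LMS excitation values; they are placeholders chosen to give numbers of roughly the size quoted, not the study's calibrated values, which would be derived from the Smith & Pokorny (1975) fundamentals.

    # Illustration of cone contrast "with respect to white" (hypothetical excitations).
    import numpy as np

    lms_white = np.array([0.68, 0.35, 0.05])   # placeholder L, M, S excitations of the white point
    lms_peak  = np.array([0.76, 0.30, 0.05])   # placeholder excitations at one extreme of an L-M modulation

    cone_contrast = (lms_peak - lms_white) / lms_white
    print(cone_contrast)   # roughly [+0.12, -0.14, 0.0]: ~12% L-cone and ~14% M-cone contrast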
Psychophysical procedure and analysis
A key press by the observer was followed by the horizontal movement of the pursuit target at a constant speed of 8 or 2 deg/s. The target moved for a distance of 26.6 deg from the left end to the right end or from the right end to the left end of the CRT screen. Observers were asked to pursue the target as precisely as possible. When the target came within ±2 deg of the horizontal center of the screen, the drifting grating appeared for 500 ms. After the target disappeared, the observer had to judge the motion direction of the grating using screen coordinates (leftward or rightward). The velocity of the drifting grating was randomly changed within each experimental block. The type of the sinusoid (Lum, L–M, or S–(L + M)) and target speed were changed between blocks. Blocks for different sinusoids and target speeds were tested in random order, with an inter-block rest interval of at least 1 min, and approximately 12 blocks were needed to acquire sufficient data to calculate each psychometric function. 
Because no systematic difference was found between the rightward and leftward pursuit conditions, the values under the rightward condition were flipped, and the two were merged. In the subsequent analysis, all data were formatted such that the observer was considered to be judging the perceived direction while engaging in leftward smooth pursuit eye movement. A psychometric function was drawn by fitting the cumulative Gaussian function to the proportion of perceiving rightward motion according to the constrained maximum likelihood method, and confidence intervals were computed by the bootstrap method using the software library Psignifit toolbox for MATLAB (Wichmann & Hill, 2001a, 2001b). The PSS was defined as the image velocity yielding a 50% proportion of perceiving rightward motion. 
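For illustration, a simplified version of this analysis is sketched below. It fits a cumulative Gaussian by maximum likelihood and reads off the 50% point as the PSS; the actual study used the Psignifit toolbox with constrained maximum likelihood and bootstrap confidence intervals, and the velocities and response counts here are made-up example data.

    # Simplified sketch of the psychometric-function analysis (example data only).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    velocities = np.array([-2.0, 0.0, 2.0, 4.0, 6.0, 8.0])   # retinal velocities (deg/s)
    n_trials   = np.array([40, 40, 40, 40, 40, 40])
    n_right    = np.array([2, 6, 14, 26, 35, 39])             # "rightward" responses

    def neg_log_likelihood(params):
        mu, sigma = params[0], max(abs(params[1]), 1e-6)
        p = norm.cdf(velocities, loc=mu, scale=sigma)          # cumulative Gaussian
        p = np.clip(p, 1e-6, 1 - 1e-6)                         # avoid log(0)
        return -np.sum(n_right * np.log(p) + (n_trials - n_right) * np.log(1 - p))

    fit = minimize(neg_log_likelihood, x0=[3.0, 2.0], method="Nelder-Mead")
    mu, sigma = fit.x[0], abs(fit.x[1])
    print(f"PSS (50% point) = {mu:.2f} deg/s, slope parameter = {sigma:.2f}")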
Eye movement recording and analysis
Calibration of the eye tracking system was carried out at the beginning of each block. Eye position data in each trial were stored on disk for offline analysis. The eye position time series were filtered using a Butterworth filter with a cut-off frequency of 30 Hz. Eye velocity was obtained by digital differentiation of eye position over time. Trials were excluded if they contained rapid eye movements exceeding 30 deg/s, indicating catch-up saccades. Trials were also excluded if the pursuit gain was outside the range of 100 ± 25% of the target velocity. Pursuit gain was determined by linear regression of eye position against time within the 600-ms period centered on the stimulus presentation. Note that the values for oculomotor pursuit gain did not change across different color modulation conditions because we used the same target with sufficient luminance contrast under all conditions. We collected data until at least 20 valid trials were obtained for each sampling point. Therefore, each point of the psychometric functions shown in Figure 3 and others was based on at least 40 trials.
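A minimal sketch of this preprocessing pipeline is given below. The cutoff frequency, saccade threshold, gain criterion, and analysis window follow the text; the filter order and the synthetic eye-position trace are assumptions made for illustration.

    # Sketch of the eye-movement preprocessing (synthetic trace; filter order assumed).
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 500.0                                    # EyeLink sampling rate (Hz)
    t = np.arange(0, 2.0, 1.0 / fs)
    target_speed = 8.0                            # deg/s
    eye_pos = target_speed * t + 0.05 * np.random.randn(t.size)   # synthetic pursuit trace

    # Low-pass filter the position trace (Butterworth, 30-Hz cutoff), then differentiate.
    b, a = butter(2, 30.0 / (fs / 2.0))           # 2nd order is an assumption; order not stated
    eye_pos_f = filtfilt(b, a, eye_pos)
    eye_vel = np.gradient(eye_pos_f, 1.0 / fs)    # deg/s

    # Exclusion criteria from the text.
    has_saccade = np.any(np.abs(eye_vel) > 30.0)

    # Pursuit gain: linear regression of eye position against time over a 600-ms window
    # centered on the stimulus presentation (here, the middle of the trace).
    center = t.size // 2
    half = int(0.3 * fs)
    win = slice(center - half, center + half)
    slope = np.polyfit(t[win], eye_pos_f[win], 1)[0]
    gain = slope / target_speed
    valid = (not has_saccade) and (0.75 <= gain <= 1.25)
    print(f"pursuit gain = {gain:.2f}, valid trial = {valid}")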
Figure 3
 
Results of Experiment 1 for a pursuit speed of 8 deg/s. (A) Psychometric functions obtained for four observers plotted in separate panels. The upper and lower abscissas indicate the velocity of the drifting grating in retinal coordinates and in head-centered (environmental) coordinates, respectively. These abscissas are offset from each other by 8 because the velocity of smooth pursuit was −8 deg/s in this experiment. The horizontal bar at the midpoint of each psychometric function indicates a 95% bootstrap confidence interval. (B) Summary data of the PSS. Symbols indicate individual data points, and the bars indicate the across-observer average, with error bars showing ±1 SEM. Each solid bar illustrates averaged V H at the PSS, whereas each open bar illustrates averaged V R at the PSS. (C) The velocity estimation gain for color motion, defined as the ratio [V R at the PSS for Lum motion]/[V R at the PSS for color motion]. The leftmost bar is set at 1 by definition, and the estimation gains for L–M and S–(L + M) motions are shown relative to this.
Results and discussion
Results for the directional judgments for the four observers under the condition of a pursuit speed of 8 deg/s are shown in separate panels of Figure 3A, in which psychometric functions for the probability of perceiving motion in the direction opposite to pursuit are plotted against the velocity of the drifting grating. 
Several important findings emerged. First, the PSS for Lum motion was found to lie between retinal stationarity and head-centered stationarity. This result indicates that the retinal velocity for Lum motion fed into the velocity integration to compensate for retinal image motion originating from smooth pursuit, but that this compensation was not perfect. The leftmost bar of Figure 3B plots the PSS for Lum motion, averaged across observers. Clearly, an actually stationary grating did not appear stationary. To become subjectively stationary, the grating had to drift on the display at a rate of approximately 4 deg/s in the same direction as smooth pursuit. This relationship is a confirmation of the well-known Filehne illusion and means that
V̂ E/V E ≈ 0.5. This value is consistent with previously determined values in the literature on the Filehne illusion, which ranged from 0.5 to 0.8 (Freeman, 1999, 2001; Freeman, Banks, & Crowell, 2000; Haarmeier, Bunjes, Lindner, Berret, & Thier, 2001).
Second, the psychometric function for L–M motion also shifted from retinal stationarity. Thus, motion signals in L–M motion contributed to the velocity integration. However, the location of the psychometric function was not the same as that for Lum motion. Compared with the PSS for Lum motion, that for L–M motion was farther away from retinal stationarity. Thus, from a head-centric perspective (Figure 3B, left-hand ordinate), the Filehne illusion was reduced (two-tailed t-test, t(3) = 3.82, p < 0.05). With respect to the retinal velocity (Figure 3B, right-hand ordinate), however, a somewhat different assertion can be made. That is, to reach the PSS, L–M motion must be retinally moved approximately 1.5 times as fast as the retinal velocity at the PSS for Lum motion. This relationship is visualized in the middle bar of Figure 3C. Assuming that
V̂ E/V E was constant at 0.5, the estimation gain in the retinal velocity (V̂ R/V R) for L–M motion as calculated by [V R at the PSS for Lum motion]/[V R at the PSS for L–M motion] was approximately 0.7 and was significantly less than 1 (two-tailed t-test, t(3) = 5.66, p < 0.05); to make the same impact as Lum motion, L–M motion must move 1/0.7 times as fast.
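The gain arithmetic in this paragraph can be spelled out with the approximate values quoted above (a cancellation velocity of about 4 deg/s for Lum motion and a retinal PSS velocity about 1.5 times faster for L–M motion); the snippet below simply restates that computation.

    # Arithmetic behind the gains reported above (approximate values from the text).
    v_e = -8.0                          # pursuit velocity (deg/s, leftward)
    v_r_pss_lum = 4.0                   # retinal velocity at the PSS for Lum motion
    v_r_pss_lm  = 1.5 * v_r_pss_lum     # retinal velocity at the PSS for L-M motion

    g_e = v_r_pss_lum / -v_e            # eye-velocity estimation gain, ~0.5
    g_r_lm = v_r_pss_lum / v_r_pss_lm   # retinal-velocity gain for L-M motion, ~0.67

    print(g_e, g_r_lm)                  # 0.5 and ~0.67: L-M motion must move 1/0.67 times as fast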
Third, the psychometric function for S–(L + M) motion shifted similarly for observer TF and even farther for the remaining observers. Consequently, the Filehne illusion was reduced even further, as shown in the rightmost bar of Figure 3B. The estimation gain in the retinal velocity was smaller, as shown in the rightmost bar of Figure 3C. For all observers, the PSS for S–(L + M) motion and that for Lum motion were significantly different (two-tailed t-test, t(3) = 4.22, p < 0.05). For all except TF, the PSS for S–(L + M) motion and that for L–M motion were also significantly different (the bootstrap method). 
The results under the condition of a pursuit speed of 2 deg/s are shown in Figure 4. Under both color motion conditions, the stimulus appeared to move in the same direction as the pursuit when the stimulus was physically stationary on the display (bootstrap method, p < 0.01). To make it appear stationary, the stimulus had to move in the opposite direction. This “inverse Filehne” illusion supports the notion that the visual system is integrating two velocity signals (i.e., the internally reduced
V̂ R of color motion and V̂ E). Additionally, as plotted in Figure 4C, these differences can be accounted for by the well-known property of color motion, namely that the reduction in registered speed is much greater at slower speeds or lower temporal frequencies (e.g., Cavanagh et al., 1984). The estimation gain of V R at a pursuit speed of 2 deg/s was smaller than that at a pursuit speed of 8 deg/s for both color motions (two-tailed t-test, t(3) = 4.54, p < 0.05 for L–M motion and t(3) = 6.29, p < 0.01 for S–(L + M) motion).
Figure 4
 
Results of Experiment 1 for a pursuit speed of 2 deg/s. (A) Psychometric functions obtained for four observers. (B) Summary data of the PSS. (C) Color motion gain.
The analysis above derived V̂ R from V̂ H under the assumptions that V̂ E/V E was constant across conditions and that we had equality, i.e., V̂ H = V̂ R + V̂ E, under all conditions, but these assumptions are not necessarily met. That is, the color-dependent change in perceived speed may be caused by a change in any of these three factors: [V̂ R], [V̂ E], or the summation property [+] of two velocities. Although one of the important reasons for locating the pursuit target separate from the moving grating was to keep V̂ E the same across different color modulation conditions, it is technically possible that the mere presence/absence of a particular chromatic display would affect V̂ E because it depends not only on extraretinal information but also on visual inputs (e.g., Crowell & Andersen, 2001; Haarmeier & Thier, 1996; Pack, Grossberg, & Mingolla, 2001; Sumnall, Freeman, & Snowden, 2003; Turano & Massof, 2001; Wertheim, 1994). Additionally, although we have no reason to believe otherwise, whether V̂ R and V̂ E are summed in linear fashion to yield perceived velocity when different colors are used to define a moving stimulus remains unknown.
In subsequent experiments, we will examine each of these possibilities, as briefly summarized in the following. In Experiment 2, we will test whether the mere presence of color or luminance motion on the display affects V̂ E. In Experiment 3, we will test whether color motion indeed reduces V̂ R during fixation. More specifically, we will test whether the subjective speed reduction observed during fixation matches the quantitative prediction derived from the results of Experiment 1; that is, along each color modulation axis, the relationship V̂ H = V̂ R + V̂ E that held with eye movements in Experiment 1 should reduce to V̂ H = V̂ R without eye movements in Experiment 3, with the same value of V̂ R. Based on this orderly relationship and the stable V̂ E, with no indication of a change in the summation property, we will reach the most parsimonious conclusion that it is neither V̂ E nor the summation property but V̂ R that actually changes depending on the color modulation axis and causes a change in V̂ H, the perceived head-centered motion of the grating.
Experiment 2
When a large portion of the display is occupied by an equiluminant moving stimulus, the unstable sensory evidence of color motion (reduction in the perceived speed and impact of motion) may lead to a different eye movement estimation gain (
V̂ E/V E) than when the display is occupied by a luminance-defined stimulus. To address this issue, we measured the PSS of a new drifting grating presented above the pursuit target in the presence of a grating of the same size and at the same location as in Experiment 1. Because the upper grating, for which the PSS was to be established, was kept constant along one modulation axis, the V̂ R of the upper grating was also constant. Therefore, if different values of the PSS for the upper grating were found, a change in V̂ E that depended on the color modulation axis of the lower grating must have occurred.
Methods
The methods were identical to those used in Experiment 1, except for the following. We presented two gratings simultaneously. One was a test grating presented 5 deg above the pursuit target. This was a sinusoidal grating with a spatial frequency of 0.18 cpd, windowed by a stationary spatial Gaussian (FWHM, 19.6 deg horizontally and 1.4 deg vertically). Additionally, an extra grating was presented 10 deg below the target. This grating had the same size and position parameters as the grating used in Experiment 1. Specifically, this was a sinusoidal grating with a spatial frequency of 0.12 cpd, windowed by a stationary spatial Gaussian (FWHM, 19.6 deg horizontally and 5.6 deg vertically). The grating drifted at various velocities according to the method of constant stimuli. The spatial frequencies of the upper and lower gratings were intentionally set at different values so as to prevent the use of the time course of their relative phase offset as a directional cue. 
The observers judged the motion direction of the upper grating according to coordinates on the screen (leftward or rightward). The lower grating was task-irrelevant and stationary on the screen; however, because observers had to pursue the target, the lower grating was retinally displaced in the direction opposite to but at the same speed as the eye movement. 
Results and discussion
As shown in Figure 5A, the velocity at the PSS for the Lum-modulated upper grating was not significantly different and was virtually identical irrespective of the color modulation axis of the lower grating (ANOVA, F(2, 3) = 0.73, p = 0.51). Because the same upper grating was used to determine the PSS under different conditions,
V̂ R had to remain unchanged across the different modulation axes of the lower grating. Thus, these results indicated that V̂ E was also unchanged.
Figure 5
 
Results of Experiment 2. (A) The PSS of the upper grating for Lum motion, plotted as a function of the modulation type of the lower task-irrelevant grating. Symbols indicate individual data points, and the bars indicate the across-observer average, with error bars showing ±1 SEM. (B) The PSS of the upper grating for L–M motion was plotted in the same format. (C) The PSS of the upper grating for S–(L + M) motion was plotted in the same format.
One might argue that the Lum-modulated upper grating provided sufficient sensory evidence to obtain reliable eye velocity estimation gain irrespective of the type of motion that was simultaneously presented below the target. To address this concern, we measured the PSS for the L–M-modulated upper grating (Figure 5B) and that for the S–(L + M)-modulated upper grating (Figure 5C) in the presence of the lower grating with one of the three types of modulation (Lum, L–M, and S–(L + M)). The resulting PSS values exhibited no significant differences (ANOVA, F(2, 2) = 0.65, p = 0.55 for the L–M-modulated upper grating and F(2, 2) = 0.11, p = 0.90 for the S–(L + M)-modulated upper grating), supporting the notion that the gain in eye velocity estimation was the same across different motion displays. 
Comparisons across the three panels of Figure 5 revealed that the V R at the PSS was faster when the upper grating was defined by color than when it was defined by luminance (Ryan's multiple comparison test, t(16) = 5.55, p < 0.01 for L–M motion and t(16) = 5.05, p < 0.01 for S–(L + M) motion). These results qualitatively duplicated the main results of Experiment 1, as shown in Figure 3B. However, the difference between the results obtained for the Lum grating and for the L–M grating (comparison between Figures 5A and 5B) was smaller than that in Experiment 1, and the difference between the results of the two types of chromatic modulation observed in Experiment 1 was absent (comparison between Figures 5B and 5C). Note that these deviations from the results of Experiment 1 were expected given the more central location of the upper grating for which the PSS was to be established. First, this pattern of results was predictable from the fact that the impact of the motion from equiluminant stimuli grows stronger with decreasing eccentricity (Bilodeau & Faubert, 1999). Second, additional position cues such as the relative displacement between the grating and the tracking target were presumably more readily available to the observers. Third, because the equiluminance setting had been optimized for the lower location, it might differ slightly from true equiluminance for the upper location. However, none of these possible scenarios affected the conclusion of this experiment;
V̂ E was found to be stable irrespective of visual inputs at the location of the lower grating.
Experiment 3
Assuming the relationship V̂ H = V̂ R + V̂ E, with V̂ E constant across color modulation conditions, Experiment 1 reached the interim conclusion that V̂ R for color motion was already attenuated before velocity integration. Experiment 2 confirmed that V̂ E was indeed constant. If V̂ R was already attenuated, the V R at the PSS for Lum motion, that for L–M motion, and that for S–(L + M) motion should all have been transformed into an effectively equivalent velocity prior to the stage of velocity integration during pursuit. Thus, when viewed during fixation rather than during pursuit, the apparent velocities of these motions would be perceived to be equivalent. If not, we would have to review the results of Experiment 1 and reconsider the possibility that the summation properties of V̂ R and V̂ E changed depending on the color modulation axis. Therefore, this experiment directly measured the perceived speed (V̂ H) of a moving grating viewed with fixation maintained. Note that V E = 0, and therefore, V̂ H = V̂ R.
Methods
The methods were identical to those used in Experiment 1, except for the following (Figure 6A). Observers viewed stimuli moving at the V R at the PSS, as determined under the condition of a pursuit speed of 8 deg/s in Experiment 1 but without pursuit. The fixation point, the appearance of which was identical to the pursuit target in Experiment 1, appeared at the center of the screen and remained still. The purpose of this experiment was to establish speed matching between color motion and Lum motion. Two drifting gratings were presented sequentially. One was an equiluminant grating, or a test grating, whose modulation was along the L–M axis or the S–(L + M) axis. The drifting speed of each grating was fixed at the V R at the PSS obtained for each observer in Experiment 1. The other was a reference grating whose modulation was along the Lum axis and whose drifting speed was randomly chosen from predetermined levels according to the method of constant stimuli. The temporal order of the presentation of the two gratings was randomly chosen. The observer judged which grating appeared to move faster while maintaining fixation on the fixation point. The color modulation axis of the test grating was changed between blocks. Each psychometric function obtained for each observer and condition was fitted with a cumulative Gaussian function; the drifting speed that yielded a 50% probability of seeing the test grating faster than the reference motion was determined as the speed match. 
Figure 6
 
The results of Experiment 3. (A) Schematic illustration of the experimental procedure. The arrow schematically illustrates the time axis. The first and second drifting gratings were viewed while fixation was maintained. (B) Averaged data. The open bars are exact copies of the open bars shown in Figure 3B, indicating the V R at the PSS for each modulation type. Each solid bar indicates the matched speed (i.e., the velocity of the luminance grating that appeared to move as fast as the V R at the PSS obtained for each color motion in Experiment 1). The error bars indicate ±1 SEM. (C) Each observer's data. The error bars indicate 95% bootstrap confidence intervals.
Results and discussion
Figure 6 shows the results. The open bars in Figure 6B indicate the V R at the PSS obtained in Experiment 1. Hence, they are identical to the open bars in Figure 3B and indicate that during smooth pursuit at 8 deg/s, the gratings drifting at these retinal velocities were subjectively stationary. The results of Experiment 3, plotted as the solid bars, indicate the velocity of the luminance grating that appeared to move as fast as the V R at the PSS for color motion when viewed with steady fixation. As can be clearly seen by comparing the open and solid bars with the same coloring, the perceived speed was underestimated for both L–M motion and S–(L + M) motion. The difference between actual and perceived speeds was significant both for L–M motion (two-tailed t-test, t(3) = 4.65, p < 0.05) and for S–(L + M) motion (two-tailed t-test, t(3) = 4.88, p < 0.05). Comparison among the first, third, and fifth bars of Figure 6B revealed that the apparent velocity was close to or even slower than the PSS for Lum motion, indicating that the retinal velocities that were needed to establish the PSS during smooth pursuit in Experiment 1 were perceptually almost equivalent when viewed with maintained fixation, even though the absolute speeds differed depending on the axis of color modulation. To put it differently, V̂ R/V R in this experiment was virtually the same as that in Experiment 1. Therefore, we concluded that slowing occurs at an early processing level in retina-centered coordinates, prior to velocity integration, during smooth pursuit as well as during steady fixation, with no change in the summation of V̂ R and V̂ E that depended on the color modulation axis.
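The logic of this comparison can be summarized as follows: if the slowing precedes velocity integration, a color grating drifting at its PSS velocity should, during fixation, be matched by a luminance grating moving at roughly the V R at the PSS for Lum motion. The sketch below restates that identity with approximate values from the text (the gain of 0.67 for L–M motion at 8 deg/s is the value quoted in the Comparison with previous studies section); the prediction is exact by construction, and Experiment 3 tested how closely the empirical matches obey it.

    # Sketch of the consistency check behind Figure 6B (approximate values from the text).
    v_r_pss_lum = 4.0          # deg/s, approximate retinal PSS velocity for Lum motion (Experiment 1)
    g_r_lm = 0.67              # retinal-velocity gain for L-M motion at 8 deg/s pursuit

    v_r_pss_lm = v_r_pss_lum / g_r_lm        # retinal speed needed to reach the PSS for L-M motion
    matched_lum_speed = g_r_lm * v_r_pss_lm  # luminance speed perceptually matched during fixation

    print(v_r_pss_lm, matched_lum_speed)     # ~6.0 and ~4.0: the match equals the PSS for Lum motion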
General discussion
Summary of results
We investigated whether and how color motion affects velocity integration during smooth pursuit. In Experiment 1, the psychometric functions of both color motions were shifted toward head-centered stationarity, although color motion had less impact on velocity integration than did luminance motion. Additionally, an inverse Filehne illusion was observed when the pursuit speed was slow. Experiment 2 revealed that eye velocity estimation (
V̂ E) was the same across different motion displays. Experiment 3 revealed that the weaker impact of color motion observed in the first experiment was quantitatively consistent with the perceptual slowing of color motion (V̂ R) during fixation. These results indicate that the perceived velocity of color motion during pursuit is determined by integrating an attenuated V̂ R with a stable V̂ E.
Comparison with previous studies
The present results are in general agreement with those of previous studies of perceptions of equiluminant color motion and of eye velocity estimation. At a pursuit speed of 8 deg/s,
V̂ R/V R = 0.67 for L–M motion and 0.55 for S–(L + M) motion. At a pursuit speed of 2 deg/s, V̂ R/V R = 0.41 for L–M motion and 0.29 for S–(L + M) motion. These gains are in agreement with the results reported by Cavanagh et al. (1984), who measured the reduction in speed of equiluminant motion with maintained fixation, similar to our Experiment 3. Additionally, V̂ E/V E ≈ 0.5, which is similar to Freeman's (2001) estimate.
A recent study demonstrated that when an equiluminant target was being tracked almost perfectly, its perceived speed was the same as that of a luminance target (Braun et al., 2008). It follows that the color of a perfectly tracked target would not influence
V̂ E per se (i.e., when V R = 0, the relationship V̂ H = V̂ R + V̂ E = V̂ E would hold irrespective of the color axis used to define the pursuit target). This observation is in accord with our finding in Experiment 2, in which we confirmed that V̂ E was stable irrespective of the color signals contained in visual stimuli. Thus, the present study is consistent with Braun et al.'s (2008) observation but took a different approach, extending the relationship between the apparent deterioration of color motion and velocity integration, V̂ H = V̂ R + V̂ E, to the case of V R ≠ 0.
Early speed reduction in color motion
The present results indicate that the apparent reduction in speed at equiluminance takes place prior to velocity integration at an early processing level in retina-centered coordinates. This finding can be discussed from the theoretical viewpoint of signal detection in a noisy system. When luminance contrast is low, local image motion energies are noisy, and the exact speed of the stimulus is difficult to determine. A computational theory suggests that in noisy circumstances, velocity is underestimated because slower velocities are assumed to be more likely to occur than faster ones in the natural world (e.g., Weiss, Simoncelli, & Adelson, 2002). Our perception is a best estimate as to what might occur in the world, given both sensory data and prior knowledge. In particular, the prior distribution of velocity could be estimated empirically from the statistics of motion in the world (Ullman, 1979). The present results suggest that in this computational framework, the slow-world prior distribution is not based on head-centered coordinates but on retina-centered coordinates. A similar argument could be made not only for color motion but also for impoverished retinal velocity estimation of luminance motion per se. Freeman and Banks (1998) measured the PSS for luminance motion containing a high spatial frequency and found a shift in comparison with the PSS for luminance motion with a low spatial frequency. 
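A minimal sketch of this Bayesian account, in the spirit of Weiss, Simoncelli, and Adelson (2002), is given below; the Gaussian likelihood and prior widths are illustrative assumptions, not values fitted to the present data.

    # Sketch of the "slow world" prior: with a Gaussian likelihood centered on the measured
    # retinal velocity and a zero-mean Gaussian prior over velocity, the posterior mean
    # shrinks toward zero as sensory noise grows (e.g., for noisier equiluminant signals).

    def posterior_speed(v_measured, sigma_likelihood, sigma_prior=1.0):
        """Posterior mean for a Gaussian likelihood combined with a zero-mean Gaussian prior."""
        w = sigma_prior**2 / (sigma_prior**2 + sigma_likelihood**2)
        return w * v_measured

    v_retinal = 8.0   # deg/s retinal sweep produced by 8 deg/s pursuit over a stationary scene
    print(posterior_speed(v_retinal, sigma_likelihood=0.2))  # reliable (luminance) signal: ~7.7 deg/s
    print(posterior_speed(v_retinal, sigma_likelihood=1.0))  # noisier (equiluminant) signal: 4.0 deg/s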
Relationship with internal and external motion information
When the observer makes slow pursuit eye movements, a luminance-defined stationary stimulus appears to move in the direction opposite to that of pursuit (Filehne illusion). However, we found that an equiluminant stationary stimulus can appear to move "together" with pursuit. At a pursuit velocity of −2 deg/s (i.e., to the left), the averaged V H at the PSS was approximately +0.5 deg/s for L–M motion and +2 deg/s for S–(L + M) motion. These positive velocities required to cancel subjective motion during pursuit clearly contrasted with the cancellation velocity for Lum motion, which was approximately −1 deg/s. This discrepancy (i.e., a standard Filehne illusion derived from luminance signals and an inverse Filehne illusion derived from color signals) should always occur in our brains whenever we make smooth pursuit eye movements at slow speeds while viewing a colorful stationary scene characterized by low spatial frequencies. According to the classical framework of parallel and independent pathways devoted to luminance and color signals (Livingstone & Hubel, 1988; Maunsell & Newsome, 1987; Zeki, 1974), luminance and color motions are conveyed via distinct pathways. Subsequent studies have argued for different pathways devoted to slow-moving and fast-moving chromatic stimuli (e.g., Gegenfurtner & Hawken, 1996a, 1996b), suggesting that signals for slow-moving chromatic stimuli are routed via ventral extrastriate areas before reaching a cortical motion-processing center, presumably area MT. In any case, image decompositions based on frequency, chromaticity, and contrast for every smooth pursuit eye movement would undoubtedly be hazardous. However, we never perceive such discrepancies in our daily lives. This may be because sensory evidence of color motion, especially S–(L + M) motion, is weak and is normally overridden by sensory evidence of luminance-based motion. The phenomenon of motion capture (Murakami & Shimojo, 1993; Ramachandran, 1987), in which luminance motion phenomenologically "captures" a stationary color stimulus such that luminance and color appear to move together, is not only useful in binding luminance and color signals belonging to the same moving object but also crucial in maintaining position constancy during pursuit. The world is normally full of environmentally stationary objects, which means that during each pursuit eye movement, the retinal image of the world is normally full of movements that are registered with different gains depending on the color axis. Color signals with poorly registered velocities are perceptually bound with more reliable luminance-based motions, thereby avoiding undesirable diplopia or color decomposition due to different velocity gains, which might occur with each pursuit if the mechanism of motion binding across color axes did not work.
Acknowledgments
MT is supported by the Japan Society for the Promotion of Science. IM is supported by the Nissan Science Foundation and JSPS Funding Program for Next Generation World-Leading Researchers (LZ004). 
Commercial relationships: none. 
Corresponding author: Masahiko Terao. 
Email: masahiko_terao@mac.com. 
Address: Department of Life Sciences, The University of Tokyo, Building No. 2, Room 104, 3-8-1 Komaba, Meguro-ku, Tokyo, Japan. 
References
Aubert H. (1886). Die Bewegungsempfindung. Pflügers Archiv, 39, 347–370. [CrossRef]
Bilodeau L. Faubert J. (1999). The oblique effect with colour defined motion throughout the visual field. Vision Research, 39, 757–763. [CrossRef] [PubMed]
Brainard D. H. (1997). The psychophysics toolbox. Spatial Vision, 10, 433–436. [CrossRef] [PubMed]
Braun D. I. Mennie N. Rasche C. Schütz A. C. Hawken M. J. Gegenfurtner K. R. (2008). Smooth pursuit eye movements to isoluminant targets. Journal of Neurophysiology, 100, 1287–1300. [CrossRef] [PubMed]
Cavanagh P. Anstis S. M. (1991). The contribution of color to motion in normal and color-deficient observers. Vision Research, 31, 2109–2148. [CrossRef] [PubMed]
Cavanagh P. Favreau O. E. (1985). Color and luminance share a common motion pathway. Vision Research, 25, 1595–1601. [CrossRef] [PubMed]
Cavanagh P. Tyler C. W. Favreau O. E. (1984). Perceived velocity of moving chromatic gratings. Journal of the Optical Society of America A, 1, 893–899. [CrossRef]
Cropper S. J. Derrington A. M. (1996). Rapid colour-specific detection of motion in human vision. Nature, 379, 72–74. [CrossRef] [PubMed]
Cropper S. J. Wuerger S. M. (2005). The perception of motion in chromatic stimuli. Behavioral and Cognitive Neuroscience Reviews, 4, 192–217. [CrossRef] [PubMed]
Crowell J. A. Andersen R. A. (2001). Pursuit compensation during self-motion. Perception, 30, 1465–1488. [CrossRef] [PubMed]
Derrington A. M. Badcock D. R. (1985). The low level motion system has both chromatic and luminance inputs. Vision Research, 25, 1874–1884.
Derrington A. M. Henning G. B. (1993). Detecting and discriminating the direction of motion of luminance and colour gratings. Vision Research, 33, 799–811. [CrossRef] [PubMed]
Derrington A. M. Krauskopf J. Lennie P. (1984). Chromatic mechanisms in lateral geniculate nucleus of macaque. The Journal of Physiology, 357, 241–265. [CrossRef] [PubMed]
Dichgans J. Wist E. Diener H. C. Brandt T. (1975). The Aubert–Fleischl phenomenon: A temporal frequency effect on perceived velocity in afferent motion perception. Experimental Brain Research, 23, 529–533. [CrossRef] [PubMed]
Dougherty R. F. Press W. A. Wandell B. (1999). Perceived speed of colored stimuli. Neuron, 24, 893–899. [CrossRef] [PubMed]
Ehrenstein W. H. Mateeff S. Hohnsbein J. (1990). The strength of the Filehne illusion depends on the velocity of ocular pursuit. Perception, 19, 411–412.
Filehne W. (1922). Über das optische Wahrnehmen von Bewegungen. Zeitschrift für Sinnesphysiologie, 53, 134–145.
Fleischl E. V. (1882). Physiologisch-optische Notizen, 2. Mitteilung. Sitzung Wiener Bereich der Akademie der Wissenschaften, 3, 7–25.
Freeman T. C. (2001). Transducer models of head-centered motion perception. Vision Research, 41, 2741–2755. [CrossRef] [PubMed]
Freeman T. C. A. (1999). Path perception and Filehne illusion compared: Model and data. Vision Research, 39, 2659–2667. [CrossRef] [PubMed]
Freeman T. C. A. Banks M. S. (1998). Perceived head-centric speed is affected by both extra-retinal and retinal errors. Vision Research, 38, 941–945. [CrossRef] [PubMed]
Freeman T. C. A. Banks M. S. Crowell J. A. (2000). Extra-retinal and retinal amplitude and phase errors during Filehne illusion and path perception. Perception & Psychophysics, 62, 900–909. [CrossRef] [PubMed]
Gegenfurtner K. R. Hawken M. J. (1995). Temporal and chromatic properties of motion mechanisms. Vision Research, 35, 1547–1563. [CrossRef] [PubMed]
Gegenfurtner K. R. Hawken M. J. (1996a). Interactions between color and motion in the visual pathways. Trends in Neurosciences, 19, 394–401. [CrossRef]
Gegenfurtner K. R. Hawken M. J. (1996b). Perceived velocity of luminance, chromatic and non-Fourier stimuli: Influence of contrast and temporal frequency. Vision Research, 36, 1281–1290. [CrossRef]
Goldstein E. B. (2007). Sensation and perception. Belmont, CA: Thomson Wadsworth.
Haarmeier T. Bunjes F. Lindner A. Berret E. Thier P. (2001). Optimizing visual motion perception during eye movements. Neuron, 32, 527–535. [CrossRef] [PubMed]
Haarmeier T. Thier P. (1996). Modification of the Filehne illusion by conditioning visual stimuli. Vision Research, 36, 741–750. [CrossRef] [PubMed]
Krauskopf J. Williams D. R. Heeley D. W. (1982). Cardinal directions of color space. Vision Research, 22, 1123–1131. [CrossRef] [PubMed]
Livingstone M. S. Hubel D. H. (1988). Segregation of form, color, movement and depth: Anatomy, physiology and perception. Science, 240, 740–749. [CrossRef] [PubMed]
Mack A. Herman E. (1973). Position constancy during pursuit eye movement: An investigation of the Filehne illusion. Quarterly Journal of Experimental Psychology, 25, 71–84. [CrossRef] [PubMed]
Mack A. Herman E. (1978). The loss of position constancy during pursuit eye movements. Vision Research, 18, 55–62. [CrossRef] [PubMed]
MacLeod D. I. A. Boynton R. M. (1979). Chromaticity diagram showing cone excitation by stimuli of equal luminance. Journal of the Optical Society of America, 69, 1183–1186. [CrossRef] [PubMed]
Maunsell J. H. R. Newsome W. T. (1987). Visual processing in monkey extrastriate cortex. Annual Review of Neuroscience, 10, 363–401. [CrossRef] [PubMed]
Mullen K. T. Baker C. L. (1985). A motion aftereffect from an isoluminant stimulus. Vision Research, 25, 685–688. [CrossRef] [PubMed]
Murakami I. Shimojo S. (1993). Motion capture changes to induced motion at higher luminance contrasts, smaller eccentricities, and larger inducer sizes. Vision Research, 33, 2091–2107. [CrossRef] [PubMed]
Pack C. Grossberg S. Mingolla E. (2001). A neural model of smooth pursuit control and motion perception by cortical area MST. Journal of Cognitive Neuroscience, 13, 102–120. [CrossRef] [PubMed]
Pelli D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442. [CrossRef] [PubMed]
Ramachandran V. S. (1987). Interaction between colour and motion in human vision. Nature, 328, 645–647. [CrossRef] [PubMed]
Ruppertsberg A. Wuerger S. M. Bertamini M. (2003). The chromatic selectivity of global motion perception. Visual Neuroscience, 20, 421–428. [CrossRef] [PubMed]
Seidemann E. Poirson A. B. Wandell B. A. Newsome W. T. (1999). Color signals in area MT of the macaque monkey. Neuron, 24, 911–917. [CrossRef] [PubMed]
Smith V. C. Pokorny J. (1975). Spectral sensitivity of the foveal cone photopigments between 400 and 500 nm. Vision Research, 15, 161–171. [CrossRef] [PubMed]
Stromeyer C. F., 3rd Kronauer R. E. Ryu A. Chaparro A. Eskew R. T., Jr. (1995). Contributions of human long-wave and middle-wave cones to motion detection. The Journal of Physiology, 485, 221–243. [CrossRef]
Sumnall J. H. Freeman T. C. A. Snowden R. J. (2003). Optokinetic potential and the perception of head-centred motion. Vision Research, 43, 1709–1718. [CrossRef] [PubMed]
Turano K. A. Massof R. W. (2001). Nonlinear contribution of eye velocity to motion perception. Vision Research, 41, 385–395. [CrossRef] [PubMed]
Ullman S. (1979). The interpretation of visual motion. Cambridge, MA: MIT Press.
von Holst E. (1954). Relations between the central nervous system and the peripheral organs. British Journal of Animal Behaviour, 2, 89–94. [CrossRef]
Wandell B. A. Poirson A. B. Newsome W. T. Baseler H. A. Boynton G. M. Huk A. et al. (1999). Color signals in human motion-selective cortex. Neuron, 24, 901–909. [CrossRef] [PubMed]
Weiss Y. Simoncelli E. P. Adelson E. H. (2002). Motion illusions as optimal percepts. Nature Neuroscience, 5, 598–604. [CrossRef] [PubMed]
Wertheim A. H. (1987). Retinal and extraretinal information in movement perception: How to invert the Filehne illusion. Perception, 16, 299–308. [CrossRef] [PubMed]
Wertheim A. H. (1994). Motion perception during self-motion—The direct versus inferential controversy revisited. Behavioral and Brain Sciences, 17, 293–311. [CrossRef]
Wertheim A. H. Van Gelder P. (1990). An acceleration illusion caused by underestimation of stimulus velocity during pursuit eye movements: The Aubert–Fleischl phenomenon revisited. Perception, 19, 471–482 (erratum in Perception, 19(5), 700). [CrossRef] [PubMed]
Wichmann F. A. Hill N. J. (2001a). The psychometric function: I. Fitting, sampling, and goodness of fit. Perception & Psychophysics, 63, 1293–1313. [CrossRef]
Wichmann F. A. Hill N. J. (2001b). The psychometric function: II. Bootstrap-based confidence intervals and sampling. Perception & Psychophysics, 63, 1314–1329. [CrossRef]
Zeki S. M. (1974). Functional organization of a visual area in the posterior bank of the superior temporal sulcus of the rhesus monkey. The Journal of Physiology, 236, 549–573. [CrossRef] [PubMed]
Figure 1
 
Predictions. (A) Schematic prediction of the relationship between perceived velocity and stimulus velocity at the point of subjective stationarity (PSS). In this figure, the eye movement is leftward. In this particular example, the eye velocity estimation gain was set at 0.64 (from Freeman, 2001), so the PSS was located at 0.36 times the pursuit speed to the left of head-centered (environmental) stationarity and at 0.64 times the pursuit speed to the right of retinal stationarity. (B) Schematic predictions for color motion. If the speed reduction in color motion occurred after velocity integration, the PSS for color motion would be the same as that for luminance motion (gray curve), with the only difference being in the slope of the psychometric function (red curve). In contrast, if the reduction occurred before velocity integration, the PSS would shift farther to the right (blue and purple curves), with the ratio [V R at PSS for luminance motion]/[V R at PSS for color motion] indicating the amount by which color motion is slowed.
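As a worked illustration of panel (A), the following sketch computes where the PSS falls under the linear integration model described in the Introduction, assuming a unit gain on retinal velocity and an eye velocity estimation gain of 0.64; the pursuit speed of 8 deg/s and the variable names are illustrative choices, not values taken from the experiments.

```python
def pss_retinal_velocity(eye_velocity, eye_gain, retinal_gain=1.0):
    # Retinal velocity at which the perceived head-centered velocity is zero:
    # retinal_gain * V_R + eye_gain * V_E = 0  =>  V_R = -(eye_gain / retinal_gain) * V_E
    return -(eye_gain / retinal_gain) * eye_velocity

eye_velocity = -8.0  # leftward pursuit, deg/s (illustrative)
eye_gain = 0.64      # eye velocity estimation gain, as in the example above

v_r_pss = pss_retinal_velocity(eye_velocity, eye_gain)  # 5.12 deg/s, rightward on the retina
v_h_pss = v_r_pss + eye_velocity                         # -2.88 deg/s, leftward in the world

print(v_r_pss, v_h_pss)
# The PSS lies 0.64 * |V_E| to the right of retinal stationarity
# and 0.36 * |V_E| to the left of head-centered stationarity.
```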
Figure 2
 
Schematic illustration of the spatial configuration. A grating of S–(L + M) motion is shown as an example. The grating drifted within a stationary spatial Gaussian window of contrast modulation. The horizontal and vertical extents of the spatial Gaussian window are shown.
Figure 3
 
Results of Experiment 1 for a pursuit speed of 8 deg/s. (A) Psychometric functions obtained for four observers plotted in separate panels. The upper and lower abscissas indicate the velocity of the drifting grating in retinal coordinates and in head-centered (environmental) coordinates, respectively. These abscissas are offset from each other by 8 deg/s because the velocity of smooth pursuit was −8 deg/s in this experiment. The horizontal bar at the midpoint of each psychometric function indicates a 95% bootstrap confidence interval. (B) Summary data of the PSS. Symbols indicate individual data points, and the bars indicate the across-observer average, with error bars showing ±1 SEM. Each solid bar illustrates the averaged V H at the PSS, whereas each open bar illustrates the averaged V R at the PSS. (C) The velocity estimation gain for color motion, defined as the ratio [V R at the PSS for Lum motion]/[V R at the PSS for color motion]. The leftmost bar is set at 1 by definition, and the estimation gains for L–M and S–(L + M) motions are shown relative to this.
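A minimal sketch of how the quantity in panel (C) can be computed from the PSS estimates, assuming it is simply the ratio of V R at the PSS for Lum motion to V R at the PSS for each chromatic condition; the PSS values below are placeholders, not the observers' data.

```python
# Hypothetical V_R at the PSS (deg/s); placeholders, not measured values.
pss_v_r = {
    "Lum": 5.1,
    "L-M": 6.8,
    "S-(L+M)": 8.5,
}

# Gain relative to luminance motion: [V_R at PSS for Lum] / [V_R at PSS for this condition].
color_motion_gain = {condition: pss_v_r["Lum"] / v_r for condition, v_r in pss_v_r.items()}
print(color_motion_gain)  # {'Lum': 1.0, 'L-M': 0.75, 'S-(L+M)': 0.6}
```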
Figure 4
 
Results of Experiment 1 for a pursuit speed of 2 deg/s. (A) Psychometric functions obtained for four observers. (B) Summary data of the PSS. (C) Color motion gain.
Figure 5
 
Results of Experiment 2. (A) The PSS of the upper grating for Lum motion, plotted as a function of the modulation type of the lower task-irrelevant grating. Symbols indicate individual data points, and the bars indicate the across-observer average, with error bars showing ±1 SEM. (B) The PSS of the upper grating for L–M motion, plotted in the same format. (C) The PSS of the upper grating for S–(L + M) motion, plotted in the same format.
Figure 6
 
Results of Experiment 3. (A) Schematic illustration of the experimental procedure. The arrow schematically illustrates the time axis. The first and second drifting gratings were viewed while fixation was maintained. (B) Averaged data. The open bars are exact copies of the open bars shown in Figure 3B, indicating the V R at the PSS for each modulation type. Each solid bar indicates the matched speed (i.e., the velocity of the luminance grating that appeared to move as fast as the V R at the PSS obtained for each color motion in Experiment 1). The error bars indicate ±1 SEM. (C) Each observer's data. The error bars indicate 95% bootstrap confidence intervals.
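A minimal sketch of the logic behind panel (B), assuming that if the slowing of color motion arises before velocity integration, a luminance grating matched in apparent speed to a color grating drifting at its pursuit PSS should drift at roughly the luminance PSS itself; every number here is a placeholder rather than a measured value.

```python
pss_v_r_color = 8.5     # hypothetical V_R at the PSS for a chromatic grating (deg/s)
speed_reduction = 0.60  # hypothetical apparent slowing of color motion during fixation

matched_lum_speed = pss_v_r_color * speed_reduction  # about 5.1 deg/s
pss_v_r_lum = 5.1                                     # hypothetical V_R at the PSS for Lum motion

# If the slowing arises before velocity integration, the matched luminance speed
# should approximately equal the luminance PSS itself.
print(abs(matched_lum_speed - pss_v_r_lum) < 0.2)     # True for these placeholder values
```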