Research Article  |   February 2005
Reference frames in early motion detection
Journal of Vision February 2005, Vol.5, 4. doi:10.1167/5.2.4
Camille Morvan, Mark Wexler; Reference frames in early motion detection. Journal of Vision 2005;5(2):4. doi:10.1167/5.2.4.

Abstract

To perceive the real motion of objects in the world while moving the eyes, retinal motion signals must be compensated by information about eye movements. Here we study when this compensation takes place in the course of visual processing, and whether uncompensated motion signals are ever available. We used a paradigm based on asymmetry in motion detection: Fast-moving objects are found easier among slow-moving distractors than are slow objects among fast distractors. By coupling object motion to eye motion, we created stimuli that moved fast on the retina but slowly in an eye-independent reference frame, or vice versa. In the 100 ms after stimulus onset, motion detection is dominated by retinal motion, uncompensated for eye movements. As early as 130 ms, compensated signals become available: Objects that move slowly on the retina but fast in an eye-independent frame are detected as easily as those that move fast on the retina.

Introduction
Visual search for motion is asymmetric: efficient at detecting a moving object among stationary distractors and inefficient at the opposite task, namely detecting a stationary object among moving distractors (Ivry & Cohen, 1992; Dick, Ullman, & Sagi, 1987; Royden, Wolfe, & Klempen, 2001). The visual system seems to have evolved an effective motion detector. However, as this asymmetry has been found in the nonmoving observer with fixed gaze, it is not clear in which reference frame the motion detection operates: retinocentric, head-centric, trunk-centric, or earth-centric. While a retinocentric motion detector is undoubtedly useful (e.g., for planning eye movements), it confounds physical object motion and that induced by the observer’s movements. Indeed, while tracking a moving object with the eyes, the image of the object slows down or comes to a halt on the retina, whereas the projection of the stationary background sweeps across the retinal image. In spite of this, we usually perceive the object as moving and the world as stationary. As for many other characteristics of the visual scene (e.g., lightness, occlusion, depth, and size), retinal motion information has to be processed to extract the actual, distal properties of the scene (physical object motion) from the accidental properties dependent on the retinal projection. This process, whose end result is known as spatial constancy, is usually achieved during tracking and saccadic eye movements. 
To achieve spatial constancy during eye movements, the visual system has to compensate for retinal motion due to eye movements. It has been claimed that in performing this compensation, the visual system uses, at least in part, an extra-retinal signal that encodes eye movements (von Helmholtz, 1867; von Holst & Mittelstaedt, 1950; Sperry, 1950) (for a review, see Carpenter, 1988), and background motion is perceived only if the retinal and extraretinal signals differ (Mach, 1959/1914; Brindley & Merton, 1960; Stevens et al., 1976). At the same time, it is known that compensation for eye movements is also partly achieved through a hypothesis of visual background stationarity (Duncker, 1929; Matin, Picoult, Stevens, Edwards, & MacArthur, 1982), a process that does not require extraretinal information. Although in most cases the visual system compensates correctly for eye movements, some well-known illusions reveal that constancy during smooth pursuit is actually incomplete (Filehne illusion, Filehne, 1922; Aubert-Fleischl effect, Aubert, 1886; Fleischl, 1882), as if the visual system slightly underestimated the actual displacement of the eyes. In some special cases, compensation for smooth pursuit eye movements can approach zero (Wallach, Becklen, & Nitzberg, 1985; Li, Brenner, Cornelissen, & Kim, 2002), and has been found to be absent in at least one neurological patient (Haarmeier, Thier, Repnow, & Petersen, 1997). 
The problems raised by spatial constancy and compensation for eye movements have been the topic of extensive research in neurophysiology (Duhamel, Colby, & Goldberg, 1992; Ross, Morrone, Goldberg, & Burr, 2001; Merriam, Genovese, & Colby, 2003; Andersen, Essick, & Siegel, 1985; Snyder, Grieve, Brotchie, & Andersen, 1998). Concerning the perception of motion during smooth pursuit, two areas in the superior temporal sulcus in monkeys, MT and MST, are known to be specialized in processing visual motion (Komatsu & Wurtz, 1988; Newsome, Wurtz, & Komatsu, 1988). While neurons in MT respond only to retinal motion, neurons have been found in MST (and especially in a sub-area, MSTd) that receive extraretinal information about eye movements (Newsome et al., 1988). There is good evidence that these signals are used to differentiate eye movement-induced retinal motion from physical object motion (Erickson & Thier, 1991; Ilg, Schumann, & Thier, 2004). 
The present study is concerned with the problem of reference frames and compensation for smooth pursuit eye movements in motion detection, and with the timing of this compensation. When the eyes are engaged in pursuit and an object appears, the “raw input” about its motion is in a retinocentric reference frame. How long does it take for the representation of the object’s motion to be compensated for the eye movement? Is compensation immediate, or is there a time window in which uncompensated movement is detected? Although this issue has received some attention in psychophysics (Stoper, 1967; Mack & Herman, 1978) and neurophysiology (Haarmeier & Thier, 1998; Hoffmann & Bach, 2002; Tikhonov, Haarmeier, Thier, Braun, & Lutzenberger, 2004), little evidence of time evolution of compensation has been presented. Here we introduce a technique that detects the time evolution of compensation for very brief stimuli. 
We have used a modified form of the Ivry and Cohen visual search task (Ivry & Cohen, 1992) mentioned above, in which a subject either searches for a fast-moving item among slow-moving distractors, or vice versa, and in which fast-moving objects are better detected by observers with immobile gaze. We have modified the task by yoking stimulus motion to the observer’s gaze, thus dissociating motion on the retina from motion on the screen. The idea is schematically illustrated in Figure 1. While the observer pursues a cross (moving at 6°/s) on a computer screen, a number of moving points briefly appear. On half the trials, one point (the target) has a different speed than the rest (but all objects move in the same direction); the subject’s task is to determine if such a target is present. The motion of the points is chosen so that the target (when present) moves slowly on the screen but fast on the retina, while the distractors move fast on the screen but slowly on the retina (the left panels in Figure 1), or the reverse: the target fast on the screen but slow on the retina, and the distractors slow and fast, respectively (the right panels in Figure 1). If the visual search asymmetry (Ivry & Cohen, 1992) is due to efficient detection of fast targets on the retina, then the stimulus on the left of Figure 1 should be detected better than the one on the right. If, on the other hand, rapid objects in an allocentric frame1 are detected efficiently (i.e., motion that is already compensated for eye movement), then the stimulus on the right should be detected better. 
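Under the assumption of perfect pursuit, this dissociation reduces to a subtraction of speeds. A minimal sketch of the bookkeeping (function and variable names are ours, not from the study; speeds share one horizontal axis, with the pursuit direction positive):

```python
def retinal_speed(screen_speed, eye_speed):
    """Speed of an object's image on the retina, in deg/s.

    With perfect pursuit, eye speed equals the speed of the cross."""
    return screen_speed - eye_speed

EYE = 6.0  # deg/s, speed of the pursued cross on the screen

# Left panels of Figure 1: target still on the screen (slow), distractors
# moving at 5 deg/s on the screen (fast). On the retina the roles reverse.
assert abs(retinal_speed(0.0, EYE)) == 6.0  # target: fast on the retina
assert abs(retinal_speed(5.0, EYE)) == 1.0  # distractors: slow on the retina
```

The right panels of Figure 1 simply swap the two screen speeds, so the same subtraction makes the target slow (1°/s) and the distractors fast (6°/s) on the retina.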
Figure 1
 
Incorporating eye movement into the visual search paradigm allows us to dissociate retino- and allocentric reference frames in motion detection. Top panels show stimuli on the screen while subjects pursue the cross (perfect pursuit is assumed in this example), while bottom panels show the corresponding retinal projections. In the stimulus on the left, the motion of the target is slow on the screen but fast on the retina (with the distractors fast and slow, respectively). The speeds of the target are reversed in the stimulus on the right: fast on the screen and slow on the retina.
An important goal of our study was to measure the time course of the compensation process. This required stimuli with well-controlled durations, which is not possible with the standard response time paradigm that is used in visual search. We therefore presented brief stimuli (between about 80 and 150 ms) followed by masks, and used detection performance rather than response time as the dependent variable. To check that the asymmetry found previously for response times also holds for detection performance, our subjects also took part in a fixation condition, in which they fixated a cross while the stimulus approximately reproduced the optic flow from a previous pursuit stimulus. 
Methods
Visual display and procedure
Trials were first performed in the pursuit condition while the subject’s eye movements were recorded. Gaze position and speed recorded in the pursuit trial were used to approximately reproduce the optic flow in corresponding later fixation trials. The subjects (8 men with normal or corrected-to-normal vision, average age 27) performed 960 trials grouped in three sessions. Each session, which lasted about an hour, was interrupted by at least two eye-tracker recalibrations and by rest breaks. A session began with a block of 16 pursuit trials, followed by a block of 16 corresponding fixation trials, and so forth. 
A trial began with the presentation of the fixation cross (two red lines of 0.8° length) at its starting position of 11.8° from the center of the screen to the left or to the right, according to the future direction of movement. The subject pressed a mouse button to begin, at which point the cross turned white until the end of the trial. In the pursuit condition, the cross accelerated from 0 to 6°/s with a constant acceleration for 1.55 s, then moved at constant speed for 1.8 s; in the fixation condition, the cross remained immobile for the same amount of time (speeds are given in the screen reference frame; motion to the subject’s right is positive). Following this initial phase, during which only the cross was visible, the stimulus appeared; it was composed of nine red disks, randomly positioned without overlap, each having a radius of 0.3°. In the pursuit condition, the disks either remained still or moved at 5°/s. In the “slow target” condition (slow on the retina), the target (when present, 50% of trials) moved at 5°/s while the distractors remained still, whereas in the fast condition the target remained still and the distractors moved at 5°/s. In the fixation condition, the speeds and positions of the disks were computed using the eye speed from the corresponding pursuit trial; for example, with a pursuit speed of 5.4°/s, the disks that moved at 5°/s in pursuit moved at 0.4°/s in fixation, and the disks that did not move in pursuit moved at 5.4°/s in fixation. In all conditions, the cross remained visible during the stimulus phase; in the pursuit condition, it continued moving at 6°/s, whereas in the fixation condition it remained still. The stimulus was presented for 82, 105, 129, or 152 ms, and was followed by a 300-ms mask composed of 500 white lines, whose endpoints were randomly chosen on each monitor frame. 
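The mapping from a pursuit trial to its fixation replay can be sketched as follows; the function name is ours, and the worked example uses the 5.4°/s eye speed quoted above. Because retinal speed is screen speed minus eye speed, and the eye is still during fixation, the replay screen speed must equal the retinal speed the disk had during pursuit:

```python
def replay_screen_speed(pursuit_screen_speed, recorded_eye_speed):
    """Screen speed of a disk in the fixation replay, in deg/s.

    Chosen so the replay reproduces the disk's retinal speed from the
    pursuit trial (retinal = screen - eye; eye speed is ~0 in fixation).
    The sign encodes direction along the pursuit axis."""
    return pursuit_screen_speed - recorded_eye_speed

# Example from the text: recorded pursuit speed of 5.4 deg/s.
assert abs(replay_screen_speed(5.0, 5.4) + 0.4) < 1e-9  # replays at 0.4 deg/s
assert abs(replay_screen_speed(0.0, 5.4) + 5.4) < 1e-9  # replays at 5.4 deg/s
```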
The mask was followed by a response screen, instructing the subject to answer whether all disks moved the same way or if one moved differently from the others (target present/absent). Except for the regularly alternating blocks of pursuit and fixation trials, trial order was randomized. 
Eye movement recording
Gaze position was measured with a Skalar Iris infrared limbus eye tracker. The eye position data were sampled at the same frequency as the display monitor, 85 Hz. Subjects’ head movements were restrained by means of a chin rest with the eyes approximately 57 cm from the monitor screen. The eye tracker was operated in monocular position mode, with one eye (the left in 10 sessions, the right in 14 sessions) set for horizontal reading. The voltage readings were converted into fixation positions on the monitor by means of a calibration procedure performed at the beginning of each session, and then at least twice during the session, in which the subject fixated a sequence of calibration points, with the screen position fit as a cubic polynomial in the voltage output of the eye tracker. 
Eye movement analysis
Eye blinks and saccades were detected by computing on-line the speed from two successive frames. If speed exceeded 40°/s, the trial was aborted (all aborted trials were performed later during the block). In pursuit trials, tracking gain (ratio of eye speed to moving cross speed) was checked during the display of the search array. If gain was less than 0.7 or greater than 1.3, the trial was aborted. 
Offline, filters were applied to eliminate trials in which incorrect tracking led to inappropriate stimuli. First, a second saccade filter detected the conjunction of eye speed over 10°/s and acceleration over 250°/s². (In offline filters, speed and acceleration were calculated by performing first- and second-order fits in a 250-ms window terminating at stimulus offset.) Second, the retinal speed of the disks was computed, taking into account the measured eye speed; only those trials were kept in which the disks all moved in the same direction on the retina, with speeds between 0 and 2.8°/s for the slow disks and between 5 and 7.8°/s for the fast ones. Finally, trials in which acceleration led to a speed change of more than 25% were discarded. These filters eliminated 52% of the trials, leaving 1776 pursuit trials and 1900 fixation trials. 
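A minimal sketch of these acceptance criteria, with thresholds taken from the text (function and parameter names are ours; the estimation of speed and acceleration from the fitted 250-ms windows is omitted):

```python
def trial_passes_offline_filters(slow_retinal, fast_retinal,
                                 same_direction, speed_change):
    """Offline trial-acceptance criteria (a sketch; names are ours).

    slow_retinal, fast_retinal: retinal speed magnitudes (deg/s) of the
    nominally slow and fast disks; same_direction: whether all disks
    moved in the same retinal direction; speed_change: fractional speed
    change due to acceleration over the analysis window."""
    if not same_direction:
        return False
    if not all(0.0 <= s <= 2.8 for s in slow_retinal):
        return False
    if not all(5.0 <= s <= 7.8 for s in fast_retinal):
        return False
    return speed_change <= 0.25

# An ideal slow-target trial (retinal speeds ~1 and ~6 deg/s) is kept:
assert trial_passes_offline_filters([1.0], [6.0] * 8, True, 0.05)
# A trial whose "fast" disks slowed to 4 deg/s on the retina is rejected:
assert not trial_passes_offline_filters([1.0], [4.0] * 8, True, 0.05)
```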
Detection performance analysis
Detection performance was measured using the non-parametric measure A′ (Pollack & Norman, 1964; Grier, 1971), which is related to the rate of correct responses and, like the better known d′, measures discrimination rather than bias. A′ ranges from 0 to 1, with 1 reflecting perfect detection and 0.5 corresponding to chance. In addition, to measure the interaction between the eye movement and target speed variables, we used an index of allocentricity, defined as the difference, between fixation and pursuit conditions, of the A′ difference for fast and slow targets: (A′Ff − A′Fs) − (A′Pf − A′Ps), where F and P refer to fixation and pursuit, and f and s to fast and slow targets (target speed being defined in a retinocentric frame). When the index is zero, detection is based on retinocentric motion: the difference in detection of retinally fast and slow targets is independent of eye movement, and therefore also independent of allocentric target motion. The more positive the index, the more allocentric motion contributes to detection. This index was used to study individual performance. 
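For a hit rate H and false-alarm rate F with H ≥ F, Grier's formula gives A′ = 0.5 + (H − F)(1 + H − F) / [4H(1 − F)]. A sketch of both measures (function names are ours):

```python
def a_prime(hit, fa):
    """Non-parametric sensitivity A' (Grier, 1971).

    Assumes hit >= fa, hit > 0, and fa < 1. Chance performance gives
    0.5; perfect detection gives 1.0."""
    return 0.5 + ((hit - fa) * (1 + hit - fa)) / (4 * hit * (1 - fa))

def allocentricity_index(a_Ff, a_Fs, a_Pf, a_Ps):
    """(A'Ff - A'Fs) - (A'Pf - A'Ps): zero when detection is purely
    retinocentric, positive when allocentric motion contributes."""
    return (a_Ff - a_Fs) - (a_Pf - a_Ps)

assert a_prime(1.0, 0.0) == 1.0  # perfect detection
assert a_prime(0.6, 0.6) == 0.5  # chance: hits no better than false alarms
```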
Results
As expected, detection performance in the fixation condition, shown in the left part of Figure 2(a), was better for fast targets than for slow ones for all durations taken together (p < .005 in planned comparisons), and for the three longer durations taken individually (p < .02, t test, Sidak correction). This echoes previous results on motion detection asymmetry in immobile observers (Ivry & Cohen, 1992), but with short durations and detection performance as the dependent variable, rather than response time. 
Figure 2
 
Mean detection performance (A′) as a function of stimulus duration. Perfect detection corresponds to 1 and chance level is at 0.5. (a). Detection performance as a function of stimulus duration in the pursuit and fixation conditions, for fast and slow targets. Target speed refers to motion on the retina, rather than on the monitor screen. Filled circles indicate performance significantly greater than chance (1-tailed t test, Sidak correction). (b). Same data plotted differently to show interaction between eye movement, target speed, and duration variables. Data are averaged for short durations (82, 105, and 129 ms) and long durations (152 ms).
Performance in the pursuit condition, shown in the right part of Figure 2(a), followed a different pattern from that in fixation, even though the retinocentric visual stimuli were very similar. In discussing the results from the pursuit condition, we will use the terms “slow” and “fast” to refer to the speed of motion on the retina, rather than on the screen. The reader should keep in mind that, in the pursuit condition, “fast” targets move slowly on the screen, and vice versa. The results show that on the one hand, for the three shortest durations (below 130 ms), fast (retinocentric) targets were detected better than slow ones (p < .02 in planned comparisons), as in the fixation condition, showing that at these durations, motion was detected in a retinocentric frame. On the other hand, for the longest duration (152 ms), slow targets are detected as well as or better than fast ones: A′ is higher for slow targets, but this difference is not significant. 
We performed an analysis of variance (ANOVA) on the A′ data, with the independent variables being eye movement (pursuit, fixation), target speed (fast, slow), and stimulus duration. Not surprisingly, there is a significant main effect of duration (F3,21 = 38.0, p < .0001): Performance increased when the stimulus was displayed longer. More importantly, as can be seen in Figure 2(b), there was a significant interaction of eye movement, target speed, and duration (F3,21 = 3.92, p < .02), showing that the advantage of fast targets in the fixation condition reversed with increasing duration in the pursuit condition. This result is not due to the mere presence or absence of eye movements, because there was no significant main effect of the eye movement variable. To investigate this interaction effect further, we carried out the ANOVA separately for each duration. For the three shortest durations, there was no interaction between eye movement and target speed. However, mean A′ was higher for fast targets than for slow ones, and this main effect was significant (F1,7 = 17.4, p < .005), indicating that detection was based on retinal motion. For the longest duration (152 ms), on the other hand, the interaction between eye movement and target speed was significant (F1,7 = 15.1, p < .01), showing that the detection advantage of fast retinal targets was lost at this duration in the pursuit condition [Figure 2(b)]. 
To study these effects in individual subjects, we computed the index of allocentricity, (A′Ff − A′Fs) − (A′Pf − A′Ps), defined in Methods: zero corresponds to purely retinocentric detection, and the more positive the index, the more allocentric motion contributes to detection. The indices of allocentricity for each subject and each duration are shown in Figure 3. For the two shortest durations, the individual values of the indices were distributed about 0, and the mean was not significantly different from 0, as shown by a t test (p > .65). For 129 ms the mean index increased, approached significance (p = .08), and was positive in six of eight subjects. Finally, for 152 ms the index was positive for all subjects, with the mean significantly greater than zero (p < .01, t test, Sidak corrected). 
Figure 3
 
Index of allocentricity as a function of the stimulus duration, for individual subjects and the mean over all subjects (solid bold line). A positive index denotes allocentric motion detection; a zero or negative index, egocentric (retinocentric) detection. The closed symbol indicates a mean significantly greater than zero.
On many trials, the actual eye movements did not correspond to instructions (e.g., saccades) or had speeds that were too low or too high, and the resulting stimuli, which were coupled to the eye movements, were not acceptable. (This includes, e.g., stimuli in which objects moved in opposite directions on the retina.) These trials were eliminated a posteriori, as discussed in Methods. The effects on detection performance that we have presented are robust, in that they do not critically depend on the details of the trials that were excluded. For instance, if we include all trials, we still find the significant interaction between the eye movement, target speed, and duration variables (F3,21 = 3.28, p < .05), as well as the other effects that have been presented. 
Discussion
In summary, we have found evidence that during smooth pursuit, retinocentric motion is compensated by extraretinal eye movement signals, and that this happens very early on, within 130–150 ms of stimulus onset. Once this compensation is in place, it abolishes the relative disadvantage of slow targets in motion detection, when these targets move fast in an allocentric frame. Our technique has yielded evidence of earlier compensation than previous psychophysical or neurophysiological studies. Before compensation is in place, however, around 100 ms after stimulus onset, motion detection is better than chance, but this detection is entirely retinocentric. Thus, we have evidence of a transition from retinocentric to allocentric motion detection taking place at around 130 ms following stimulus onset. 
Although we have mainly addressed the question of extraretinal mechanisms of spatial constancy, constancy also relies on purely visual cues, through the principle of background stationarity. Namely, in the case of relative motion between a large coherent background and a smaller foreground object, the background is assumed to be stationary, hence a component opposite to the motion of the background is added to the perceived motion of the foreground object (Duncker, 1929). This purely visual constancy mechanism certainly has an effect on our stimuli: The motion of the target relative to the distractors is, at least in part, interpreted as absolute motion of the target. Nevertheless, this effect must be the same in the pursuit and fixation conditions, because relative motion is identical in the two conditions. Therefore, the performance differences that are observed between fixation and pursuit [Figure 2(a)] reflect the integration of extraretinal signals.2 
Previous studies have examined some aspects of the timing of spatial constancy, but have missed the egocentric-to-allocentric transition at 130–150 ms because of the longer durations used. An early study by Stoper (1967) indicated only weak constancy during smooth pursuit for brief durations (300 ms), with constancy increasing, but still incomplete, for much longer stimuli (1700 ms). However, Mack and Herman (1978) showed that Stoper’s results can be explained by the dominance of relative over absolute motion. When the dominance of relative motion was reduced, Mack and Herman found constancy as strong for their brief (200 ms) as for their long (1200 ms) stimuli. They concluded that by 200 ms spatial constancy is largely in place. Our results do not disagree with this conclusion, and further demonstrate that compensation for eye movements exists even down to 150 ms, but breaks down for briefer stimuli (at 100 ms and earlier). In our study, loss of spatial constancy (for durations below 100 ms) is not confounded with the dominance of relative motion as it is in Stoper’s (1967), because relative motion between target and nontarget items is identical in fixation and pursuit conditions. 
More recently, the time evolution of spatial constancy and compensation for eye movements has been investigated using electrophysiological methods. A group in Tübingen has used an experimental paradigm based on the adaptation of the extra-retinal eye movement signal by inappropriately moving backgrounds during pursuit (Haarmeier & Thier, 1996). Using magnetic evoked potentials in humans, they have found traces of compensatory signals, and therefore of spatial constancy, as early as 160–175 ms after stimulus onset (Tikhonov et al., 2004). Measurements using EEG have found traces of compensation starting around 300 ms after stimulus onset (Haarmeier & Thier, 1998; Hoffmann & Bach, 2002). Our results do not contradict these neurophysiological findings, but demonstrate that the onset of compensation is even earlier than what is found in MEG, and allow us to probe the detection of visual motion prior to the onset of compensation for eye movements. 
Approximate information concerning the timing of constancy can be gleaned from other studies that have used grouping and visual search paradigms. The main question addressed by these works has been whether grouping and search processes operate on postconstancy, “distal” representations, or on “proximal,” preconstant ones. Contrary to a previous assumption that grouping is an early (and preconstant) process (Wertheimer, 1950), a number of studies by Rock, Palmer, and their colleagues have shown that grouping can be influenced by constancy information in the case of lightness (Rock, Nijhawan, Palmer, & Tudor, 1992), amodal completion (Palmer, Neff, & Beck, 1996), and depth (Rock & Brosgole, 1964). However, the above-mentioned studies examined how grouping occurs with unlimited exposure time, and therefore with little control over the stage of visual processing that gives rise to the subject’s response. In an attempt to study grouping at earlier stages of visual processing, a recent study (Schulz & Sanocki, 2003) has shown, by limiting presentation time, that grouping by color can be based on preconstancy, retinal spectrum information. This limitation of stimulus duration has actually classically been used in studies of size constancy (Gulick & Stake, 1957) and shape constancy (Leibowitz & Bourne, 1956). The latter studies showed that before 100 ms the perceived shape is very close to the projected shape on the retina. 
In visual search, as in grouping, the classical view is that search operates on preconstant, retinal data (Treisman & Gelade, 1980). More recent work has demonstrated that the input to visual search is more complex than previously assumed. For example, Enns and Rensink (1990) demonstrated the influence of three-dimensional properties and lighting direction in visual search. In the case of amodal completion, search mechanisms rely on postcompletion information even if this impairs the search (He & Nakayama, 1992; Rensink & Enns, 1998). By interrupting the search process with a visual mask, Rauschenberger and Yantis (2001) have shown an influence of pre-amodal completion on visual search (but see Rauschenberger, Peterson, Mosca, & Bruno, 2004). Finally, Moore and Brown (2001) have shown, in the case of lightness constancy, an influence of preconstancy information on visual search even without interrupting the search task. Our results are in agreement with these findings of preconstancy influence on visual search, because we have shown that search for motion can rely on preconstant information if the search is interrupted early, and on either preconstant or postconstant information for longer (but still brief) durations. Note, however, that we have not found a decrease in retinocentric motion detection for longer durations, but decreases in detection performance due to constancy have been found for durations above 200 ms. 
In the context of saccades, the timing of spatial constancy processes has been studied using tasks that involve the localization of points in the dark, in which retinal directions have to be compensated for by the orientation of the eye in the orbit. The main result has been that the compensation is slower than the eye movement itself, starting about 100–200 ms before onset of eye movement, and attaining its final level 100–200 ms after the saccade is over. 
The mismatch between saccade and compensation can be seen indirectly from psychophysical data on perisaccadic mislocalization (Matin & Pearce, 1965; Honda, 1989), as well as directly through a stroboscopic illusion (Hershberger, 1987). Therefore, in the case of both saccades and smooth pursuit, there is a complex temporal relationship between the eye movement and compensation mechanisms: Compensation is slower than the eye movement itself. 
The observation of a transition from retinocentric motion detection to detection that is also allocentric, taking place around 130 ms following stimulus onset, gives rise to two possible scenarios concerning motion detection. There may be two motion detectors with differing latencies: one that detects retinocentric motion (for instance, based in MT), and one in which compensation for eye movement leads to detection of allocentric motion (e.g., based in MSTd). In neurophysiological data, one could compare latencies in these cortical areas, and see whether they correspond to those of the retino- and allocentric phases in our results. Alternatively, motion detection, which we have found to occur before compensation, may be an opportunistic process that can operate on intermediate, partly compensated motion signals. It would therefore be interesting to study responses to motion in area MSTd, which is found to be compensated for eye movement (Komatsu & Wurtz, 1988; Newsome et al., 1988; Erickson & Thier, 1991) or even head movement (Ilg et al., 2004). Given our results, it is possible that the degree of compensation of this response depends on latency, with early response compensated less than later activity. If this were the case, taken together with our results, it would constitute evidence that motion detection is based exclusively on activity in area MSTd. 
Acknowledgments
Commercial relationships: none. 
Corresponding author: Mark Wexler. 
Address: LPPA, CNRS, Collège de France, Paris, France. 
Footnotes
1  Here we use the term “allocentric” to mean reference frames independent of eye movement. Thus, for the purposes of this article, both head- and earth-centered frames are allocentric.
2  However, there was another type of relative motion that might have influenced our results. This was due to the fixation cross, which was roughly stationary on the retina in both pursuit and fixation conditions. Although the cross was small, it might have introduced a Duncker-like bias in favor of a retinocentric frame. Therefore, the onset of compensation for eye movement that we localize between 130 and 150 ms might occur even earlier.
References
Andersen, R. A., Essick, G. K., & Siegel, R. M. (1985). Encoding of spatial location by posterior parietal neurons. Science, 230, 456–458.
Aubert, H. (1886). Die Bewegungsempfindung. Pflügers Archiv, 39, 347–370.
Brindley, G. S., & Merton, P. A. (1960). The absence of position sense in the human eye. Journal of Physiology, 153, 127–130.
Carpenter, R. H. S. (1988). Movements of the eyes. London: Pion.
Dick, M., Ullman, S., & Sagi, D. (1987). Parallel and serial processes in motion detection. Science, 237, 400–402.
Duhamel, J. R., Colby, C. L., & Goldberg, M. E. (1992). The updating of the representation of visual space in parietal cortex by intended eye movements. Science, 255, 90–92.
Duncker, K. (1929). Über induzierte Bewegung. Psychologische Forschung, 12, 180–259.
Enns, J. T., & Rensink, R. A. (1990). Influence of scene-based properties on visual search. Science, 247, 721–723.
Erickson, R. G., & Thier, P. (1991). A neuronal correlate of spatial stability during periods of self-induced visual motion. Experimental Brain Research, 86, 608–616.
Filehne, W. (1922). Über das optische Wahrnehmen von Bewegungen. Zeitschrift für Sinnesphysiologie, 53, 134–145.
Fleischl, E. V. (1882). Physiologisch-optische Notizen, 2. Mitteilung. Sitzung Wiener Bereich der Akademie der Wissenschaften, 3, 7–25.
Grier, J. B. (1971). Nonparametric indexes for sensitivity and bias: Computing formulas. Psychological Bulletin, 75, 424–429.
Gulick, W. L., & Stake, R. E. (1957). The effect of time on size constancy. American Journal of Psychology, 70, 276–279.
Haarmeier, T., & Thier, P. (1996). Modification of the Filehne illusion by conditioning visual stimuli. Vision Research, 36, 741–750.
Haarmeier, T., & Thier, P. (1998). An electrophysiological correlate of visual motion awareness in man. Journal of Cognitive Neuroscience, 10, 464–471.
Haarmeier, T., Thier, P., Repnow, M., & Petersen, D. (1997). False perception of motion in a patient who cannot compensate for eye movements. Nature, 389, 849–852.
He, Z. J., & Nakayama, K. (1992). Surfaces versus features in visual search. Nature, 359, 231–233.
Hershberger, W. A. (1987). Saccadic eye movements and the perception of visual direction. Perception & Psychophysics, 41, 35–44.
Hoffmann, M. B., & Bach, M. (2002). The distinction between eye and object motion is reflected by the motion-onset visual evoked potential. Experimental Brain Research, 144, 141–151.
Honda, H. (1989). Perceptual localization of visual stimuli flashed during saccades. Perception & Psychophysics, 46, 162–174.
Ilg, U. J., Schumann, S., & Thier, P. (2004). Posterior parietal cortex neurons encode target motion in world-centered coordinates. Neuron, 43, 145–151.
Ivry, R. B., & Cohen, A. (1992). Asymmetry in visual search for targets defined by differences in movement speed. Journal of Experimental Psychology: Human Perception and Performance, 18, 1045–1057.
Komatsu, H., & Wurtz, R. H. (1988). Relation of cortical areas MT and MST to pursuit eye movements. I. Localization and visual properties of neurons. Journal of Neurophysiology, 60, 580–603.
Leibowitz, H., & Bourne, L. E. (1956). Time and intensity as determiners of perceived shape. Journal of Experimental Psychology, 51, 277–281.
Li, H. C., Brenner, E., Cornelissen, F. W., & Kim, E. S. (2002). Systematic distortion of perceived 2D shape during smooth pursuit eye movements. Vision Research, 42, 2569–2575.
Mach, E. (1959). The analysis of sensations. Chicago: Open Court. (Original work published 1914.)
Mack, A., & Herman, E. (1978). The loss of position constancy during pursuit eye movements. Vision Research, 18, 55–62.
Matin, L., & Pearce, D. G. (1965). Visual perception of direction for stimuli flashed during voluntary saccadic eye movements. Science, 148, 1485–1487.
Matin, L., Picoult, E., Stevens, J., Edwards, M., & MacArthur, R. (1982). Oculoparalytic illusion: Visual-field dependent spatial mislocalizations by humans partially paralyzed with curare. Science, 216, 198–201.
Merriam, E. P., Genovese, C. R., & Colby, C. L. (2003). Spatial updating in human parietal cortex. Neuron, 39, 361–373.
Moore, C. M., & Brown, L. E. (2001). Preconstancy information can influence visual search: The case of lightness constancy. Journal of Experimental Psychology: Human Perception and Performance, 27, 178–194.
Newsome, W. T., Wurtz, R. H., & Komatsu, H. (1988). Relation of cortical areas MT and MST to pursuit eye movements. II. Differentiation of retinal from extraretinal inputs. Journal of Neurophysiology, 60, 604–620.
Palmer, S. E., Neff, J., & Beck, D. (1996). Late influences on perceptual grouping: Amodal completion. Psychonomic Bulletin & Review, 3, 75–80.
Pollack, I., & Norman, D. A. (1964). A nonparametric analysis of recognition experiments. Psychonomic Science, 1, 125–126.
Rauschenberger, R., Peterson, M. A., Mosca, F., & Bruno, N. (2004). Amodal completion in visual search: Pre-emption or context effects. Psychological Science, 15, 351–355.
Rauschenberger, R., & Yantis, S. (2001). Masking unveils pre-amodal completion representation in visual search. Nature, 410, 369–372.
Rensink, R. A., & Enns, J. T. (1998). Early completion of occluded objects. Vision Research, 38, 2489–2505.
Rock, I., & Brosgole, L. (1964). Grouping based on phenomenal proximity. Journal of Experimental Psychology, 67, 531–538.
Rock, I., Nijhawan, R., Palmer, S., & Tudor, L. (1992). Grouping based on phenomenal similarity of achromatic color. Perception, 21, 779–789.
Ross, J., Morrone, M. C., Goldberg, M. E., & Burr, D. C. (2001). Changes in visual perception at the time of saccades. Trends in Neurosciences, 24, 113–121.
Royden, C. S., Wolfe, J. M., & Klempen, N. (2001). Visual search asymmetries in motion and optic flow fields. Perception & Psychophysics, 63, 436–444.
Schulz, M. F., & Sanocki, T. (2003). Time course of perceptual grouping by color. Psychological Science, 14, 26–30.
Snyder, L. H., Grieve, K. L., Brotchie, P., & Andersen, R. A. (1998). Separate body- and world-referenced representations of visual space in parietal cortex. Nature, 394, 887–891.
Sperry, R. W. (1950). Neural basis of the spontaneous optokinetic response produced by visual inversion. Journal of Comparative and Physiological Psychology, 43, 482–489.
Stevens, J. K., Emerson, R. C., Gerstein, G. L., Kallos, T., Neufeld, G. R., Nichols, C. W., & Rosenquist, A. C. (1976). Paralysis of the awake human: Visual perceptions. Vision Research, 16, 93–98.
Stoper, A. (1967). Vision during pursuit eye movements: The role of oculomotor information. Unpublished doctoral dissertation, Brandeis University, Waltham, MA.
Tikhonov, A., Haarmeier, T., Thier, P., Braun, C., & Lutzenberger, W. (2004). Neuromagnetic activity in medial parieto-occipital cortex reflects the perception of visual motion during eye movements. Neuroimage, 21, 593–600.
Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136.
von Helmholtz, H. (1867). Handbuch der physiologischen Optik. Hamburg: Voss.
von Holst, E., & Mittelstaedt, H. (1950). Das Reafferenzprinzip. Naturwissenschaften, 37, 464–476.
Wallach, H., Becklen, R., & Nitzberg, D. (1985). The perception of motion during colinear eye movements. Perception & Psychophysics, 38, 18–22.
Wertheimer, M. (1950). A source book of Gestalt psychology. New York: The Humanities Press.
Figure 1
 
Incorporating eye movement into the visual search paradigm allows us to dissociate retino- and allocentric reference frames in motion detection. Top panels show stimuli on the screen while subjects pursue the cross (perfect pursuit is assumed in this example), while bottom panels show the corresponding retinal projections. In the stimulus on the left, the motion of the target is slow on the screen but fast on the retina (with the distractors fast and slow, respectively). The speeds of the target are reversed in the stimulus on the right: fast on the screen and slow on the retina.
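The dissociation in Figure 1 rests on simple kinematics: during pursuit, retinal velocity is screen velocity minus eye velocity. A minimal one-dimensional sketch of this relationship (the speed values are illustrative, not taken from the experiment):

```python
def retinal_speed(screen_speed, eye_speed):
    """Retinal-image speed is screen speed minus eye speed
    (1-D approximation, perfect pursuit assumed)."""
    return screen_speed - eye_speed

# An object drifting slowly on the screen while the eye pursues
# quickly moves fast on the retina, and vice versa:
slow_on_screen = retinal_speed(1.0, 8.0)  # large magnitude: fast on the retina
fast_on_screen = retinal_speed(8.0, 8.0)  # zero: stationary on the retina
```

During fixation (eye speed zero), retinal and screen speeds coincide, which is why the pursuit condition is needed to pull the two frames apart.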
Figure 2
 
Mean detection performance (A′) as a function of stimulus duration. Perfect detection corresponds to 1 and chance level is at 0.5. (a). Detection performance as a function of stimulus duration in the pursuit and fixation conditions, for fast and slow targets. Target speed refers to motion on the retina, rather than on the monitor screen. Filled circles indicate performance significantly greater than chance (one-tailed t test, Šidák correction). (b). Same data plotted differently to show interaction between eye movement, target speed, and duration variables. Data are averaged for short durations (82, 105, and 129 ms) and long durations (152 ms).
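The caption refers to Grier's (1971) nonparametric sensitivity index A′ and to a Šidák-corrected significance criterion. As a sketch of how these quantities are computed (not the authors' analysis code):

```python
def a_prime(hits, false_alarms):
    """Grier's (1971) nonparametric sensitivity index A'.
    0.5 is chance, 1.0 is perfect detection, as in Figure 2."""
    h, f = hits, false_alarms
    if h == f:
        return 0.5
    if h > f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # Below-chance performance: mirror-image of the formula above.
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

def sidak_alpha(alpha, m):
    """Per-comparison criterion for m tests under the Sidak correction,
    keeping the family-wise error rate at alpha."""
    return 1 - (1 - alpha) ** (1 / m)
```

For example, a hit rate of 0.9 with a false-alarm rate of 0.1 gives A′ ≈ 0.94, and five comparisons at a family-wise α of .05 require a per-test criterion of about .0102.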
Figure 3
 
Index of allocentricity as a function of stimulus duration, for individual subjects and the mean over all subjects (solid bold line). A positive index denotes allocentric motion detection; a zero or negative index denotes egocentric detection. The closed symbol indicates a mean significantly greater than zero.