Research Article  |   August 2006
Humans can perceive heading without visual path information
Li Li, Barbara T. Sweet, Leland S. Stone
Journal of Vision August 2006, Vol. 6(9), Article 2. https://doi.org/10.1167/6.9.2
Abstract

It has previously been reported that humans can determine their direction of 3D translation (heading) from the 2D velocity field of retinal motion experienced during self-motion through a rigid environment, as is done by current computational models of visual heading estimation from optic flow. However, these claims were supported by studies that used stimuli that contained low rotational flow rates and/or additional visual cues beyond the velocity field or a task in which observers were asked to indicate their future trajectory of self-motion (path). Thus, previous conclusions about heading estimation have been confounded by the presence of other visual factors beyond the velocity field, by the use of a path-estimation task, or both. In particular, path estimation involves an exocentric computation with respect to an environmental reference, whereas heading estimation is an egocentric computation with respect to one's line of sight. Here, we use a heading-adjustment task to demonstrate that humans can precisely estimate their heading from the velocity field, independent of visual information about path, displacement, layout, or acceleration, with accuracy robust to rotation rates at least as high as 20 deg/s. Our findings show that instantaneous velocity-field information about heading is directly available for the visual control of locomotion and steering.

Introduction
The question of how humans perceive and control their 3D self-motion has been an active area of neuroscience and cognitive-science research for the past few decades. It has long been known that when traveling on a straight path (pure translation), the focus of expansion (FOE) of the resulting radially expanding retinal flow pattern indicates one's instantaneous direction of 3D translation (heading, see Gibson, 1950). Under more complex (but natural) conditions with combined translational and rotational retinal motion (such as when traveling on a curved path or when rotating one's head or eyes), the process of extracting heading from optic flow becomes complicated as the rotation shifts the apparent FOE away from the heading direction and disrupts the radial pattern (Regan & Beverley, 1982). However, one can still mathematically compensate for the rotation and recover one's instantaneous heading from a single 2D velocity field of combined translational and rotational retinal motion generated by points in a rigid 3D environment (Bandopadhay & Ballard, 1990; Bruss & Horn, 1983; Fermuller & Aloimonos, 1995; Heeger & Jepson, 1990; Hildreth, 1992; Koenderink & van Doorn, 1987; Longuet-Higgins & Prazdny, 1980; Rieger & Lawton, 1985), a computation that can be implemented with neurophysiological models of primate extrastriate visual cortex (Lappe & Rauschecker, 1993; Perrone, 1992; Perrone & Stone, 1994; Royden, 1997; Zemel & Sejnowski, 1998). Note that although the instantaneous velocity field of combined translation and rotational retinal motion is associated with only a single heading, it is nevertheless consistent with a continuum of trajectory scenarios ranging from linear translation with eye rotation to a circular path with no eye movement (Banks, Ehrlich, Backus, & Crowell, 1996). This path ambiguity can only be resolved by examining the evolution of the optic flow over time (for an in-depth discussion, see Royden, 1994; Stone & Perrone, 1997). 
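For reference, the velocity field that these models operate on can be written in a standard form (following Longuet-Higgins & Prazdny, 1980; sign conventions vary across papers). With the focal length normalized to 1, a point at depth Z imaged at (x, y) moves on the retina with velocity

\[
u = \frac{xT_z - T_x}{Z} + xy\,\Omega_x - (1 + x^2)\,\Omega_y + y\,\Omega_z, \qquad
v = \frac{yT_z - T_y}{Z} + (1 + y^2)\,\Omega_x - xy\,\Omega_y - x\,\Omega_z,
\]

where T = (T_x, T_y, T_z) is the observer's instantaneous translation and Ω = (Ω_x, Ω_y, Ω_z) the rotation. Only the first term carries heading information (the direction of T) and it is scaled by the unknown depth, whereas the rotational term is depth independent; this is why a single velocity field with sufficient depth variation constrains heading but, as noted above, not path.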
Psychophysically, it has been shown that humans can estimate their heading to within 1 deg of visual angle during simulated translation (Warren, Morris, & Kalish, 1988). However, good (i.e., accurate and precise) performance during pure translation and fixed gaze angle does not necessarily indicate 3D self-motion perception because the task could be easily performed by simply locating the FOE in the 2D flow field without any 3D interpretation. To determine if humans are capable of recovering 3D heading from combined translational and rotational retinal flow, a number of studies examined self-motion perception during simulated eye rotation and reported good performance when either the rotation rates were low (Warren & Hannon, 1988) or subjects had extraretinal information about rotation (Royden, Banks, & Crowell, 1992; Warren & Hannon, 1988) but poor performance at high rotation rates with retinal flow information alone (Royden et al., 1992). However, the task used in these studies was to estimate future path with respect to an environmental reference point, which is quite different from estimating heading itself (Stone & Perrone, 1997) and is further confounded by ambiguity in the perceived depth of the reference point (Ehrlich, Beck, Crowell, Freeman, & Banks, 1998). Although Stone and Perrone (1997), using a true heading-estimation task and simulating motion along a curved path, found that humans can indeed recover their heading accurately and precisely from optic-flow information alone (see also Cutting, 1986; Rieger & Toet, 1985), this finding does not fully resolve the question of whether humans can derive heading directly from the instantaneous velocity field as is done by the mathematical and neurophysiological models cited above. The accurate heading estimation observed could have resulted from an indirect reconstruction of heading by first estimating one's displacement over time with respect to a fixed rigid environment (one's path) and then working backward to infer heading as the path's tangent. While many recent studies have begun to address the question of heading versus path estimation during visual control of locomotion (e.g., Wann & Swapp, 2000; Warren, Kay, Zosh, Duchon, & Sahuc, 2001), the most direct way to resolve the issue unambiguously is to eliminate the visual cues beyond the velocity field that could allow one to recover one's path independent of heading (Li & Warren, 2000; Stone & Perrone, 1997). In this study, we investigate whether humans can precisely and accurately perceive and adjust their heading from a sequence of velocity fields by using dynamic random-dot optic-flow stimuli in which environmental points are periodically redrawn. This stimulus allowed us to generate a continuously available heading signal while eliminating path, displacement, and other visual cues beyond the velocity field from the optic flow. 
Methods
Subjects
Six staff members (five naive as to the specific goals of the study; four males, two females) between the ages of 26 and 41 at the NASA Ames Research Center participated in the experiment. All had normal or corrected-to-normal vision. 
Visual stimuli
The display simulated an observer traveling along a circular trajectory (yaw rate: 5–20 deg/s) through a random-dot 3D cloud (depth range, 6–50 m) at three translation speeds (7.5, 10, and 15 m/s). Two display conditions were tested: (a) static scene, in which dots were displayed until they left the field of view, and (b) dynamic scene, in which dot lifetime was limited to 100-ms velocity snapshots (6 frames at 60 Hz) to match the known psychophysical integration time of human motion processing (Burr, 1981; Watson, 1979; Watson & Turano, 1995; for a review, see Watson, 1986). Our 100-ms dot lifetime is also well matched to the physiological integration time of directionally selective neurons within the primate cortical motion-processing pathway (Bair & Movshon, 2004; Osborne, Bialek, & Lisberger, 2004) and, thus, effectively represents a single “biological frame” of motion. Indeed, we performed a pilot study using dot lifetimes less than 6 frames and confirmed that motion perception per se becomes seriously compromised at shorter dot lifetimes, consistent with previous studies that found temporal integration times of 100 ms or longer for direction (Watamaniuk & Sekuler, 1992) and speed (McKee & Welch, 1985; Snowden & Braddick, 1991) discrimination. Although it is mathematically possible to derive acceleration from the 6-frame sequence of stimulus motion in our displays, previous studies have shown that humans can only reliably detect accelerations with speed changes of 30% to 80% (Calderone & Kaiser, 1989; Snowden & Braddick, 1991). We calculated the highest accelerations in our stimuli associated with the most peripherally displayed points on the closest plane. The speed changes for these points were only 15% to 27% during the 100-ms dot lifetime (depending on the condition), and such points appear only sporadically, embedded in stimuli containing hundreds of other points. Thus, it is unlikely that any acceleration cues in our dynamic condition were perceptually useful. In fact, the dot lifetime used for the dynamic condition was specifically chosen to be as short as possible without degrading motion perception per se, thus effectively limiting our dynamic-scene stimuli to sequences of independent velocity fields. 
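To make the logic of the dynamic-scene manipulation concrete, the sketch below shows one way to implement limited-lifetime dots. The names (redraw_cloud, dynamic_scene, NEAR, FAR) and the uniform sampling rule are illustrative assumptions, not the authors' stimulus code; it is a minimal sketch of the idea that the entire cloud is resampled every 100-ms velocity snapshot.

```python
import numpy as np

LIFETIME_FRAMES = 6      # 100-ms dot lifetime (6 frames at the 60-Hz refresh)
N_DOTS = 250             # approximate number of dots in the cloud
NEAR, FAR = 6.0, 50.0    # m, depth range of the cloud

rng = np.random.default_rng(0)

def redraw_cloud(n=N_DOTS):
    """Place n dots in the viewing frustum (illustrative sampling: uniform in
    horizontal/vertical visual angle and in depth, so each depth slice keeps
    roughly the same number of visible dots)."""
    az = np.radians(rng.uniform(-30.0, 30.0, n))   # 60-deg horizontal field
    el = np.radians(rng.uniform(-22.5, 22.5, n))   # 45-deg vertical field
    z = rng.uniform(NEAR, FAR, n)
    return np.column_stack([z * np.tan(az), z * np.tan(el), z])

def dynamic_scene(n_frames):
    """Yield one dot cloud per frame. The cloud is replaced wholesale every
    LIFETIME_FRAMES frames, so no environmental point, streamline, or
    displacement cue survives beyond a single 100-ms velocity snapshot."""
    dots = redraw_cloud()
    for frame in range(n_frames):
        if frame > 0 and frame % LIFETIME_FRAMES == 0:
            dots = redraw_cloud()
        yield dots
```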
In the case of the static scene, one's path can be derived from visual displacement over time (e.g., from one's initial and final 3D position with respect to the fixed scene, from the extended streamline trajectories of individual environmental points, or both). In the case of the dynamic scene, this is not possible because neither the velocity vectors nor the associated environmental points persist over time, thereby effectively eliminating the displacement cues and higher-order derivatives of motion (e.g., acceleration) that can be used over time to determine one's path (Rieger, 1983; Stone & Perrone, 1997). 
The 3D cloud consisted of ∼250 dots (3 × 3 white pixels, 39.8 cd/m²), uniformly distributed on a gray background (2.7 cd/m²). The dots were generated within a pyramidal frustum subtending the same visual angle as the field of view in the depth range of 6–50 m, such that about the same number of dots at each distance in depth was displayed on each frame and the number of visible dots was kept relatively constant throughout the trial. The frustum moved with the line of sight in virtual-world coordinates (i.e., with the vehicle orientation), which was controlled by the joystick displacement. Visual stimuli were presented on a FlexScan F980 Eizo 21-in. monitor (1,240 × 1,028 pixels, 60 deg [H] × 45 deg [V]) refreshing at 60 Hz, viewed from a chin rest at a distance of 14 in. from within a black viewing box. Twelve circular translation and rotation combinations were simulated, with the translation and rotation rates at 7.5 m/s and ±7.5 deg/s, 7.5 m/s and ±15 deg/s, 10 m/s and ±5 deg/s, 10 m/s and ±20 deg/s, 15 m/s and ±7.5 deg/s, and 15 m/s and ±15 deg/s, respectively. 
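Note that these 12 combinations collapse onto three rotation-to-translation ratios (R:T = 0.5, 1, and 2 deg/s per m/s), each sampled at two translation speeds and two rotation directions; this is the R:T variable used in the Results. A quick illustrative check (not part of the experimental code):

```python
# The six unsigned translation-rotation pairs listed above: (m/s, deg/s)
conditions = [(7.5, 7.5), (7.5, 15), (10, 5), (10, 20), (15, 7.5), (15, 15)]
ratios = sorted({rot / trans for trans, rot in conditions})
print(ratios)  # [0.5, 1.0, 2.0] deg/s per m/s, each occurring at two translation speeds
```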
Procedure
On each trial (starting and ending with a trigger pull), observers were instructed to imagine that they were looking through the windshield of a vehicle traveling on a circular path. Their task was to use a joystick (B&G Systems, JF3) to steer and align their vehicle (and thus their virtual line of sight) with their heading direction in virtual-world coordinates (i.e., until they believed that they were looking straight in the instantaneous direction they were traveling). In the case of the static scene, this is equivalent to adjusting one's virtual-world line of sight to align it with the tangent of a circular path defined by one's continuous displacement with respect to the fixed, albeit random, virtual scene. The initial vehicle orientation was in a random position within −16 to −8 deg or +8 to +16 deg from the initial heading (negative values to the left and positive values to the right). Once the observer felt properly aligned, he or she then ended the trial with the joystick trigger ( Figure 1). Each trial generally lasted less than 10 s. With our interactive display, in screen coordinates, observers were therefore actually rotating their initially misaligned heading direction in the display to align it with the center of the screen, which represented their straight-ahead viewpoint out the windshield of their virtual vehicle. 
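A minimal sketch of this adjustment loop is given below, assuming that the joystick commands a yaw rate added to the line-of-sight offset; the actual joystick-display dynamics, rendering code, and the function names here (read_joystick, render_flow, trigger_pulled) are illustrative assumptions, not the authors' implementation.

```python
def run_trial(read_joystick, render_flow, trigger_pulled,
              initial_offset_deg, frame_rate=60):
    """One adjustment trial: the observer steers the virtual line of sight until
    it feels aligned with the instantaneous heading, then pulls the trigger.

    read_joystick()  -> steering command in deg/s (hypothetical rate mapping)
    render_flow(d)   -> draws one frame of the dot cloud seen from a camera whose
                        line of sight is offset from the current heading by d deg
                        (the simulated path rotation is part of the rendered flow)
    trigger_pulled() -> True when the observer accepts the setting
    """
    dt = 1.0 / frame_rate
    offset_deg = initial_offset_deg          # starts at +/-8 to +/-16 deg
    while not trigger_pulled():
        offset_deg += read_joystick() * dt   # observer's steering adjustment
        render_flow(offset_deg)
    return offset_deg                        # recorded as the final heading angle
```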
Figure 1
 
The curvilinear paradigm. (a) Bird's-eye view of a typical 15 m/s and 15 deg/s trial showing four time points starting with the initial heading angle of 15 deg at t = 0, which was adjusted over time to a final heading setting corresponding to a heading angle of 0.7 deg at t = 9.3 s. (b) A sample instantaneous flow field of the 3D random-dot cloud produced by complex 3D motion with the translation and rotation rates at 10 m/s and 5 deg/s. In this flow field example, heading is at the center of the display as indicated by the cross. The open circle shows the singularity (pseudo-FOE) generated by the closest points and represents the best case 2D-based performance (Royden, 1994; Stone & Perrone, 1997).
Although, in our interactive task, subjects induced additional rotations while their adjustments were being made, the acceptance of the final setting was made by visually examining the final flow stimulus and confirming that one's heading was indeed aligned with straight ahead. As such, the final setting was generally recorded under conditions in which the rotational flow was that defined by the experimenter, unaltered by the observer. Thus, we did not anticipate any effect caused by the interactive nature of our task, and performance in our baseline static condition was indeed indistinguishable from that observed previously under similar passive conditions (Stone & Perrone, 1997). 
The final angle between the observer's virtual line of sight and heading, defined as heading angle, was used as the raw indicator of heading-estimation performance (see Figure 1). Taking advantage of left–right symmetry and of the fact that translation rate, within the range tested, had no effect (see Results section), we collapsed the heading-angle data across these two variables to generate measures of heading error as a function of the three rotation-to-translation ratios (R:T) tested. If path perception is a prerequisite to accurate and precise heading perception, we would expect performance with the dynamic scene to be random or at least much worse than that with the static scene. However, if humans can perceive their heading, independent of visual path information, performance with the dynamic scene should be similar to that with the static scene. 
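As an illustration of this collapsing step (a sketch with an assumed data layout; the sign convention follows Figures 2 and 3, where positive heading error means a bias in the direction of rotation):

```python
import numpy as np

def heading_error(heading_angle_deg, rotation_dps):
    """Positive error = bias in the direction of rotation, so leftward-rotation
    trials are sign-flipped before pooling."""
    return np.sign(rotation_dps) * heading_angle_deg

def collapse_by_ratio(trials):
    """trials: iterable of (heading_angle_deg, rotation_dps, translation_mps).
    Returns {R:T ratio: (mean heading error, SD of heading error)}, pooling the
    two translation speeds and two rotation directions for each ratio."""
    pooled = {}
    for angle, rot, trans in trials:
        pooled.setdefault(abs(rot) / trans, []).append(heading_error(angle, rot))
    return {r: (np.mean(e), np.std(e, ddof=1)) for r, e in pooled.items()}
```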
Observers viewed, monocularly with their dominant eye, both the static- and dynamic-scene displays blocked in a counterbalanced order, with 120 trials in each block (10 trials × 12 translation–rotation combinations). To avoid extraneous relative-motion information, we did not use a fixation point so that observers could move their eyes and had to make their final judgments with respect to perceived straight ahead (which could be biased away from the center of the display). It is therefore possible that performance could have been enhanced by active gaze strategies potentially used in our interactive task. However, given that the heading accuracy and precision observed here are similar to those found using 400-ms stimulus presentations, a fixation cross, and a forced-choice task (Stone & Perrone, 1997), any gaze-based strategy would seem unlikely to be playing a major role. 
To make sure that observers understood the task and became familiar with the joystick–display dynamic interaction, they received 120 practice trials before data acquisition began. Observers received feedback, that is, a blue tunnel indicating the future circular path, during initial practice with static scenes only, using motion parameters different from those used in the actual experiments to avoid the possibility of performance based on memorizing 2D flow characteristics. We provided no feedback in the actual experiment; in particular, observers never received any feedback for the critical dynamic-scene condition. 
Results
The mean heading angle is plotted in Figure 2 for all six observers and all rotation and translation pairs tested, under both the static and dynamic conditions. Although there is idiosyncratic variability in perceived straight ahead, all observers showed reasonably precise heading estimation with accuracy largely robust to variations across rotation and translation rates. Taking advantage of the fact that, for each R:T, there are two data points corresponding to a low and high translation value, for each observer separately, we performed a 2 × 2 (Translation Level × Display Condition) repeated measures ANOVA across six R:T levels and found that, for all observers, the effects of translation and display condition as well as the interaction were not significant, F(1,5) < 3.4, p > .12. The failure to find a significant effect of translation on performance is not surprising because, for a fixed distribution of depth of the environmental points and fixed field of view, the signal-to-noise ratio of heading estimation from optic flow is determined by R:T (Koenderink & van Doorn, 1987; see, however, Stone & Perrone, 1996). Therefore, to increase our statistical power, we combined heading angles for a given ratio and averaged performance across leftward and rightward rotation directions to generate a single measure of heading error and uncertainty. We then performed 2 × 3 (Display Condition × R:T Level) repeated measures ANOVAs across all six observers. 
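For readers who wish to run the same style of analysis on their own data, a 2 × 3 repeated measures ANOVA of this form can be set up, for example, with statsmodels. This is a sketch with placeholder data and assumed column names, not the authors' analysis code:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format table: one row per observer x display condition x R:T cell, with
# the collapsed heading error as the dependent measure. The values below are
# random placeholders standing in for real measurements.
rng = np.random.default_rng(1)
rows = [(obs, disp, ratio, rng.normal(0.0, 2.0))
        for obs in range(1, 7)
        for disp in ("static", "dynamic")
        for ratio in (0.5, 1.0, 2.0)]
df = pd.DataFrame(rows, columns=["observer", "display", "rt_ratio", "heading_error"])

# 2 x 3 (Display Condition x R:T) repeated measures ANOVA across observers
print(AnovaRM(df, depvar="heading_error", subject="observer",
              within=["display", "rt_ratio"]).fit())
```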
Figure 2
 
Heading angle (mean final heading setting ± SE across trials) as a function of R:T ratio for the static- and dynamic-scene conditions for six observers (five naive volunteers and an author, L.L.). Positive heading angles indicate that the observer's final gaze direction was deviated to the right of true heading, and conversely, negative values indicate a deviation to the left.
Mean heading error is plotted against R:T ratio for each observer in Figure 3. For all observers, performance is significantly better than that predicted from the optimal 2D FOE-based strategy (solid line) in both the static and dynamic conditions. This FOE-based response represents the horizontal output of a crude estimator that ignores the effect of rotation when estimating heading. This estimator gazes straight ahead and merely attempts to locate the zero-velocity point (actual or extrapolated) of the 2D pseudo-expansion pattern created from simulated self-motion along a curved path as if the self-motion were pure translation along a straight path. Indeed, when there is inadequate visual information to recover heading (e.g., when there is no depth variation in the environmental points), human performance has been shown to converge to this pseudo-FOE-based prediction (e.g., Royden et al., 1992; Stone & Perrone, 1997). Perfect performance is indicated by the dashed horizontal line. Thus, in both conditions of this study, observers are largely able to compensate for the presence of rotational flow. 
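As a rough illustration of this zero-compensation benchmark (a small-angle sketch under simplifying assumptions, not necessarily the exact computation behind the solid line in Figure 3): for dots at the nearest depth Z_min on the horizontal meridian, with the line of sight aligned with heading, forward speed T, and yaw rate R, the horizontal flow near the line of sight is approximately

\[
u(x) \approx \frac{x\,T}{Z_{\min}} - R \quad\Longrightarrow\quad x_{\text{pseudo-FOE}} \approx \frac{R\,Z_{\min}}{T}.
\]

With R in deg/s and T in m/s, this is simply (R:T) × Z_min, or roughly 3, 6, and 12 deg at the three R:T ratios tested (Z_min = 6 m), a predicted error that grows with R:T and lies well above the errors observers actually made, especially at the higher ratios.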
Figure 3
 
Heading error (mean ± SE across trials averaged across the two directions) as a function of R:T ratio for the static- and dynamic-scene conditions for six observers (five naive volunteers and an author, L.L.). Positive heading errors indicate that the observer's final gaze direction was biased in the direction of rotation, and conversely, negative values indicate a bias away from the direction of rotation. The dashed horizontal line indicates perfect performance and the solid line indicates performance of zero compensation for the rotational flow by responding to the pseudo-FOE of the closest points.
Not surprisingly, our observers can estimate their heading quite well in the static condition. On average, they were able to set their heading to within 2.1 ± 1.3 deg (mean unsigned error ± SD across observers) of their straight ahead. This finding using our interactive display and method of adjustment is quantitatively consistent with earlier findings using a forced-choice methodology (1.8 ± 1.3 deg and 2.3 ± 1.5 deg for the two high-rotation conditions tested in Stone & Perrone, 1997). 
More important, we found that heading performance (accuracy and precision) was quite similar under the dynamic condition. On average, with the dynamic scene, our subjects were able to set their heading to within 1.8 ± 0.9 deg (mean unsigned error ± SD across observers) of their straight ahead, which was indistinguishable from performance with the static scene, F(1,5) = 0.72, p = .43. Furthermore, the uncertainty in heading estimates ( SD of heading error) was also indistinguishable between the static and dynamic scenes, mean ± SD across observers: 4.1 ± 1.5 deg vs. 4.2 ± 1.4 deg; F(1,5) = 0.76, p = .42. We did however find, as expected (Stone & Perrone, 1996), a small but highly significant effect of R:T on heading uncertainty, F(2,10) = 8.76, p < .01. Indeed, if other stimulus variables are held constant, heading estimation is degraded by increasing R:T as predicted mathematically (Koenderink & van Doorn, 1987). 
Although overall accuracy and precision appear similar for the static and dynamic conditions, we found a significant effect of display, F(1,5) = 12.75, p < .05, as well as a significant interaction effect of Display × R:T, F(2,10) = 5.70, p < .05, on the signed error (see Figure 3). The latter finding prompted us to perform separate one-way repeated measures ANOVAs for the two display conditions. We found that the effect of R:T was significant for the dynamic condition, F(2,10) = 8.14, p < .01, but not for the static condition, F(2,10) = 0.14, p = .87, suggesting a somewhat more robust ability to compensate for rotation in the static condition. 
Discussion
Despite the fact that the dynamic scene contains spurious motion and flicker noise due to the scintillating dots and despite the presence of vestibular conflict (i.e., the vestibular system is reporting zero translation and zero rotation), visual heading estimation with the dynamic scene is similar to that with the static scene, indicating that humans have a robust visual capability to compute their 3D heading directly from the velocity field. This critical question heretofore remained unresolved and a contentious issue in the field. 
Previous studies have reported good heading estimation under experimental conditions involving pure expansion (translation only) or flow with only small amounts of rotation so that 2D strategies might prove sufficient (e.g., Warren & Hannon, 1988; Warren et al., 1988), or involving additional stereo or perspective visual information about the layout of environmental points or oculomotor information beyond the velocity field (e.g., Royden et al., 1992; van den Berg & Brenner, 1994a, 1994b), or in which the task involved path estimation as opposed to heading estimation (e.g., van den Berg, 1996; Warren, Blackwell, Kurtz, Hatsopoulos, & Kalish, 1991). In particular, to assess the role of the streamlines and high-order motion derivatives generated by fixed environmental points, Warren et al. (1991) examined the effect of dot lifetime on self-motion perception from optic flow generated by simulated circular trajectories. Their task involved judgments of extrapolated future path (the location of one's exocentric trajectory through a virtual world relative to an environmental reference point at some future point in time) as opposed to judgments of heading (one's egocentric direction of self-translation relative to one's line of sight at the current instant in time). For simulated self-motion along a straight-line trajectory, these distinct measures of perceived self-motion are aligned. For motion along curved trajectories, these two measures diverge, sometimes dramatically (Li & Warren, 2000; Stone & Perrone, 1997). In addition, Warren et al. provided visual information beyond optic flow about the relative depth of environmental points (ground-plane perspective layout). Thus, their findings do not resolve whether there is an effect of dot lifetime on heading estimation, nor do they tell us whether heading can be recovered from the velocity field alone. Lastly, although Stone & Perrone (1997) used an explicit heading estimation task and found that humans could perceive heading from brief (400 ms) presentations of optic flow without oculomotor or layout information, even at high rotation rates, it remained unsettled whether their observers estimated path first from displacement cues and/or higher-order optic-flow information in the visual stimulus and then worked backward to derive heading. Our current study rules out these possibilities and shows that humans do indeed have access to heading information from the velocity field without additional visual information about path or environmental layout. Furthermore, the accuracy and precision of heading adjustments in our interactive task are within the required range for safe control of human locomotion (Cutting, Springer, Braren, & Johnson, 1992), suggesting that velocity-field information about heading is available for the online guidance and control of self-motion. 
Our data are silent on the perception of path. However, even if a curved path was perceived in our dynamic condition, inferred indirectly from the fact that egocentric heading remained constant over time between adjustments despite the rotational flow (thereby ruling out eye movements as the source of the rotation), this does not undermine our main conclusion. Any accurate percept of path in the dynamic condition using this inference requires estimating instantaneous translation and rotation first and then determining that successive heading measures were not changing over time. Thus, with our dynamic scene, heading estimation from the velocity field must precede the possibility of path estimation. 
Although our findings show that the velocity field provides reliable heading information, this does not imply that other visual cues such as path, displacement, or acceleration, when available, do not also play a role in self-motion perception. The significant effect of display condition on the signed heading error indeed suggests that optic-flow cues beyond the velocity field may make the recovery of translational flow (i.e., heading) more robust to rotational masking. This issue remains open to future study as the small performance difference we observed could be due to the additional visual cues to self-motion in the static scene, the presence of motion noise generated by the random redrawing of points in the dynamic scene, or the feedback provided during practice exclusively in the static condition. 
Acknowledgments
This study was supported by NASA's Airspace Systems (711-80-03) and Human Health & Performance (111-10-10) programs. We thank Brent Beutter, Joel Lachter, and Dov Adelstein for their helpful comments on a previous draft of this article. 
Commercial relationships: none. 
Corresponding author: Li Li. 
Email: lili@hku.hk. 
Address: Department of Psychology, The University of Hong Kong, Pokfulam Road, Hong Kong. 
References
Bair, W., & Movshon, J. A. (2004). Adaptive temporal integration of motion in direction-selective neurons in macaque visual cortex. The Journal of Neuroscience, 24, 7305–7323.
Bandopadhay, A., & Ballard, D. H. (1990). Egomotion perception using visual tracking. Computational Intelligence, 7, 39–47.
Banks, M. S., Ehrlich, S. M., Backus, B. T., & Crowell, J. A. (1996). Estimating heading during real and simulated eye movements. Vision Research, 36, 431–443.
Bruss, A. R., & Horn, B. P. (1983). Passive navigation. Computer Vision, Graphics, and Image Processing, 21, 3–20.
Burr, D. C. (1981). Temporal summation of moving images by the human visual system. Proceedings of the Royal Society of London: Series B, 211, 321–339.
Calderone, J. B., & Kaiser, M. K. (1989). Visual acceleration detection: Effect of sign and motion orientation. Perception & Psychophysics, 45, 391–394.
Cutting, J. E. (1986). Perception with an eye for motion. Cambridge, MA: MIT Press.
Cutting, J. E., Springer, K., Braren, P. A., & Johnson, S. H. (1992). Wayfinding on foot from information in retinal, not optical, flow. Journal of Experimental Psychology: General, 121, 41–72.
Ehrlich, S. M., Beck, D. M., Crowell, J. A., Freeman, T. C., & Banks, M. S. (1998). Depth information and perceived self-motion during simulated gaze rotations. Vision Research, 38, 3129–3145.
Fermuller, C., & Aloimonos, Y. (1995). Direct perception of three-dimensional motion from patterns of visual motion. Science, 270, 1973–1976.
Gibson, J. J. (1950). The perception of the visual world. Boston: Houghton Mifflin.
Heeger, D. J., & Jepson, A. D. (1990). Visual perception of three-dimensional motion. Neural Computation, 2, 129–137.
Hildreth, E. C. (1992). Recovering heading for visually-guided navigation. Vision Research, 32, 1177–1192.
Koenderink, J. J., & van Doorn, A. J. (1987). Facts on optic flow. Biological Cybernetics, 56, 247–254.
Lappe, M., & Rauschecker, J. P. (1993). A neural network for the processing of optic flow from ego-motion in higher animals. Neural Computation, 5, 374–391.
Li, L., & Warren, W. H., Jr. (2000). Perception of heading during rotation: Sufficiency of dense motion parallax and reference objects. Vision Research, 40, 3873–3894.
Longuet-Higgins, H. C., & Prazdny, K. (1980). The interpretation of a moving retinal image. Proceedings of the Royal Society of London: Series B, 208, 385–397.
McKee, S. P., & Welch, L. (1985). Sequential recruitment in the discrimination of velocity. Journal of the Optical Society of America A, Optics and Image Science, 2, 243–251.
Osborne, L. C., Bialek, W., & Lisberger, S. G. (2004). Time course of information about motion direction in visual area MT of macaque monkeys. The Journal of Neuroscience, 24, 3210–3222.
Perrone, J. A. (1992). Model for the computation of self-motion in biological systems. Journal of the Optical Society of America A, Optics and Image Science, 9, 177–194.
Perrone, J. A., & Stone, L. S. (1994). A model of self-motion estimation within primate extrastriate visual cortex. Vision Research, 34, 2917–2938.
Regan, D., & Beverley, K. I. (1982). How do we avoid confounding the direction we are looking and the direction we are moving? Science, 215, 194–196.
Rieger, J. H. (1983). Information in optical flows induced by curved paths of observation. Journal of the Optical Society of America, 73, 339–344.
Rieger, J. H., & Lawton, D. T. (1985). Processing differential image motion. Journal of the Optical Society of America A, Optics and Image Science, 2, 354–360.
Rieger, J. H., & Toet, L. (1985). Human visual navigation in the presence of 3-D rotations. Biological Cybernetics, 52, 377–381.
Royden, C. S. (1994). Analysis of misperceived observer motion during simulated eye rotations. Vision Research, 34, 3215–3222.
Royden, C. S. (1997). Mathematical analysis of motion-opponent mechanisms used in the determination of heading and depth. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 14, 2128–2143.
Royden, C. S., Banks, M. S., & Crowell, J. A. (1992). The perception of heading during eye movements. Nature, 360, 583–585.
Snowden, R. J., & Braddick, O. J. (1991). The temporal integration and resolution of velocity signals. Vision Research, 31, 907–914.
Stone, L. S., & Perrone, J. A. (1996). Translation and rotation trade off in human visual heading estimation. Investigative Ophthalmology and Visual Science, 37.
Stone, L. S., & Perrone, J. A. (1997). Human heading estimation during visually simulated curvilinear motion. Vision Research, 37, 573–590.
van den Berg, A. V. (1996). Judgments of heading. Vision Research, 36, 2337–2350.
van den Berg, A. V., & Brenner, E. (1994a). Humans combine the optic flow with static depth cues for robust perception of heading. Vision Research, 34, 2153–2167.
van den Berg, A. V., & Brenner, E. (1994b). Why two eyes are better than one for judgments of heading. Nature, 371, 700–702.
Wann, J. P., & Swapp, D. K. (2000). Why you should look where you are going. Nature Neuroscience, 3, 647–648.
Warren, W. H., Jr., Blackwell, A. W., Kurtz, K. J., Hatsopoulos, N. G., & Kalish, M. L. (1991). On the sufficiency of the velocity field for perception of heading. Biological Cybernetics, 65, 311–320.
Warren, W. H., Jr., & Hannon, D. J. (1988). Direction of self-motion is perceived from optical flow. Nature, 336, 162–163.
Warren, W. H., Jr., Kay, B. A., Zosh, W. D., Duchon, A. P., & Sahuc, S. (2001). Optic flow is used to control human walking. Nature Neuroscience, 4, 213–216.
Warren, W. H., Jr., Morris, M. W., & Kalish, M. (1988). Perception of translation heading from optical flow. Journal of Experimental Psychology: Human Perception and Performance, 14, 646–660.
Watamaniuk, S. N., & Sekuler, R. (1992). Temporal and spatial integration in dynamic random-dot stimuli. Vision Research, 32, 2341–2347.
Watson, A. B. (1979). Probability summation over time. Vision Research, 19, 515–522.
Watson, A. B. (1986). Temporal sensitivity. In K. Boff, L. Kaufman, & J. Thomas (Eds.), Handbook of perception and human performance. New York: Wiley.
Watson, A. B., & Turano, K. (1995). The optimal motion stimulus. Vision Research, 35, 325–336.
Zemel, R. S., & Sejnowski, T. J. (1998). A model for encoding multiple object motions and self-motion in area MST of primate visual cortex. The Journal of Neuroscience, 18, 531–547.