Research Article | January 2011
Perceiving path from optic flow
Li Li, Joseph C. K. Cheng
Journal of Vision January 2011, Vol. 11(1), 22. doi:https://doi.org/10.1167/11.1.22
Abstract

We examined how people perceive their path of traveling from optic flow. Observers viewed displays simulating their traveling on a circular path over a textured ground, a random-dot ground, or a dynamic random-dot ground display in which dots were periodically redrawn to remove extended dot motion trajectories (flow lines) from the flow field. Five viewing conditions were tested in which the simulated observer gaze direction pointed (1) toward a target on the path at 30° away from the initial heading, (2) toward a target at 15° outside of the path, (3) toward a target at 15° inside of the path, (4) along the instantaneous heading, or (5) along the Z-axis of the simulated environment. Path performance was similar for all three display conditions, indicating that observers did not rely on flow lines to perceive path from optic flow. Furthermore, contrary to the idea that looking where you want to go provides accurate path perception, path perception was accurate only when the simulated observer gaze direction pointed in the instantaneous heading direction. In contrast, heading perception was accurate and not affected by path curvature regardless of the simulated gaze direction. The results suggest that heading perception is more robust than path perception.

Introduction
Accurate perception and control of self-motion is essential for humans to successfully move around in the world. Gibson (1950) first postulated that humans use the visual motion of the image of the environment on the retina experienced during locomotion (optic flow) to determine components of self-motion. For example, when traveling on a straight path with no eye, head, or body rotation (pure translation), the focus of expansion (FOE) in the resulting radial retinal flow pattern indicates one's instantaneous direction of self-motion (heading) and can thus be used for the control of self-motion. To illustrate, to steer toward a target, we keep the FOE on the target; to stay in the lane during driving, we keep the FOE at the center of the road; and to steer to avoid an obstacle, we make sure the FOE is not on the obstacle. Previous psychophysical studies have shown that humans can locate the FOE in optic flow to estimate heading within 1° of visual angle (e.g., Crowell & Banks, 1993; Warren, Morris, & Kalish, 1988). 
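To make this concrete (a standard result, not specific to this article): for an eye with unit focal length, a point at depth Z that projects to image position (x, y) has, under pure observer translation T = (T_x, T_y, T_z), the image velocity

\dot{x} = \frac{x T_z - T_x}{Z}, \qquad \dot{y} = \frac{y T_z - T_y}{Z},

which vanishes at (x, y) = (T_x/T_z, T_y/T_z). That singular point is the FOE, the image of the heading direction, and its location does not depend on the depth structure of the scene. 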
Under more complex but natural conditions, such as traveling on a straight path with eye, head, or body rotation or traveling on a curved path (translation and rotation), the situation becomes more complicated. The retinal flow pattern is no longer radial, as the rotation in optic flow shifts the FOE away from the heading direction (Regan & Beverly, 1982). How do people recover heading in this case? Many mathematical models propose to use global flow and motion parallax information to remove the rotation and recover one's heading from a single 2D retinal velocity field (e.g., Bruss & Horn, 1983; Cutting, 1996; Fermuller & Aloimonos, 1995; Heeger & Jepson, 1990; Hildreth, 1992; Koenderink & van Doorn, 1987; Longuet-Higgins & Prazdny, 1980; Rieger & Lawton, 1985), a computation that has been implemented with neurophysiological models of primate extrastriate visual cortex (Lappe & Rauschecker, 1993; Perrone & Stone, 1994; Royden, 1997; Zemel & Sejnowski, 1998). To determine whether humans can perceive heading according to the computational models, a number of behavioral studies examined human heading perception during translation and rotation. While some studies reported that observers still needed extra-retinal information to remove the rotational component in the flow field for accurate heading perception at high rotation rates (e.g., Banks, Ehrlich, Backus, & Crowell, 1996; Royden, Banks, & Crowell, 1992), more studies reported that observers could estimate heading within 2° of visual angle by relying on information solely from optic flow, regardless of whether the rotation was due to simulated eye movement or path rotation (e.g., Cutting, Vishton, Flückiger, Baumberger, & Gerndt, 1997; Grigo & Lappe, 1999; Li, Chen, & Peng, 2009; Li, Sweet, & Stone, 2006; Stone & Perrone, 1997; van den Berg, 1992). However, among the studies that examined heading perception during rotation, many have confused heading perception with path trajectory perception and used a task in which participants were asked to judge their perceived future trajectory of locomotion with respect to an environmental reference point (e.g., Li & Warren, 2000; van den Berg, 1996; Warren, Mestre, Blackwell, & Morris, 1991). 
Although heading is an important component of self-motion and many locomotion control tasks can be achieved using heading, an equally important component of self-motion is one's future trajectory of traveling (path). The common locomotion control tasks that can be achieved using heading can be similarly accomplished using path trajectory information. The instantaneous heading direction coincides with the path trajectory when one travels on a straight path, but the two diverge when one follows a curved path: in the latter case, the instantaneous heading is along the tangent direction of the curving path trajectory (Figure 1). Since Regan and Beverly (1982) raised the question of how to perceive heading from optic flow during translation and rotation, research over the last three decades has focused almost exclusively on this issue. It is surprising that very few studies have examined what information in optic flow people use to perceive path and how people perceive path, and even fewer have examined the relationship between heading and path perception. 
Figure 1. An illustration of the relationship between the instantaneous heading direction and the path trajectory for traveling on a curved path.
It should be noted that while heading can be recovered from a single retinal velocity field during translation and rotation, path recovery requires more. A single velocity field gives no information about the source of the rotation in the retinal flow field, that is, whether it is induced by eye, head, or body rotation or is due to traveling on a curved path (Royden, 1994). The instantaneous velocity field during translation and rotation is associated with one heading but is consistent with a continuum of path scenarios ranging from traveling on a straight path with eye, head, or body rotation to a circular path with no eye, head, or body rotation (Banks et al., 1996; Li & Warren, 2000; Stone & Perrone, 1997; van den Berg, 1996). This path ambiguity problem can only be solved using information beyond a single retinal velocity field, such as the acceleration in the translational flow field (Rieger, 1983), dot motion over an extended amount of time (Royden, 1994), or extra-retinal signals (Banks et al., 1996). All these cues can be used to determine whether the rotational component in retinal flow is due to eye, head, body, or path rotation. However, even after observers correctly estimate the source of rotation in the flow field, it remains a question how observers perceive their trajectory path of traveling. 
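The ambiguity can be made explicit by adding the rotational terms to the translational velocity field given above (again a standard formulation under one common sign convention; cf. Longuet-Higgins & Prazdny, 1980). For an observer rotation ω = (ω_x, ω_y, ω_z),

\dot{x} = \frac{x T_z - T_x}{Z} + \omega_x x y - \omega_y (1 + x^2) + \omega_z y, \qquad \dot{y} = \frac{y T_z - T_y}{Z} + \omega_x (1 + y^2) - \omega_y x y - \omega_z x.

The field depends only on T, ω, and the depths Z. The same ω enters whether it arises from an eye, head, or body rotation on a straight path or from the path rotation of travel on a circular path, which is why a single velocity field cannot specify the path. 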
The theories on perceiving path from optic flow can be categorized by whether path perception depends on heading perception. We discuss the theories in each of the two categories in detail below. 
Heading-dependent path perception
Estimating path curvature from rotation in the flow field. We propose that after observers determine how much of the rotational component in the flow is due to path rotation, they can then estimate path curvature, which is mathematically given by the ratio of the path rotation rate to the translation speed. To locate their path of traveling, observers need to use their perceived heading as the reference direction to anchor the curving path trajectory (Figure 2a). 
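As a concrete illustration (a parameterization added here for exposition, not taken from the article): for travel at speed T on a circular path of curvature κ (radius r = 1/κ), starting at the origin with the initial heading along the Z-axis and the path curving rightward,

\big(X(t), Z(t)\big) = \big(r(1 - \cos\omega t),\ r\sin\omega t\big), \qquad \omega = \kappa T, \qquad \text{heading}(t) \propto (\dot{X}, \dot{Z}) = (\sin\omega t,\ \cos\omega t),

so the heading is the tangent to the path and rotates at the constant rate ω = κT. Inverting this relation gives κ = ω_path / T, and the perceived path is then the circle of curvature κ through the observer's position, tangent to the perceived heading. 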
Figure 2. Illustrations of heading-dependent path perception from optic flow: (a) Estimating path curvature from translation and rotation components in the flow field and then perceiving path relative to heading. (b) Estimating path curvature from the change of heading with respect to the translation speed and then perceiving path relative to heading. (c) Perceiving path by updating heading with respect to a reference object (e.g., a rigid environmental point) in the scene.
Estimating path curvature from the change of heading. If observers fixate along a fixed axis (e.g., the Z-axis) in the environment when traveling on a curved path (Figure 2b), the velocity field is radial as there is no rotation in the retinal flow at any moment in time. However, the FOE in the retinal velocity field drifts to reflect the changing heading direction due to traveling on a curved path. Theoretically, observers can estimate path curvature from the change of heading with respect to their translation speed and then perceive their path of traveling relative to heading (Figure 2b). 
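In the same notation (a sketch of the computation, not a claim about the mechanism observers actually use): if φ(t) denotes the heading azimuth recovered from the drifting FOE, φ changes at the rate κT, so two heading estimates taken a short time Δt apart give

\kappa \approx \frac{\varphi(t + \Delta t) - \varphi(t)}{T\,\Delta t},

i.e., the change in heading divided by the distance traveled. 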
Updating heading with respect to a reference object. Li and Warren (2000) have postulated that when the display contains rigid environmental points that can serve as reference objects, observers do not have to determine the rotational component in the flow field to estimate path curvature. A large field of view and dense motion parallax information in the display allow observers to recover their instantaneous heading in the retinal coordinate system, and the path of traveling in the world can then be recovered by updating heading with respect to a reference object in the scene (Figure 2c). 
Heading-independent path perception
Passing flow line. Lee and Lishman (1977) have noted that the observer's trajectory path of traveling on the ground is specified by the flow line (i.e., the extended motion streamlines of environmental points) that passes directly beneath the observer (Figure 3a). To define this flow line, a dense flow field that contains elements on or near the path of traveling is required. 
Figure 3. Illustrations of heading-independent path perception from optic flow. (a) Perceiving path by identifying the flow line that passes directly beneath. (b) Perceiving path by identifying the reversal boundary in the flow field. (c) Perceiving path by locating the center of the path using the normals to any two velocity vectors on the ground. (d) Perceiving path by integrating vertical flow lines in the flow field. (e) Perceiving path by updating self-position with respect to a reference object (e.g., a rigid environmental point) in the scene.
Reversal boundary. Warren et al. (1991) have noted that when the observer is traveling on a curved path on the ground, the path trajectory is also specified by the boundary between the points on the ground that reverse their horizontal direction of motion on the image plane and those that do not, as the former points lie on the inside of the path and the latter on the outside of the path (Figure 3b). Again, to locate such a boundary, a dense flow field with foreground motion is essential. 
Vector normals. Warren et al. (1991) have also proposed that when traveling on a circular path on the ground, rather than relying on flow lines, observers can locate the center of the path from the motion of a few points on the ground. Mathematically, the normals to any two velocity vectors on the ground intersect at the path center (i.e., the center of rotation). Given the path center and their current position, observers can subsequently determine their path of traveling (Figure 3c). Warren et al. (1991) showed that observers could accurately estimate their future path even when the display contained only two dots randomly distributed on the ground, thus supporting the use of the vector normal but not the passing flow line or the reversal boundary strategy for path perception. Note that to support accurate path perception, the above three theories assume that the simulated observer gaze direction is aligned with the observer's instantaneous heading direction in the display, i.e., observers look where they are going at each moment in time. 
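The geometry of the vector normal strategy is easy to verify numerically. Below is a minimal sketch (an illustration with hypothetical point positions, not code from this study); coordinates are observer-centered ground-plane (X, Z) positions in meters.

    import numpy as np

    def path_center_from_two_vectors(p1, v1, p2, v2):
        # Each normal is the line through p_i perpendicular to its velocity v_i:
        #     v_i . (x - p_i) = 0.
        # Solving the two linear equations gives the intersection of the normals,
        # i.e., the estimated center of the circular path.
        A = np.array([v1, v2], dtype=float)
        b = np.array([np.dot(v1, p1), np.dot(v2, p2)], dtype=float)
        return np.linalg.solve(A, b)

    # Synthetic check with hypothetical numbers: observer at the origin heading
    # along +Z, circling rightward about C = (1/kappa, 0). Relative to the
    # observer, the rigid scene rotates about C, so each ground point's velocity
    # is perpendicular to the line joining it to C.
    T, kappa = 3.0, 0.035                      # translation speed (m/s), curvature (1/m)
    omega = T * kappa                          # path rotation rate (rad/s)
    C = np.array([1.0 / kappa, 0.0])

    def relative_velocity(p):
        dx, dz = p - C
        return omega * np.array([-dz, dx])     # perpendicular to (p - C)

    p1, p2 = np.array([2.0, 8.0]), np.array([-3.0, 15.0])
    C_hat = path_center_from_two_vectors(p1, relative_velocity(p1),
                                         p2, relative_velocity(p2))
    print(C_hat, 1.0 / np.linalg.norm(C_hat))  # ~[28.57, 0.0], curvature ~0.035

Any two points whose velocity vectors are not parallel suffice, which is consistent with Warren et al.'s (1991) finding that two dots were enough for accurate path judgments. 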
Integrating vertical flow lines. More recently, another theory emphasizing direct path perception from optic flow proposes that accurate path perception requires observers to look where they want to go. That is, when an observer traveling on a curved path fixates a target on the future path, the flow lines of points on the path are vertical (Figure 3d); observers can thus recover their path of traveling by integrating all vertical flow lines in the flow field without recovering heading (Kim & Turvey, 1999; Wann & Land, 2000; Wann & Swapp, 2000). Although mathematically possible (Wann & Swapp, 2000), empirical evidence supporting the use of this strategy for path perception in humans is lacking. Some recent studies by Wilkie et al. (2008, 2010) showed that when steering around a bend, instead of looking at the tangent point of the inside edge of the road as reported by Land and Lee (1994), observers actually directed their gaze at a point on the road 1–2 s ahead. This finding, however, only indirectly supports the theory. 
Updating self-position with respect to a reference object. Similar to updating heading with respect to a reference object in the scene to recover path (Li & Warren, 2000), Li et al. (2006) have also proposed that path can be recovered by updating the perceived self-position relative to a reference object (such as a rigid environmental point) in the scene (Figure 3e). In this case, observers do not need to perceive their heading first to perceive their trajectory path of traveling. 
The purpose of the current study was thus to systematically test which of the above theories explains human path perception, to find out whether people perceive path directly from optic flow independent of heading and what specific cues people use for path perception. First, we tested three display conditions. Specifically, we presented observers with displays simulating their traveling on a circular path over a textured ground, a random-dot ground, or a dynamic random-dot ground display (Figure 4). For the textured ground display, the ground plane was mapped with a green multi-scale texture to provide the dense flow field required for the use of flow lines in path perception. For the random-dot ground display, the ground plane was composed of randomly distributed dots to provide a sparse flow field with a limited number of flow lines. For the dynamic random-dot ground display, the lifetime of the dots on the ground was limited to 100 ms (6 frames at 60 Hz) to match the integration time of local motion processing (Burr, 1981; Watson & Turano, 1995), thus providing a sequence of velocity fields but no temporally integrated flow lines or rigid environmental points that could serve as reference objects for path perception. If observers relied on flow lines or fixed environmental points for path perception (such as recovering path from the passing flow line or the reversal boundary, by integrating vertical flow lines, or by updating heading or self-position with respect to a reference object), their path perception should be most accurate for the textured ground display, followed by the random-dot ground display. On the other hand, if observers used the vector normal strategy or estimated path curvature from the rotation in the flow field and then perceived path relative to heading, their path performance for the three display conditions should be similarly accurate. 
Figure 4. Illustrations of the display conditions: (a) A textured ground and (b) a random-dot ground. The fixation cross appears at the center of the display at the beginning of the trial.
Second, we varied the simulated observer gaze direction in the display to examine whether fixating a target on the future path helps path perception, as required by the theory of integrating vertical flow lines for path perception (Kim & Turvey, 1999; Wann & Swapp, 2000), and to investigate whether observers could estimate path curvature from the drift of heading in a radial flow field. Specifically, in Experiment 1, we tested five viewing conditions in which the simulated observer gaze direction pointed (1) toward a target on the path trajectory at 30° away from the initial heading, (2) toward a target at 15° outside of the path trajectory, (3) toward a target at 15° inside of the path trajectory, (4) along the instantaneous heading, or (5) along the Z-axis of the simulated environment (Figure 5). If observers needed to fixate a target on the future path to integrate all the vertical flow lines in the flow field for path trajectory recovery, their path perception should be accurate only when the simulated observer gaze direction was on a target on the path. On the other hand, if observers could estimate path curvature from the change of heading when there was no rotation in the flow field, path perception should also be accurate when the simulated observer gaze direction was along the Z-axis in the world. 
Figure 5. A schematic illustration of the bird's-eye view of the five viewing conditions. The dotted green line indicates the simulated observer gaze direction on a target at eye height on the path at 30° away from the initial heading (target-on-path condition), the dotted blue line indicates the gaze direction on a target at eye height placed at 15° outside of the path at the beginning of the trial (target-outside-path condition), the dotted magenta line indicates the gaze direction on a target at eye height placed at 15° inside of the path at the beginning of the trial (target-inside-path condition), the cyan solid line indicates the gaze direction along the instantaneous heading (gaze-along-heading condition), and the solid black line indicates the gaze direction along the Z-axis of the environment (gaze-along-Z-axis condition).
Finally, as many previous studies on heading perception during rotation confused heading perception with path perception, to assess whether heading perception is indeed different from path perception, in Experiment 2 we examined observers' heading perception in the three target-fixation viewing conditions with simulated eye rotation. We expected that if, unlike path perception, heading perception during translation and rotation requires only successful removal of the rotation in the flow field but not attribution of its source, observers should be able to perceive their heading even when traveling on a curved path with simulated eye movements. Furthermore, if observers could perceive heading using information from the instantaneous velocity field of optic flow, their heading performance should be similar for all three display conditions. 
Experiment 1: Path perception
As mentioned above, in this experiment, we varied the simulated observer gaze direction in the scene and examined observers' path perception on three types of displays. At the end of the trial, a probe appeared on the ground at a distance of 10 m, and observers were asked to use the mouse to place the probe on their perceived future path trajectory. The logic of the experiment was as follows. As the textured ground display provided a dense flow field and a large number of rigid elements on the ground, if observers relied on any of the flow-line-based strategies or used the rigid points on the ground as reference objects to perceive path from optic flow, their performance should be most accurate for the textured ground display, followed by the random-dot ground display. In contrast, if observers could estimate path curvature from the rotation in the flow field and then perceive path relative to heading, their performance on the three types of displays should be similar. Likewise, if observers relied on the vector normal strategy for path perception, their performance on the three types of displays should also be similar and converge with the mathematical predictions of locating the center of the path using the intersection point of vector normals. Furthermore, if observers relied on integrating the vertical flow lines in the flow field to recover the path trajectory, their path performance should be accurate only when the simulated observer gaze direction was on a target on the path (see Kim & Turvey, 1999; Wann & Swapp, 2000). Finally, if observers could estimate path curvature from the change of heading relative to their translation speed when the retinal flow did not contain any rotation, their path performance should also be accurate when the simulated observer gaze direction was fixed along the Z-axis in the world. 
Methods
Participants
Eight students and staff (all naive as to the specific goals of the study; three males, five females) between the ages of 21 and 30 at the University of Hong Kong participated in the experiment. All had normal or corrected-to-normal vision. One student (female) showed unstable path performance compared with the other participants (overall SD > 15°) and was thus excluded from further data analyses. 
Visual stimuli
The display (110°H × 94°V) simulated an observer traveling on a circular path (T = 3 m/s, R = ±3°/s, ±4.5°/s, or ±6°/s; path curvature = ±0.017 m⁻¹, ±0.026 m⁻¹, or ±0.035 m⁻¹; negative values indicate leftward curvature and positive values rightward curvature) over a ground plane (depth range: 1.41–50 m). These three path curvatures were well above the threshold for observers to detect their traveling on a circular path (Turano & Wang, 1994). Three types of displays were tested: (1) a textured ground in which the ground plane was mapped with a green multi-scale texture composed of a low-pass-filtered Julesz pattern with a power spectrum of 1/f (maximum luminance contrast +99%), thus providing a dense flow field (Figure 4a); (2) a random-dot ground in which the ground plane was composed of 300 green dots (0.5° in diameter, luminance contrast +99%) that were randomly placed on the ground plane such that about the same number of dots at each distance in depth was displayed on each frame. This display thus provided a sparse flow field with foreground motion (Figure 4b); and (3) a dynamic random-dot ground display in which the lifetime of the dots on the ground plane was limited to 100 ms (6 frames at 60 Hz) to match the known psychophysical integration time of local motion processing (Burr, 1981; Watson & Turano, 1995). The dot lifetime was chosen to be as short as possible without degrading motion perception per se, to provide a sequence of velocity fields but no temporally integrated flow lines or acceleration information (Stone & Ersheid, 2006). For both random-dot ground displays, the number of visible dots per frame and the dot density distribution in depth were kept constant throughout the trial, i.e., if a certain number of dots moved outside of the field of view in one frame, the same number of dots were regenerated in that frame with an algorithm that maintained the depth layout of the dots on the ground plane. The background sky was black in all three display conditions. 
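The dot-handling logic of the two random-dot displays can be summarized with the following sketch (a reconstruction from the description above, not the authors' code; the lateral extent of the ground and the uniform-in-depth sampling are assumptions, and the simulated gaze rotation is omitted for brevity).

    import numpy as np

    N_DOTS, LIFETIME_FRAMES = 300, 6          # 6 frames at 60 Hz = 100 ms
    DEPTH_RANGE = (1.41, 50.0)                # ground depth range (m)
    X_RANGE = (-40.0, 40.0)                   # lateral extent (m); assumed value
    rng = np.random.default_rng(0)

    def new_dots(n):
        # Redraw dots so that about the same number appears at each depth.
        x = rng.uniform(*X_RANGE, n)
        z = rng.uniform(*DEPTH_RANGE, n)
        return np.column_stack([x, z])

    dots = new_dots(N_DOTS)
    age = rng.integers(0, LIFETIME_FRAMES, N_DOTS)   # desynchronize dot lifetimes

    def update(dots, age, dx, dz):
        # Advance one frame: shift the ground opposite to the observer's
        # displacement (dx, dz), then respawn dots that have expired or left the
        # simulated depth range, keeping the dot count and depth layout constant.
        dots = dots - np.array([dx, dz])
        age = age + 1
        expired = (age >= LIFETIME_FRAMES) \
                  | (dots[:, 1] < DEPTH_RANGE[0]) | (dots[:, 1] > DEPTH_RANGE[1])
        dots[expired] = new_dots(int(expired.sum()))
        age[expired] = 0
        return dots, age

Making LIFETIME_FRAMES longer than a trial reduces this to the static random-dot ground, in which only dots that leave the field of view are redrawn. 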
The three display conditions were crossed with five viewing conditions (Figure 5) in which the simulated observer gaze direction was (1) on a target at eye height on the path at 30° away from the initial heading¹ (target-on-path condition), (2) on a target at eye height at 15° outside of the path (target-outside-path condition), (3) on a target at eye height at 15° inside of the path (target-inside-path condition), (4) along the instantaneous heading (gaze-along-heading condition), or (5) along the Z-axis of the simulated environment (gaze-along-Z-axis condition). The viewing distance of the fixation target in the target-on-path viewing condition was used to place the target for the target-outside-path and the target-inside-path viewing conditions at the beginning of the trial. Given that the same initial target-heading angle resulted in different target viewing distances for different path curvatures, the initial target viewing distances were 57.3 m, 38.2 m, and 28.7 m for the path curvatures of 0.017 m⁻¹, 0.026 m⁻¹, and 0.035 m⁻¹, respectively. Figure 6 shows a sample velocity field for each of the five viewing conditions. 
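These distances follow from the circle geometry: by the tangent-chord angle, a target on the path whose direction is β away from the initial heading lies at a central angle of 2β, so its straight-line distance from the observer is

d = 2r\sin\beta = \frac{2\sin\beta}{\kappa},

and with β = 30°, d = r = 1/κ, which for rotation rates of 3°/s, 4.5°/s, and 6°/s at T = 3 m/s gives the 57.3 m, 38.2 m, and 28.7 m quoted above. 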
Figure 6. Sample velocity fields of the random-dot ground display (path curvature = −0.035 m⁻¹) for the (a) target-on-path, (b) target-outside-path, (c) target-inside-path, (d) gaze-along-heading, and (e) gaze-along-Z-axis viewing conditions. The cross at the center of the display indicates the simulated observer gaze direction. The solid red circle indicates heading at the end of the trial, and the black horizontal line indicates the extent to which heading drifts throughout the trial. Each of the other black lines corresponds to the velocity vector associated with a dot (blue) in the environment, and the red curve indicates the trajectory path of traveling.
The visual stimuli were generated on a Dell Precision Workstation 670n with an NVIDIA Quadro FX 1800 graphics card at a frame rate of 60 Hz. They were rear-projected on a large screen (110°H × 94°V) with an Epson EMP-9300 LCD projector (native resolution: 1400 × 1050 pixels, refresh rate: 60 Hz) in a light-excluded viewing booth. The screen edges were covered in matte black cloth to minimize the availability of an artificial frame of reference. Participants viewed the visual stimuli monocularly with their dominant eye from a chin rest. The simulated eye height in the display was 1.51 m, corresponding to the average eye height of participants sitting on a high chair 0.56 m away from the screen. 
Procedure
At the beginning of each trial, observers were asked to fixate on a cross at the center of the screen corresponding to the simulated observer gaze direction in the scene. The cross disappeared when observers clicked a mouse button to start a trial that lasted 1 s; this was to remove any extraneous relative motion in the display during the course of the trial. Observers were, however, instructed to keep fixating the center of the screen (i.e., the simulated observer gaze direction) throughout the trial. At the end of the trial, a blue probe (5.6°V) appeared on the ground at a distance of 10 m, in a random position within ±20° from the center of the screen (negative values to the left and positive values to the right of the center of the screen). Observers were asked to use the mouse to place the probe on their perceived future path trajectory (so that they would hit it if they continued traveling). The angle between the perceived and the actual path position at 10 m, defined as the path error, was measured. Positive values indicate that the perceived path is more curved than the actual path (i.e., inside the actual path trajectory), and negative values indicate that the perceived path is less curved than the actual path (i.e., outside of the actual path trajectory). 
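For reference, the visual direction of the true path point at the probe distance can be worked out from the same circular geometry; the short sketch below (an illustration that assumes the 10-m probe distance is the straight-line ground distance from the observer) shows where the actual path lies at that distance.

    import numpy as np

    def true_path_azimuth(kappa, d=10.0):
        # A point on a circular path of curvature kappa at chord distance d from
        # the observer lies arcsin(kappa * d / 2) away from the instantaneous
        # heading, toward the inside of the path.
        return np.degrees(np.arcsin(kappa * d / 2.0))

    for kappa in (0.017, 0.026, 0.035):
        print(kappa, round(float(true_path_azimuth(kappa)), 1))
    # -> about 4.9, 7.5, and 10.1 deg. The signed difference between the probe's
    #    azimuth and this direction (positive toward the inside of the path) is
    #    the path error plotted in Figures 7 and 8.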
Each participant viewed all three types of displays in all five viewing conditions. The experiment was composed of three sessions, corresponding to the three display conditions, with each session containing 450 randomized trials (15 trials × 5 viewing conditions × 6 path curvatures). Before each session started, participants received 90 randomized practice trials (3 trials × 5 viewing conditions × 6 path curvatures) to make sure that they understood the instructions and were able to perform the task. No feedback was given during the practice or experimental trials. The testing order of the three sessions (i.e., the display conditions) was counterbalanced between participants. The whole experiment lasted less than 1.5 h. 
Results
Given symmetrical path performance for left and right path curvatures, we collapsed the path response data across left and right path curvatures to generate measures of path error as a function of path curvature. Figure 7 plots the mean path error averaged across seven observers as a function of path curvature for the three display conditions and the five viewing conditions, respectively. Positive path errors indicate an overestimation of path curvature and negative errors indicate an underestimation. A flat function at 0° indicates accurate path perception not affected by path curvature, whereas a positive slope indicates that observers progressively perceived a more curved path with increased path curvature and a negative slope indicates the opposite. 
Figure 7. Mean path error as a function of path curvature for the three display conditions in the (a) target-on-path, (b) target-outside-path, (c) target-inside-path, (d) gaze-along-heading, and (e) gaze-along-Z-axis viewing conditions. The dotted horizontal gray line indicates perfect performance. The dashed black line indicates the performance predicted by locating the path center using the normals to any two velocity vectors on the ground, and the dotted black line indicates the performance predicted by perceiving traveling on a straight path with zero rotation. Error bars are SEs across seven participants.
For each of the five viewing conditions, a 3 (display type) × 3 (path curvature) repeated-measures ANOVA was conducted. For all but the gaze-along-heading viewing condition, only the main effect of path curvature was significant (F(2,12) > 88.63, p ≪ 0.0001). For the gaze-along-heading viewing condition, no effect was significant. For all but the gaze-along-heading viewing condition, path errors decreased with path curvature, indicating that observers progressively perceived a less curved path as path curvature increased. In contrast, for the gaze-along-heading condition, path errors were small and not affected by path curvature. 
To better depict the change in path error as a function of path curvature for the five viewing conditions, given the lack of effect of display condition, we collapsed data across the three display types. Figure 8 plots the mean path error averaged across the three display conditions and seven observers as a function of path curvature for the five viewing conditions. A 5 (viewing condition) × 3 (path curvature) repeated-measures ANOVA on path errors revealed that the main effects of viewing condition and path curvature as well as the interaction effect of viewing condition and path curvature were all highly significant (F(4,24) = 27.49, p ≪ 0.0001, F(2,12) = 522.59, p ≪ 0.0001, and F(8,48) = 14.81, p ≪ 0.0001). The highly significant interaction effect prompted us to perform separate one-way repeated-measures ANOVAs for the five viewing conditions. Again, we found that the effect of path curvature was highly significant for all (F(2,12) = 88.63, p ≪ 0.0001) but the gaze-along-heading viewing condition (F(2,12) = 0.88, p = 0.44). That is, while the mean path error for the target-on-path, target-outside-path, target-inside-path, and the gaze-along-Z-axis conditions decreased with path curvature (mean slope ± SE: −1.66 ± 0.16, −1.92 ± 0.14, −1.62 ± 0.16, and −2.33 ± 0.12, respectively), the mean path error was not affected by path curvature for the gaze-along-heading viewing condition (−0.23 ± 0.27). 
Figure 8. Mean path error averaged across the three display conditions for the five viewing conditions. The dotted horizontal gray line indicates perfect performance. Error bars are SEs across seven participants.
Separate t-tests showed that the path errors for the gaze-along-heading viewing condition (mean ± SE: 1.11° ± 0.93°) were not significantly different from zero (t(20) = 1.88, p = 0.08), while the path errors for the target-inside-path viewing condition (4.05° ± 2.07°) were significantly larger than zero (t(20) = 3.1, p < 0.01), and the path errors for the target-on-path (−2.73° ± 1.63°), target-outside-path (−6.77° ± 1.08°), and gaze-along-Z-axis conditions (−10.6° ± 0.63°) were significantly smaller than zero (t(20) = −2.54, p < 0.05, t(20) = −8.14, p ≪ 0.0001, and t(20) = −14.22, p ≪ 0.0001, respectively). This indicates that path performance was accurate only for the gaze-along-heading viewing condition. For the target-inside-path viewing condition, observers overestimated the path curvature, and for the remaining three viewing conditions, observers underestimated the path curvature. The overall underestimation of the path curvature was the largest for the gaze-along-Z-axis condition, followed by the target-outside-path and the target-on-path conditions (t(20) > 4.18, p < 0.001). 
To examine whether observers used vector normals for path perception, we plotted in Figure 7 the predicted path errors assuming that observers located the center of the path using the intersection point of the normals to any two velocity vectors on the ground. We used the average velocity field during the course of a trial for this prediction. For the gaze-along-heading viewing condition, the vector normal prediction led to accurate path perception corresponding to a flat function at 0° (Figure 7d). For all three target-fixation viewing conditions (Figures 7a–7c), t-tests showed that the mean slope of path error as a function of path curvature averaged across the three display conditions was significantly different from that predicted by the vector normal strategy (for the target-on-path condition: −1.66 vs. −0.85, t(6) = −5.11, p < 0.01; for the target-outside-path condition: −1.92 vs. −1.32, t(6) = −4.15, p < 0.01; and for the target-inside-path condition: −1.62 vs. 0.46, t(6) = −7.38, p < 0.001). Furthermore, for the target-inside-path condition, the vector normal strategy predicted negative path errors while the performance data showed positive path errors (Figure 7c). The path performance for the three target-fixation viewing conditions thus indicates that observers did not rely on the vector normal strategy for path perception. 
Given that the velocity field for the gaze-along-Z-axis viewing condition was radial (Figure 6e), the vector normal strategy did not apply to this viewing condition due to the lack of rotation in the retinal flow field. To examine whether observers perceived traveling on a straight path and placed the probe in their perceived instantaneous heading direction in this case, we plotted in Figure 7e the predicted path errors assuming that observers were placing the probe on the FOE in the flow field. Again, we used the average velocity field during the course of the trial for this prediction. T-tests showed that for all three path curvatures tested, the mean path error averaged across the three display conditions was not significantly different from the predicted value (t(6) < −1.66, p > 0.14). Accordingly, the mean slope of path error as a function of path curvature averaged across the three display conditions was not significantly different from that predicted by the straight-path response (−2.33 vs. −2.18, t(6) = −1.18, p = 0.28), indicating that observers indeed responded as if they were traveling on a straight path. 
Discussion
We can draw the following conclusions from the results in the current experiment. First, despite the fact that the dynamic random-dot ground displays contain spurious motion noise due to the scintillating dots, path performance for this display condition is comparable to that for the random-dot and the textured ground display conditions. This indicates that observers did not rely on flow lines or self-displacement cues relative to the rigid environmental points provided by the latter two display conditions for path perception. 
Second, for the target-on-path viewing condition, path errors are mostly below 0° and display a significant negative slope (−1.66), indicating an overall underestimation of path curvature as path curvature increases. This shows that contrary to the idea that observers fixate a target on the future path and then integrate all the vertical lines in the flow field to perceive their path of traveling (Kim & Turvey, 1999; Wann & Swapp, 2000), looking where you want to go does not help path perception. 
Third, the large path errors observed for the gaze-along-Z-axis viewing condition indicate that observers could not use the drift of heading over time (shown as the black horizontal line in Figure 6e) to estimate path curvature. Instead, observers perceived traveling on a straight path. This is different from previous findings showing that when the simulated observer orientation (i.e., gaze direction) in the display was fixed in space, observers could still adequately regenerate the path trajectory they traveled on, although it was significantly different from veridical (Bertin & Israel, 2005; Bertin, Israel, & Lappe, 2000). We surmise that our different findings are due to the following two reasons. First, the displays were presented for 1 s in the current experiment, while the display duration was 8 s in the studies by Bertin and Israel (2005) and Bertin et al. (2000). It is possible that using the drift of heading on the screen to perceive path curvature takes time; thus, the longer the display duration, the more likely observers would perceive traveling on a curved path. Second, in the current experiment, observers were asked to place a probe on their perceived future path, while in their studies, observers were asked to draw their perceived self-movement on a tablet. Consequently, the adequate but not veridical path responses in their studies could still lead to large path errors using our measurement of path perception. 
In summary, the findings from this experiment indicate that path perception is accurate only when the simulated observer gaze direction points in the instantaneous heading direction, i.e., looking where you are going but not where you want to go gives accurate path perception. This is consistent with previous findings showing that observers drew the most veridical path trajectory when the simulated observer body orientation (i.e., observer gaze direction) in the display was tangential to the circular path traveled (Bertin & Israel, 2005; Bertin et al., 2000). As the vector normal strategy predicts accurate path perception for the gaze-along-heading viewing condition, it is tempting to conclude that observers used such a strategy for path perception, as reported by Warren et al. (1991). Indeed, in the study by Warren et al. (1991), the simulated observer gaze direction in the scene always pointed in the heading direction. However, for the three target-fixation viewing conditions, the observed path errors do not converge with the predicted values of the vector normal strategy, indicating that observers did not use vector normals to locate the center of the path for path perception. 
Then, how do observers perceive their trajectory path of traveling from optic flow? The results from the current experiment rule out all but one theory for path perception mentioned in the Introduction section. We propose that observers estimate path curvature from rotation and translation components in the velocity field and then perceive path relative to their perceived heading. For the gaze-along-heading viewing condition, the rotation in the flow field is solely due to path rotation, and observers can thus straightforwardly estimate path curvature. However, for the three target-fixation viewing conditions, the simulated observer gaze direction is on a fixed target in the environment. The rotation in the flow field is due to both path rotation and the simulated eye rotation for tracking the fixed target in the environment while traveling on a circular path. As the velocity field gives no information about whether the rotation in the flow field is due to eye rotation or traveling on a curved path (see Royden, 1994), it is likely that observers could not accurately estimate the amount of rotation due to path rotation and thus misperceived their path of traveling. In fact, several studies reported that when there was simulated observer eye or body rotation in the display, observers wrongly attributed their simulated eye or body rotation to path rotation and misperceived the traveled path curvature (Bertin & Israel, 2005; Bertin et al., 2000; Ehrlich, Beck, Crowell, Freeman, & Banks, 1998). Given that the rotation in the retinal flow is from both the simulated eye rotation and path rotation, if observers attributed all rotations in the display to path rotation when estimating path curvature, the path errors for the three target-fixation viewing conditions would be the same as the predictions of the vector normal strategy. Thus, the finding that observers did not use vector normals for path perception also supports the idea that observers did not combine the simulated eye rotation with path rotation for the estimation of path curvature. 
Our theory assumes that heading perception precedes path perception. While accurate path perception requires correct estimation of path rotation in the flow field, accurate heading perception during translation and rotation only needs successful removal of the rotational component in the flow field (Banks et al., 1996; Li & Warren, 2000), a computation that is mathematically possible from a single 2D retinal velocity field (e.g., Bruss & Horn, 1983; Cutting, 1996; Fermuller & Aloimonos, 1995; Heeger & Jepson, 1990; Hildreth, 1992; Koenderink & van Doorn, 1987; Longuet-Higgins & Prazdny, 1980; Rieger & Lawton, 1985). We thus hypothesize that perceiving heading should be more robust to the simulated eye rotation than perceiving path from optic flow, i.e., heading perception should not show as much dependence on gaze direction compared with path perception. In the next experiment, we directly tested this hypothesis and examined whether observers could accurately perceive heading regardless of the simulated observer gaze direction. 
Experiment 2: Heading perception
It has been reported that people can accurately perceive heading when traveling on a circular path (Li et al., 2009, 2006; Stone & Perrone, 1997) or when traveling on a straight path with simulated eye rotation (e.g., Cutting et al., 1997; Grigo & Lappe, 1999; Li & Warren, 2000, 2004; van den Berg, 1992). To the best of our knowledge, no study has examined heading perception when traveling on a curved path with simulated eye rotation. Although van den Berg, Beintema, and Frens (2001) reported that observers appeared to underestimate heading when traveling on a curved path with simulated eye rotation, they only measured observers' path perception and derived heading as the tangent to the perceived path. Thus, the heading measurement in their study did not necessarily reflect observers' heading performance. 
In this experiment, we examined observers' heading perception for the three target-fixation viewing conditions with the three display types used in Experiment 1. The goal was to find out whether heading perception was affected by the simulated observer gaze direction in the same way as path perception in Experiment 1. Thus, at the end of a trial, instead of placing a probe on the perceived future path trajectory, a horizontal line appeared at the center of the screen, and observers were asked to use the mouse to move a vertical bar on the horizontal line to indicate their perceived instantaneous direction of traveling at the end of the trial. We did not test the other two viewing conditions from Experiment 1 for the following reasons. For the gaze-along-heading viewing condition, the rotation in the flow field was solely due to path rotation, and previous studies reported that observers could accurately perceive heading within 2° of visual angle in this case (Li et al., 2009, 2006; Stone & Perrone, 1997). For the gaze-along-Z-axis viewing condition, the velocity field was radial and did not contain any rotation. Many previous studies reported that given a radial flow field, observers could locate the FOE in the flow field and recover heading within 1° of visual angle (e.g., Crowell & Banks, 1993; Warren et al., 1988). We hypothesized that if heading perception during translation and rotation required only successful removal but not attribution of the source of rotation in the flow field, observers should be able to perceive their heading in the presence of the simulated eye rotation when traveling on a curved path, i.e., their heading performance should be accurate for all three target-fixation viewing conditions. Furthermore, if observers could perceive heading from the instantaneous velocity field without access to higher order optic flow information, their heading performance should be similar for all three display conditions. 
Methods
Participants
Nine students and staff (all naive as to the specific goals of the study; four males, five females) between the ages of 19 and 30 at the University of Hong Kong participated in the experiment. Among them, two (one male and one female) had also participated in Experiment 1. All had normal or corrected-to-normal vision. One student (female) showed unstable heading performance compared with the other participants (overall SD > 15°) and was thus excluded from further data analyses. 
Visual stimuli
The visual stimuli were the same as in Experiment 1. All three display conditions were used. However, only the three target-fixation viewing conditions (i.e., target-on-path, target-outside-path, and target-inside-path conditions) were tested in this experiment. 
Procedure
On each trial, observers were asked to fixate on a cross at the center of the screen. The cross disappeared when observers clicked a mouse button to start a trial that lasted 1 s. Observers were instructed to keep on fixating the center of the screen (i.e., the simulated observer gaze direction) throughout the trial. At the end of the trial, a white horizontal line appeared at the center of the screen on a gray background, and observers were asked to use the mouse to move a white vertical bar (3.7°V) that appeared in a random position within ±20° from the center of the screen along the horizontal line to indicate their perceived instantaneous direction of traveling, i.e., heading, at the end of the trial. The angle between the perceived heading and the actual heading at the end of the trial, defined as heading error, was measured. Positive values indicate that the perceived heading is biased toward the direction of path curvature and negative values indicate the opposite. 
Each participant viewed all three display types in the three target-fixation viewing conditions. The experiment was composed of three sessions, corresponding to the three display types, with each session containing 270 randomized trials (15 trials × 3 viewing conditions × 6 path curvatures). Before each session started, participants received 54 randomized practice trials (3 trials × 3 viewing conditions × 6 path curvatures) to make sure that they understood the instructions and were able to perform the task. No feedback was given during the practice or experimental trials. The testing order of the three sessions (i.e., the display conditions) was counterbalanced between participants. The whole experiment lasted less than 1 h. 
Results and discussion
Given symmetrical heading performance for left and right path curvatures, we collapsed the heading response data across left and right path curvatures to generate measures of heading error as a function of path curvature. Figures 9a–9c plot the mean heading error averaged across eight observers as a function of path curvature for the three display conditions and the three target-fixation viewing conditions, respectively. Positive heading errors indicate a heading bias toward the direction of path curvature and negative values indicate the opposite. A flat function at 0° indicates accurate heading perception not affected by path curvature, whereas a positive slope indicates that observers perceived heading progressively in the direction of path curvature and a negative slope indicates that observers perceived heading progressively in the opposite direction of path curvature as it increased. 
Figure 9. Mean heading error as a function of path curvature for the three display conditions in the (a) target-on-path, (b) target-outside-path, and (c) target-inside-path viewing conditions. The dotted horizontal gray line indicates perfect performance. Error bars are SEs across eight participants.
For each of the three viewing conditions, a 3 (display type) × 3 (path curvature) repeated-measures ANOVA was conducted. For all three viewing conditions, no effect was significant, indicating that heading performance was not affected by path curvature. To better depict the change in heading error as a function of path curvature for the three viewing conditions, we then collapsed data across the three display conditions. Figure 10 plots the mean heading error averaged across the three display conditions and eight observers as a function of path curvature for the three viewing conditions. 
Figure 10. Mean heading error averaged across the three display conditions for the three viewing conditions. The dotted horizontal gray line indicates perfect performance. Error bars are SEs across eight participants.
A 3 (viewing condition) × 3 (path curvature) repeated-measures ANOVA on heading errors revealed that only the main effect of viewing condition was significant (F(2,14) = 13.01, p < 0.001). Tukey HSD tests showed that the heading errors for the target-inside-path viewing condition (mean ± SE: 4.17° ± 1.95°) were significantly larger than those for the target-on-path (−0.53° ± 1.2°) and the target-outside-path (−2.09° ± 1.15°) conditions (p < 0.01 and p < 0.001, respectively), and the heading errors for the latter two conditions were not significantly different from each other (p = 0.46). Separate t-tests showed that the heading errors for the target-on-path viewing condition were not significantly different from zero (t(23) = −0.79, p = 0.44), and the heading errors for the target-inside-path viewing condition were significantly larger than zero (t(23) = 3.86, p < 0.001). This indicates that while heading performance was not affected by path curvature for all viewing conditions, it was more accurate for the target-on-path and the target-outside-path conditions than for the target-inside-path condition. Given that the different target positions in the three target-fixation viewing conditions resulted in larger heading eccentricities for the target-inside-path (about 45°) than for the target-outside-path (about 15°) and the target-on-path (about 30°) viewing conditions (see Figure 6), the larger heading errors observed for the target-inside-path condition (about 4.2°) were consistent with the increased heading discrimination threshold at a similar heading eccentricity range reported by Crowell and Banks (1993) and were within the previously reported error range required for safe control of human locomotion (Cutting, Springer, Braren, & Johnson, 1992). 
In summary, the above results show that different from path perception, heading perception is accurate and not affected by path curvature for all three target-fixation viewing conditions. This supports the claim that observers are able to perceive their heading from optic flow when traveling on a circular path with simulated eye rotation, possibly due to the fact that heading perception during translation and rotation needs only the removal but not an exact estimation of the source of rotation in the flow field. Furthermore, the similar and accurate heading performance on all three display conditions is consistent with our previous finding that observers can perceive heading from a sequence of velocity fields without access to flow lines or acceleration information (Li et al., 2009, 2006). 
General discussion
Combining the results from the two experiments, we can draw several conclusions. Regarding path perception, the similar path performance on all three display conditions indicates that observers do not use flow lines or rigid points on the ground as reference objects for path perception. Furthermore, the path errors observed for the three target-fixation viewing conditions do not converge with the vector normal predictions, indicating that observers do not rely on the vector normal strategy for path perception. Path performance is only accurate for the gaze-along-heading viewing condition, showing that perceiving path from optic flow requires you to look where you are going but not where you want to go. 
Regarding heading perception, the similar heading performance across the three types of displays and the robust heading estimation for all three target-fixation viewing conditions show that observers can perceive heading from a sequence of velocity fields when traveling on a circular path with simulated eye rotation. This suggests that heading is directly available from the retinal velocity field and is a more robust cue than path trajectory information for the online control of self-motion. Indeed, in a separate study in which we asked participants to use a joystick to control the curvature of their path of travel to steer toward a target, we found that participants aligned their perceived heading, rather than their path trajectory, with the goal (Li, Stone, & Chan, 2008).
Regarding the relationship between path and heading perception, the inaccurate path but accurate heading perception for the three target-fixation viewing conditions suggests that heading and path perception are distinct processes. This is supported by recent human brain-imaging findings showing that the cortical areas processing path information are distinct from those processing heading information (Field, Wilkie, & Wann, 2007; Wall & Smith, 2008). It is also supported by our previous finding that when the velocity field does not contain sufficient information to support the removal of rotation for accurate heading perception, observers rely on flow lines or higher order optic flow information to perceive path and then derive heading as the tangent to the path (Li et al., 2009). In a separate study in which we asked participants to use a joystick to steer and align the vehicle orientation with their perceived heading while traveling on a circular path under random perturbations to the vehicle orientation, we also found that providing visual path information increased the heading control gain at low frequencies (<0.3 Hz) and reduced the response delay (Peng, Stone, & Li, 2008).
Finally, although the similar path performance across the three display conditions in Experiment 1 shows that observers did not use rigid environmental points in the scene for path perception, observers might still use a salient reference object for path perception if one were added to the scene. Bertin and Israel (2005) showed that when the display contained simulated observer body rotation, adding even one salient reference object to the scene improved observers' drawings of their traveled path trajectory. However, whether observers in their study used the reference object to update their perceived heading (Li & Warren, 2000) or their self-position (Li et al., 2006) over time for path perception remains an open question for future research.
Acknowledgments
This study was supported by a grant from the Research Grants Council of Hong Kong (HKU 7478/08H) to L. Li. We thank two anonymous reviewers and Diederick Niehorster for their helpful comments on a previous draft of this article and Lee Stone for helpful discussion. 
Commercial relationships: none. 
Corresponding author: Li Li. 
Email: lili@hku.hk. 
Address: Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong SAR. 
Footnotes
1  Although the flow lines on the path are perfectly vertical when the target on the path is 45° away from the initial heading (see Wann & Swapp, 2000), for the three path curvatures tested in the experiment, our simulations of the flow field over the 1-s trial duration showed no perceptible difference in the vertical flow lines when the target on the path was 30° or 45° away from the initial heading. We thus placed the target on the path at 30° away from the initial heading to make sure that the path trajectory at the probe distance of 10 m was not too close to the edge of the screen.
References
Banks M. S. Ehrlich S. M. Backus B. T. Crowell J. A. (1996). Estimating heading during real and simulated eye movements. Vision Research, 36, 431–443. [CrossRef] [PubMed]
Bertin R. J. V. Israel I. (2005). Optic-flow-based perception of two-dimensional trajectories and the effects of a single landmark. Perception, 34, 453–475. [CrossRef] [PubMed]
Bertin R. J. V. Israel I. Lappe M. (2000). Perception of two-dimensional, simulated ego-motion trajectories from optic flow. Vision Research, 40, 2951–2971. [CrossRef] [PubMed]
Bruss A. R. Horn B. P. (1983). Passive navigation. Computer Vision, Graphics, and Image Processing, 21, 3–20. [CrossRef]
Burr D. C. (1981). Temporal summation of moving images by the human visual system. Proceedings of the Royal Society of London B: Biological Science, 211, 321–339. [CrossRef]
Crowell J. A. Banks M. S. (1993). Perceiving heading with different retinal regions and types of optic flow. Perception & Psychophysics, 53, 325–337. [CrossRef] [PubMed]
Cutting J. E. (1996). Wayfinding from multiple sources of local information in retinal flow. Journal of Experimental Psychology: Human Perception and Performance, 22, 1299–1313. [CrossRef]
Cutting J. E. Springer K. Braren P. A. Johnson S. H. (1992). Wayfinding on foot from information in retinal, not optic flow. Journal of Experimental Psychology: General, 121, 41–72. [CrossRef] [PubMed]
Cutting J. E. Vishton P. M. Flückiger M. Baumberger B. Gerndt J. D. (1997). Heading and path information from retinal flow in naturalistic environments. Perception & Psychophysics, 59, 426–441. [CrossRef] [PubMed]
Ehrlich S. M. Beck D. M. Crowell J. A. Freeman T. C. Banks M. S. (1998). Depth information and perceived self-motion during simulated gaze rotations. Vision Research, 38, 3129–3145. [CrossRef] [PubMed]
Fermuller C. Aloimonos Y. (1995). Direct perception of three-dimensional motion from patterns of visual motion. Science, 270, 1973–1976. [CrossRef] [PubMed]
Field D. T. Wilkie R. M. Wann J. P. (2007). Neural systems in the visual control of steering. Journal of Neuroscience, 27, 8002–8010. [CrossRef] [PubMed]
Gibson J. J. (1950). The perception of the visual world. Boston: Houghton Mifflin.
Grigo A. Lappe M. (1999). Dynamical use of different sources of information in heading judgments from retinal flow. Journal of the Optical Society of America A, 16, 2079–2091. [CrossRef]
Heeger D. J. Jepson A. D. (1990). Visual perception of three-dimensional motion. Neural Computation, 2, 129–137. [CrossRef]
Hildreth E. (1992). Recovering heading from visually-guided navigation. Vision Research, 32, 1177–1192. [CrossRef] [PubMed]
Kim N. G. Turvey M. T. (1999). Eye movements and a rule for perceiving direction of heading. Ecological Psychology, 11, 233–248. [CrossRef]
Koenderink J. J. van Doorn A. J. (1987). Facts on optic flow. Biological Cybernetics, 56, 247–254. [CrossRef] [PubMed]
Land M. F. Lee D. N. (1994). Where we look when we steer. Nature, 369, 742–744. [CrossRef] [PubMed]
Lappe M. Rauschecker J. P. (1993). A neural network for the processing of optic flow from ego-motion in higher animals. Neural Computation, 5, 374–391. [CrossRef]
Lee D. N. Lishman R. (1977). Visual control of locomotion. Scandinavian Journal of Psychology, 18, 224–230. [CrossRef] [PubMed]
Li L. Chen J. Peng X. (2009). Influence of visual path information on human heading perception during rotation. Journal of Vision, 9, (3):29, 1–14, http://www.journalofvision.org/content/9/3/29, doi:10.1167/9.3.29. [PubMed] [Article] [CrossRef]
Li L. Stone L. S. Chan K. S. (2008). Visual control of steering toward a goal uses heading but not path information [Abstract]. Journal of Vision, 8, (6):1162, 1162a, http://www.journalofvision.org/content/8/6/1162, doi:10.1167/8.6.1162. [CrossRef]
Li L. Sweet B. T. Stone L. S. (2006). Humans can perceive heading without visual path information. Journal of Vision, 6, (9):2, 874–881, http://www.journalofvision.org/content/6/9/2, doi:10.1167/6.9.2. [PubMed] [Article] [CrossRef]
Li L. Warren W. H. (2000). Perception of heading during rotation: Sufficiency of dense motion parallax and reference objects. Vision Research, 40, 3873–3894. [CrossRef] [PubMed]
Li L. Warren W. H. (2004). Path perception during rotation: Influence of instructions, depth range, and dot density. Vision Research, 44, 1879–1889. [CrossRef] [PubMed]
Longuet-Higgins H. C. Prazdny K. (1980). The interpretation of a moving retinal image. Proceedings of the Royal Society of London B, 208, 385–397. [CrossRef]
Peng X. Z. Stone L. S. Li L. (2008). Humans can control heading independent of visual path information [Abstract]. Journal of Vision, 8, (6):1160, 1160a, http://www.journalofvision.org/content/8/6/1160, doi:10.1167/8.6.1160. [CrossRef]
Perrone J. A. Stone L. S. (1994). A model of self-motion estimation within primate extrastriate visual cortex. Vision Research, 34, 2917–2938. [CrossRef] [PubMed]
Regan D. Beverly K. I. (1982). How do we avoid confounding the direction we are looking and the direction we are moving? Science, 215, 194–196. [CrossRef] [PubMed]
Rieger J. H. (1983). Information in optical flows induced by curved paths of observation. Journal of the Optical Society of America, 73, 339–344. [CrossRef] [PubMed]
Rieger J. H. Lawton D. T. (1985). Processing differential image motion. Journal of the Optical Society of America A, 2, 354–360. [CrossRef]
Royden C. S. (1994). Analysis of misperceived observer motion during simulated eye rotations. Vision Research, 34, 3215–3222. [CrossRef] [PubMed]
Royden C. S. (1997). Mathematical analysis of motion-opponent mechanisms used in the determination of heading and depth. Journal of the Optical Society of America A, 14, 2128–2143. [CrossRef]
Royden C. S. Banks M. S. Crowell J. A. (1992). The perception of heading during eye movements. Nature, 360, 583–585. [CrossRef] [PubMed]
Stone L. S. Ersheid R. (2006). Time course of human sensitivity to visual acceleration. Paper presented at the Society of Neuroscience 36th Annual Meeting, San Diego, CA.
Stone L. S. Perrone J. A. (1997). Human heading estimation during visually simulated curvilinear motion. Vision Research, 37, 573–590. [CrossRef] [PubMed]
Turano K. Wang X. (1994). Visual discrimination between a curved and straight path of self motion: Effects of forward speed. Vision Research, 34, 107–114. [CrossRef] [PubMed]
van den Berg A. V. (1992). Robustness of perception of heading from optic flow. Vision Research, 32, 1285–1296. [CrossRef] [PubMed]
van den Berg A. V. (1996). Judgments of heading. Vision Research, 36, 2337–2350. [CrossRef] [PubMed]
van den Berg A. V. Beintema J. A. Frens M. A. (2001). Heading and path percepts from visual flow and eye pursuit signals. Vision Research, 41, 3467–3486. [CrossRef] [PubMed]
Wall M. B. Smith A. T. (2008). The representation of egomotion in the human brain. Current Biology, 18, 1–4. [CrossRef] [PubMed]
Wann J. P. Land M. (2000). Steering with or without the flow: Is the retrieval of heading necessary? Trends in Cognitive Sciences, 4, 319–324. [CrossRef] [PubMed]
Wann J. P. Swapp D. K. (2000). Why you should look where you are going. Nature Neuroscience, 3, 647–648. [CrossRef] [PubMed]
Warren W. H. Mestre D. R. Blackwell A. W. Morris M. W. (1991). Perception of circular heading from optical flow. Journal of Experimental Psychology: Human Perception and Performance, 17, 28–43. [CrossRef] [PubMed]
Warren W. H. Morris M. W. Kalish M. (1988). Perception of translation heading from optic flow. Journal of Experimental Psychology: Human Perception and Performance, 14, 646–660. [CrossRef] [PubMed]
Watson A. B. Turano K. (1995). The optimal motion stimulus. Vision Research, 35, 325–336. [CrossRef] [PubMed]
Wilkie R. M. Kountouriotis G. K. Merat N. Wann J. P. (2010). Using vision to control locomotion: Looking where you want to go. Experimental Brain Research, 204, 539–547. [CrossRef] [PubMed]
Wilkie R. M. Wann J. P. Allison R. S. (2008). Active gaze, visual look-ahead, and locomotor control. Journal of Experimental Psychology: Human Perception and Performance, 34, 1150–1164. [CrossRef] [PubMed]
Zemel R. S. Sejnowski T. J. (1998). A model for encoding multiple object motions and self-motion in area MST of primate visual cortex. Journal of Neuroscience, 18, 531–547. [PubMed]
Figure 1
An illustration of the relationship between the instantaneous heading direction and the path trajectory for traveling on a curved path.
Figure 2
Illustrations of heading-dependent path perception from optic flow: (a) Estimating path curvature from translation and rotation components in the flow field and then perceiving path relative to heading. (b) Estimating path curvature from the change of heading with respect to the translation speed and then perceiving path relative to heading. (c) Perceiving path by updating heading with respect to a reference object (e.g., a rigid environmental point) in the scene.
Figure 3
Illustrations of heading-independent path perception from optic flow. (a) Perceiving path by identifying the flow line that passes directly beneath. (b) Perceiving path by identifying the reversal boundary in the flow field. (c) Perceiving path by locating the center of the path using the normals to any two velocity vectors on the ground. (d) Perceiving path by integrating vertical flow lines in the flow field. (e) Perceiving path by updating self-position with respect to a reference object (e.g., a rigid environmental point) in the scene.
Figure 4
Illustrations of the display conditions: (a) A textured ground and (b) a random-dot ground. The fixation cross appears at the center of the display at the beginning of the trial.
Figure 5
A schematic illustration of the bird's-eye view of the five viewing conditions. The dotted green line indicates the simulated observer gaze direction on a target at eye height on the path at 30° away from the initial heading (target-on-path condition); the dotted blue line indicates the gaze direction on a target at eye height placed at 15° outside of the path at the beginning of the trial (target-outside-path condition); the dotted magenta line indicates the gaze direction on a target at eye height placed at 15° inside of the path at the beginning of the trial (target-inside-path condition); the solid cyan line indicates the gaze direction along the instantaneous heading (gaze-along-heading condition); and the solid black line indicates the gaze direction along the Z-axis of the environment (gaze-along-Z-axis condition).
Figure 6
Sample velocity fields of the random-dot ground display (path curvature = −0.035 m−1) for the (a) target-on-path, (b) target-outside-path, (c) target-inside-path, (d) gaze-along-heading, and (e) gaze-along-Z-axis viewing conditions. The cross at the center of the display indicates the simulated observer gaze direction. The solid red circle indicates heading at the end of the trial, and the black horizontal line indicates the extent to which heading drifts throughout the trial. Each of the other black lines corresponds to a velocity vector associated with a dot (blue) in the environment, and the red curve indicates the trajectory path of traveling.
Figure 7
Mean path error as a function of path curvature for the three display conditions in the (a) target-on-path, (b) target-outside-path, (c) target-inside-path, (d) gaze-along-heading, and (e) gaze-along-Z-axis viewing conditions. The dotted horizontal gray line indicates perfect performance. The dashed black line indicates the predicted performance for locating the path center using the normals to any two velocity vectors on the ground, and the dotted black line indicates the predicted performance for perceiving travel on a straight path with zero rotation. Error bars are SEs across seven participants.
Figure 8
Mean path error averaged across the three display conditions for the five viewing conditions. The dotted horizontal gray line indicates perfect performance. Error bars are SEs across seven participants.
Figure 9
Mean heading error as a function of path curvature for the three display conditions in the (a) target-on-path, (b) target-outside-path, and (c) target-inside-path viewing conditions. The dotted horizontal gray line indicates perfect performance. Error bars are SEs across eight participants.
Figure 10
Mean heading error averaged across the three display conditions for the three viewing conditions. The dotted horizontal gray line indicates perfect performance. Error bars are SEs across eight participants.