Research Article | March 2009
Influence of visual path information on human heading perception during rotation
Li Li, Jing Chen, Xiaozhe Peng
Journal of Vision March 2009, Vol. 9, 29. https://doi.org/10.1167/9.3.29
Abstract

How does visual path information influence people's perception of their instantaneous direction of self-motion (heading)? We have previously shown that humans can perceive heading without direct access to visual path information. Here we vary two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points, to investigate the conditions under which visual path information influences human heading perception. The display simulated an observer traveling on a circular path. Observers used a joystick to rotate their line of sight until they deemed it aligned with their true heading. Four FOV sizes (110 × 94°, 48 × 41°, 16 × 14°, 8 × 7°) and depth ranges (6–50 m, 6–25 m, 6–12.5 m, 6–9 m) were tested. Consistent with our computational modeling results, heading bias increased with the reduction of FOV or depth range when the display provided a sequence of velocity fields but no direct path information. When the display provided path information, heading bias was not influenced as much by the reduction of FOV or depth range. We conclude that human heading and path perception involve separate visual processes. Path helps heading perception when the display does not contain enough optic-flow information for heading estimation during rotation.

Introduction
Accurate perception and control of self-motion are essential for humans to successfully move around in the world. Two defining features of human locomotion are one's instantaneous direction of self-motion (heading) and one's future trajectory of self-motion (path). They coincide when traveling on a straight path but diverge when traveling on a curved path, because in the latter case heading is the tangent to one's current curved path (Figure 1).
Figure 1
 
An illustration of the relationship between heading and path for (a) traveling on a straight path and (b) traveling on a curved path.
Theoretically, one can recover heading from a single 2D retinal velocity field of the visual motion of the environment (optic flow) experienced during self-motion. When traveling on a straight path with no eye or body rotation, it has long been known that the focus of expansion (FOE) in the resulting radial retinal flow pattern indicates one's translational heading (Gibson, 1950). Under more complex (but natural) conditions, such as when traveling on a curved path or rotating one's head or eyes, the retinal flow pattern is no longer radial because the rotation shifts the FOE away from the heading direction (Regan & Beverly, 1982). Nevertheless, mathematically, one can still rely on other sources of information in optic flow (such as global flow motion and motion parallax) to compensate for the rotation and recover one's heading from a single 2D retinal velocity field (e.g., Bruss & Horn, 1983; Cutting, 1996; Fermüller & Aloimonos, 1995; Heeger & Jepson, 1990; Hildreth, 1992; Koenderink & van Doorn, 1987; Longuet-Higgins & Prazdny, 1980; Rieger & Lawton, 1985). Indeed, previous psychophysical studies have shown that humans can estimate their heading within 1° of visual angle during simulated translation (e.g., Warren, Morris, & Kalish, 1988), and within 2° of visual angle during translation and rotation, regardless of whether the rotation is due to simulated eye movement or path rotation (e.g., Li, Sweet, & Stone, 2006; Li & Warren, 2000).
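To make this concrete, the instantaneous retinal flow of an environmental point at depth Z can be written in a standard form (following Longuet-Higgins & Prazdny, 1980; the sign conventions below are one common choice). For an image point (x, y) in focal-length units, observer translation (T_x, T_y, T_z), and rotation (ω_x, ω_y, ω_z):

$$u = \frac{-T_x + x T_z}{Z} + \omega_x x y - \omega_y (1 + x^2) + \omega_z y$$

$$v = \frac{-T_y + y T_z}{Z} + \omega_x (1 + y^2) - \omega_y x y - \omega_z x$$

Only the translational terms depend on the depth Z, so points at different depths along nearby lines of sight differ only in their translational motion; this depth-dependent difference (motion parallax) is what makes it possible, in principle, to discount the rotational terms and recover heading from a single velocity field.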
Although the instantaneous velocity field during translation and rotation is associated with one heading direction, it is nevertheless consistent with a continuum of path scenarios ranging from traveling on a straight path with eye or head rotation to a circular path with no eye or head rotation (Banks, Ehrlich, Backus, & Crowell, 1996; Li & Warren, 2004; Stone & Perrone, 1997; van den Berg, 1996). This path ambiguity can only be resolved with information beyond a single retinal velocity field such as acceleration or motion over time of environmental points (Royden, 1994) or extra-retinal signals to determine whether the rotation in the flow field is due to eye, head, or path rotation (Banks et al., 1996; Crowell & Andersen, 2001). In addition, Li and Warren (2000, 2004) have proposed that a large field of view (112°H × 95°V) and realistic scenes allow observers to use motion parallax in the retinal velocity field to recover the instantaneous heading in the retino-centric coordinate system. The path through the world is then recovered by updating the heading with respect to the reference objects in the scene. On the other hand, it has also been proposed that humans can recover path relying on the extended streamline trajectories of individual environmental points in the retinal flow without recovering heading or integrating extra-retinal signals (Kim & Turvey, 1999; Wann & Land, 2000; Wann & Swapp, 2000). The supporting evidence for this claim comes from a study showing that when the display simulated an observer traveling on a curved path, observers could correctly direct their gaze toward their future path but not their current heading direction (Wilkie & Wann, 2006). However, using a dynamic visual display in which environmental points were periodically redrawn to remove the extended streamline trajectories and dot acceleration information, Li et al. (2006) found that humans can perceive heading during curvilinear motion without direct access to visual path information, supporting the claim that heading is directly available from the retinal velocity field for the control of locomotion and steering. 
While heading and path might involve two separate perceptual processes, such that people can perceive one without perceiving the other, heading and path can also be derived from each other. As mentioned above, when traveling on a curved path, heading is the tangent to the current path (Figure 1), so observers can infer heading as soon as they perceive path. Conversely, when observers perceive heading from optic flow, they can recover path by updating heading with respect to reference objects in the scene (Li & Warren, 2000, 2004). In the present study, we investigated the conditions under which visual path information influences human heading perception by varying two key parameters for estimating heading from optic flow, the field of view (FOV) and the depth range of environmental points (Figure 2). The display simulated an observer traveling on a circular path through a random-dot 3D cloud. As in Li et al. (2006), two viewing conditions were tested: in the static-scene condition, dots were displayed until they left the field of view; thus, the display provided optic flow as well as visual cues beyond a single optic-flow velocity field (such as the extended streamline trajectories of dot motion and dot acceleration) that allowed for path perception. In the dynamic-scene condition, the dot lifetime was limited to 100 ms (6 frames at 60 Hz) to match the integration time of human motion processing (Burr, 1981; Watson & Turano, 1995); thus, the display provided a sequence of velocity fields but no visual cues that could allow one to recover path independent of heading. On each trial, observers used a joystick to rotate their line of sight until they deemed it aligned with their true heading.
Figure 2
 
A schematic illustration of the FOV and the depth range manipulations.
Mathematically, a large FOV and a large depth range of environmental points increase the magnitude of motion parallax in the flow field, which allows the visual system to compensate for the rotation and determine the instantaneous heading accurately (Koenderink & van Doorn, 1987). The influence of the FOV and the dot depth range on heading estimation from optic-flow velocity fields is illustrated in Figures 3 and 4, respectively. Figure 3a shows a velocity field generated when an observer is traveling on a curved path at the translation and rotation rates of 8 m/s and 16°/s toward a depth plane at 6 m. The square indicates the heading direction, which is at the center, and the circle indicates a pseudo-FOE, i.e., the zero-velocity point in the velocity field, similar to the FOE in the radial expansion flow pattern generated when an observer is traveling on a straight path at the speed of 8 m/s in the direction of the pseudo-FOE (Figure 3b). Given that the FOE indicates the heading direction for pure translation along a straight path (Gibson, 1950), if observers could not resolve the rotational component in the flow field, they would use the 2D pseudo-FOE-based strategy as a crude estimator of heading. Figures 3c–3f illustrate the difference between Figures 3a and 3b at four FOV sizes. We can see that a large FOV increases the magnitude of motion parallax at the peripheral regions of the display. As the FOV decreases, the difference between Figures 3a and 3b (i.e., the rotational component in the flow field) becomes less noticeable. For the effect of depth range, Figures 4a–4d show a velocity field for traveling on a circular path (8 m/s and 16°/s) with dot velocity vectors at four different depth ranges, and Figures 4e–4h show a velocity field for traveling on a straight path (8 m/s) toward the pseudo-FOE at the nearest depth plane (6 m in this case) at the same four depth ranges. As the depth range decreases, the velocity field in the upper row becomes more radial, and the difference between the corresponding upper and lower panels becomes less noticeable. As a consequence, observers would become more likely to ignore the rotation and rely on the pseudo-FOE for heading estimation (i.e., generating a heading bias toward the pseudo-FOE in the display). Indeed, previous studies have found that at small FOV sizes or when there is no depth variation in the environmental points, heading performance converges to the pseudo-FOE-based prediction (e.g., Grigo & Lappe, 1999; Stone & Perrone, 1997).
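The velocity field of Figure 3a and the location of the pseudo-FOE can be reproduced with a few lines of code. The sketch below is our own illustration (not the authors' stimulus code), using the flow equations given in the Introduction for pure forward translation plus yaw rotation, and a small-angle approximation for the pseudo-FOE location:

```python
import numpy as np

def flow_field(x, y, T=8.0, omega_deg=16.0, Z=6.0):
    """Retinal velocity (u, v) for observer translation T (m/s) along the
    line of sight plus yaw rotation omega (deg/s), for dots at depth Z (m).
    x, y are image coordinates in focal-length units (tangent of eccentricity)."""
    omega = np.deg2rad(omega_deg)
    u = x * T / Z - omega * (1.0 + x**2)   # translational + rotational flow
    v = y * T / Z - omega * x * y
    return u, v

# 150 dots on the 6-m depth plane within a 110 x 94 deg FOV, as in Figure 3a.
rng = np.random.default_rng(0)
x = rng.uniform(-np.tan(np.deg2rad(55.0)), np.tan(np.deg2rad(55.0)), 150)
y = rng.uniform(-np.tan(np.deg2rad(47.0)), np.tan(np.deg2rad(47.0)), 150)
u, v = flow_field(x, y)

# Pseudo-FOE: to first order (ignoring the x**2 rotation term), the horizontal
# flow at the nearest depth plane vanishes at x = omega * Z / T.
T, omega, Z = 8.0, np.deg2rad(16.0), 6.0
print(np.rad2deg(np.arctan(omega * Z / T)))   # ~11.8 deg; cf. the +/-11.84 deg
                                              # bound cited in the Modeling section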
Figure 3
 
An illustration of the FOV effect. (a) A sample velocity field (110° FOV) produced by traveling on a circular path (8 m/s and 16°/s) with 150 dots sampled at the depth of 6 m. Heading is at the center as indicated by the square. The solid circle shows the singularity (pseudo-FOE) and represents the optimal 2D pseudo-FOE-based heading estimation performance (Grigo & Lappe, 1999; Royden, Crowell, & Banks, 1994; Stone & Perrone, 1997). (b) A radial flow velocity field (110° FOV) produced by traveling on a straight path (8 m/s) toward the direction of the pseudo-FOE in (a). (c)–(f) Velocity fields produced by subtracting (b) from (a) at four different FOV sizes (110°, 48°, 16°, and 8°).
Figure 4
 
An illustration of the depth range effect. (a)–(d) A sample velocity field (110° FOV) produced by traveling on a circular path (8 m/s and 16°/s) with 150 dots sampled at four depth ranges (6–50 m, 6–25 m, 6–12.5 m, and 6–9 m), respectively. Heading is at the center as indicated by the square. The solid circle shows the singularity (pseudo-FOE) generated at the nearest depth plane (6 m). (e)–(h) A radial flow velocity field (110° FOV) produced by traveling on a straight path (8 m/s) toward the direction of the pseudo-FOE in (a)–(d) with dots sampled at the same four depth ranges, respectively.
Although the effects of the FOV (Grigo & Lappe, 1999; Li & Warren, 2004) and the depth range (van den Berg & Brenner, 1994; but see Ehrlich, Beck, Crowell, Freeman, & Banks, 1998) on heading perception during rotation have been reported before, no attempt has been made to systematically evaluate the influence of these two parameters on heading estimation during rotation. We thus tested four FOV sizes (110°H × 94°V, 48°H × 41°V, 16°H × 14°V, and 8°H × 7°V) and four depth ranges (6–50 m, 6–25 m, 6–12.5 m, and 6–9 m) in the current study. We predicted that if visual path information does not affect heading perception during rotation, heading performance would decrease with the reduction of the FOV or the depth range for both the static- and the dynamic-scene displays, as illustrated in Figures 3 and 4. On the other hand, if observers can use visual path information to derive heading when there is insufficient motion parallax information in optic-flow velocity fields for accurate heading recovery, heading performance would be better on the static- than on the dynamic-scene displays at small FOV sizes or depth ranges.
General methods
Visual stimuli
The display simulated an observer traveling on a circular path (yaw rate: 8–16°/s) through a random-dot 3D cloud at six translation speeds (4, 6, 8, 9, 12, and 16 m/s) under two viewing conditions:
  •  
    static scene in which dots were displayed until they left the field of view, and
  •  
    dynamic scene in which dot lifetime was limited to 100 ms (6 frames at 60 Hz) to match the known psychophysical integration time of human motion processing (Burr, 1981; Watson & Turano, 1995).
In the case of the static scene, there are several ways to derive one's path. The path can be extracted from the extended dot motion or dot acceleration (Royden, 1994), from the extended streamline trajectories of individual environmental points (Kim & Turvey, 1999; Wann & Land, 2000; Wann & Swapp, 2000), or from updating one's initial and final heading with respect to the fixed scene (Li & Warren, 2000, 2004). In the case of the dynamic scene, the dot lifetime was chosen to be as short as possible without degrading motion perception per se, so that the display provided a sequence of velocity fields: neither the velocity vectors nor the associated environmental points persisted over time. Although it is mathematically possible to derive acceleration information from the six-frame sequence of motion for path perception, Stone and Ersheid (2006) have reported that within such a limited time frame, humans cannot reliably perceive acceleration. Thus, the dynamic-scene displays did not provide any dot displacement cues or higher order derivatives of motion that could be used over time to determine one's path independent of heading (see also Li et al., 2006).
The 3D cloud was composed of white dots (3 × 3 pixels) randomly distributed on a black background (luminance contrast: +99%). The dots were generated within a pyramidal frustum subtending the size of the field of view (Figure 2). The frustum moved with the simulated line of sight (i.e., with the vehicle orientation), which was controlled by the joystick displacement. The dots were placed in the frustum such that about the same number of dots at each distance in depth was displayed on each frame. The number of visible dots per frame was also kept relatively constant throughout the trial: if a certain number of dots moved outside of the frustum in one frame, the same number of dots was regenerated in the frustum in that frame. The visual stimuli were generated on a Dell Precision Workstation 670n with an NVIDIA Quadro FX graphics card at a frame rate of 60 Hz and were rear-projected on a large screen (110°H × 94°V) with an Epson EMP-9300 LCD projector (native resolution: 1400 × 1050 pixels) at a 60-Hz refresh rate. Observers viewed the visual stimuli from a chin rest at a distance of 56.5 cm from the screen. We defocused the projector to blur the black grid on the screen caused by the pixelation of the LCD projector; the amount of defocus was small enough to prevent any noticeable degradation of the visual stimuli. Eighteen translation and rotation rate combinations were simulated for the circular traveling path: 4 m/s and ±8 deg/s, 6 m/s and ±8 deg/s, 6 m/s and ±12 deg/s, 8 m/s and ±8 deg/s, 8 m/s and ±16 deg/s, 9 m/s and ±12 deg/s, 12 m/s and ±12 deg/s, 12 m/s and ±16 deg/s, and 16 m/s and ±16 deg/s.
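A minimal sketch of the dot-management logic just described (our own illustrative code, not the authors' stimulus program; camera motion and the moving frustum are omitted, and the frustum is treated as fixed for one frame):

```python
import numpy as np

FRAME_RATE = 60                          # Hz
DOT_LIFETIME_FRAMES = 6                  # 100 ms, dynamic-scene condition only
HALF_FOV_H = np.tan(np.deg2rad(55.0))    # 110 deg horizontal FOV
HALF_FOV_V = np.tan(np.deg2rad(47.0))    # 94 deg vertical FOV

def sample_dot(rng, near=6.0, far=50.0):
    """Sample one dot inside the viewing frustum, drawing depth uniformly so
    that each distance in depth is about equally represented."""
    z = rng.uniform(near, far)
    x = rng.uniform(-HALF_FOV_H, HALF_FOV_H) * z
    y = rng.uniform(-HALF_FOV_V, HALF_FOV_V) * z
    return np.array([x, y, z])

def update_dots(dots, life, rng, dynamic):
    """Advance one frame: cull dots that left the frustum (both conditions)
    or whose lifetime expired (dynamic scene only), then regenerate the same
    number of dots so the visible dot count stays constant."""
    if dynamic:
        life -= 1
    in_view = ((np.abs(dots[:, 0] / dots[:, 2]) < HALF_FOV_H) &
               (np.abs(dots[:, 1] / dots[:, 2]) < HALF_FOV_V) &
               (dots[:, 2] > 0))
    keep = in_view & (life > 0) if dynamic else in_view
    for i in np.flatnonzero(~keep):
        dots[i] = sample_dot(rng)
        life[i] = DOT_LIFETIME_FRAMES
    return dots, life
```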
Procedure
The procedure in this study was similar to that in Li et al. (2006). On each trial, participants were asked to imagine that they were looking through the windshield of a car traveling on a circular path. They used a joystick (B&G Systems, FlyBox) to point their virtual line of sight toward their perceived heading in the simulated scene, i.e., until they believed that they were looking straight in the instantaneous direction they were traveling, which was equivalent to their straight-ahead viewpoint out of the windshield of their virtual car (see the curvilinear paradigm in Figure 1 in Li et al., 2006). The initial virtual line of sight was randomly offset from the initial heading. Participants started the trial with a trigger pull, and once they felt properly aligned, they ended the trial with another trigger pull. Each trial generally lasted less than 20 s. In 2D display screen coordinates, the task was equivalent to rotating the initially misaligned heading direction on the screen until it was aligned with the center of the screen. The final angle between the participant's virtual line of sight and heading, defined as the heading angle, was recorded as the indicator of heading-estimation performance. The advantage of our interactive method-of-adjustment task over the traditional passive heading judgment task is that it allows observers to actively explore the flow field both inside and outside the path curvature and thus makes it easier for them to find their "straight-ahead" viewpoint (i.e., heading).
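The adjustment loop of one trial might be sketched as follows (purely illustrative: `read_joystick`, `trigger_pulled`, and `render_scene` are hypothetical stand-ins for the FlyBox and display interfaces, and the control gain is our assumption; the offset range is the one used in Experiment 1):

```python
import numpy as np

# Hypothetical I/O stubs standing in for the joystick and display interfaces.
def read_joystick():            # normalized deflection in [-1, 1]
    return 0.0

def trigger_pulled():           # True when the observer pulls the trigger
    return True

def render_scene(gaze_deg):     # redraw the flow field from the new gaze
    pass

GAIN = 10.0        # deg/s of gaze rotation per unit deflection (assumed)
DT = 1.0 / 60.0    # 60-Hz frame interval

def run_trial(true_heading_deg, rng=np.random.default_rng()):
    """One interactive adjustment trial; the final gaze-heading angle is the
    recorded measure of heading-estimation performance."""
    offset = rng.choice([-1, 1]) * rng.uniform(4.0, 7.0)
    gaze_deg = true_heading_deg + offset
    while not trigger_pulled():
        gaze_deg += read_joystick() * GAIN * DT   # rotate virtual line of sight
        render_scene(gaze_deg)
    return gaze_deg - true_heading_deg            # heading angle (error)
```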
Participants viewed both the static- and the dynamic-scene displays monocularly with their dominant eye. No fixation point in the display was provided to remove any extraneous relative motion in the display. To make sure that participants understood the task and became familiar with the joystick control dynamics, they received practice trials with the static-scene displays only before the experiment. For the practice trials, the dots were placed in the depth range of 2–50 m in the frustum. Participants received feedback, a blue tunnel indicating the shape of the circular path they were traveling on, during practice. We used motion parameters different from those in the actual experiment to avoid the possibility that participants might base their heading estimation on memorized 2D flow characteristics from practice trials. No feedback was given during the actual experiment. 
Modeling
To quantitatively assess the effects of the FOV and the dot depth range on heading perception, we performed simulations using a modified heading estimation model developed by Perrone and Stone (1994). The model is composed of two layers. The first layer consists of sets of speed- and direction-tuned MT-like motion sensors tiling the entire visual field. They send inputs to the second layer, which consists of sets of MST-like heading detectors. Because most MSTd neurons respond to various combinations of translational and rotational flow patterns (Graziano, Andersen, & Snowden, 1994), each MST-like heading detector is tuned to a particular heading and rotation combination, and the estimated heading corresponds to the MST-like heading detector with the highest response (see details in Perrone & Stone, 1994). We chose this model to simulate the FOV size and the depth range effects because it shows the general idea that heading can be extracted from a single optic-flow velocity field with a template computation strategy that is biologically plausible. In fact, Perrone and Stone (1998) compared the response of MST-like heading detectors in the model with that of the primate MST neurons under matched visual stimulus conditions and found that the response property of MST neurons could be explained by the response patterns of the MST-like heading detectors. Although this model has been criticized for its gaze stabilization assumption to simplify the estimation of simulated eye rotation (Crowell, 1997), the rotation in our visual displays is entirely due to path curvature with no eye rotation involved. 
Given that as the FOV or the depth range of environmental points decreases, the rotational components in the velocity field become less noticeable (Figures 3 and 4), we made the reduction of the FOV or the depth range act as a rotation-response retarder on the MST-like heading detectors in the output layer, such that the magnitude of rotation (R) each MST-like heading detector responds to changes as a power function of the FOV (F) or the depth range (D), following Stevens' power law for magnitude estimation:

$$R = R_p \left( F / F_{\max} \right)^n \tag{1}$$

or

$$R = R_p \left( D / D_{\max} \right)^n, \tag{2}$$

where $R_p$ is the actual path rotation in the visual display, and $F_{\max}$ and $D_{\max}$ correspond to the FOV and the depth range that have been shown to be adequate for accurate heading estimation during rotation from optic flow. In our simulations, $F_{\max}$ and $D_{\max}$ were set to 60° (H) and 25 m, respectively, based on the previous findings that at the FOV of 60° (H) and the depth range of 25 m, observers can compensate for the rotation in the flow field for accurate heading recovery (Li et al., 2006; Stone & Perrone, 1997). The exponent of the power function (n) was set to 0.2 and 0.4 for the FOV and the depth range, respectively, based on our pilot data.
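A direct transcription of Equations 1 and 2 (a minimal sketch; whether the attenuation factor is capped at 1 for FOVs or depth ranges above F_max and D_max is our assumption, as the text does not say):

```python
def effective_rotation(r_path, fov_deg=None, depth_m=None,
                       f_max=60.0, d_max=25.0, n_fov=0.2, n_depth=0.4):
    """Rotation magnitude an MST-like detector responds to (Equations 1-2).
    The cap at 1.0 (no amplification above F_max / D_max) is an assumption."""
    if fov_deg is not None:
        return r_path * min(fov_deg / f_max, 1.0) ** n_fov
    return r_path * min(depth_m / d_max, 1.0) ** n_depth

# Example: at the smallest FOV (8 deg H), the detectors respond to only about
# 67% of the actual path rotation, biasing heading toward the pseudo-FOE.
print(effective_rotation(16.0, fov_deg=8.0))   # ~10.7 deg/s
```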
The input to the model was a velocity field consisting of a vector at each dot location. The velocity field was computed using the translation–rotation combinations in the visual stimuli. To obtain optimal simulation outputs, we set the speed- and direction-tuning functions of the MT-like motion sensors using the translation rates (4, 6, 8, 9, 12, or 16 m/s) and the depth range (four equidistant depth planes were sampled) used to generate the visual stimuli. For the MST-like heading detectors in the output layer, because the path rotation was about the vertical axis (yaw rotation), heading varied only in the azimuth direction. We thus sampled the heading direction every 1° in the range of −20° to 20° from the center of the display (negative values to the left and positive to the right) along the azimuth, resulting in 41 MST-like heading detectors in the output layer. We limited the heading sampling to this range because the largest heading error, assuming that observers ignored rotation and used the pseudo-FOE at the nearest depth plane (6 m) for heading estimation, was ±11.84° from the display center for the translation–rotation combinations tested. We set the initial path rotation rate that each MST-like detector in the output layer responded to within the range of 8–16°/s of the visual stimuli.
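The winner-take-all readout over the 41 heading templates can be sketched as follows (a drastic simplification of the Perrone–Stone model, not the model itself: the MT-like sensor layer and speed tuning are replaced by a least-squares match of the horizontal flow against each candidate heading, assuming a single known depth plane):

```python
import numpy as np

HEADINGS_DEG = np.arange(-20, 21)   # 41 detectors, 1 deg apart in azimuth

def predicted_u(x, heading_deg, rotation_dps, T=8.0, Z=6.0):
    """Horizontal flow predicted for a candidate heading (azimuth) plus yaw
    rotation, for dots on a single depth plane at Z meters."""
    h = np.tan(np.deg2rad(heading_deg))
    omega = np.deg2rad(rotation_dps)
    return (x - h) * T / Z - omega * (1.0 + x**2)

def estimate_heading(x, u_observed, rotation_dps):
    """Return the template heading with the highest response (smallest
    mismatch with the observed horizontal flow)."""
    responses = [-np.sum((u_observed - predicted_u(x, h, rotation_dps)) ** 2)
                 for h in HEADINGS_DEG]
    return HEADINGS_DEG[int(np.argmax(responses))]

# Matching templates whose rotation is attenuated by the power law above
# (e.g., the smallest FOV) yields an estimate biased toward the pseudo-FOE.
x = np.random.default_rng(1).uniform(-1.4, 1.4, 150)
u = predicted_u(x, heading_deg=0.0, rotation_dps=16.0)      # true flow
print(estimate_heading(x, u, rotation_dps=16.0 * 0.67))     # several deg rightward
```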
Experiment 1: Effect of FOV
Many previous studies have reported that a large FOV allows for improved perception and control of self-motion (see Wolpert, 1990, for a review). For heading perception during rotation, Grigo and Lappe (1999) reported accurate heading judgment with a large frontoparallel plane (90°H × 90°V) whereas Warren and Hannon (1990) found that heading estimation was near the chance level with a small frontoparallel plane (40°H × 32°V). Li and Warren (2004) also found improved heading judgment during simulated eye rotation with a random-dot ground display as the display FOV increased from 69°H × 59°V to 115°H × 94°V. Despite these findings, no attempt has been made to systematically examine the effect of the FOV on heading perception during rotation. 
The purpose of this experiment was to systematically vary the display FOV and examine its effect on heading perception during rotation for the static- and the dynamic-scene displays. We tested four FOV sizes, 110°H × 94°V, 48°H × 41°V, 16°H × 14°V, and 8°H × 7°V. As illustrated in Figure 3, a large FOV increases the magnitude of motion parallax in the peripheral region of the velocity field and allows observers to compensate for the rotation for accurate heading recovery. When the FOV is too small, observers tend to ignore the rotation and rely on the pseudo-FOE in the velocity field for heading estimation. Thus, for the dynamic-scene displays that provided observers with no visual cues beyond velocity fields for heading estimation, we expected that heading performance would drop with the reduction of the FOV, as predicted by our heading estimation model. However, for the static-scene displays in which observers had direct access to both velocity fields and visual path information, even when the FOV was too small for observers to accurately extract heading from the velocity field, they could still use visual path information to perceive path and then infer heading as the path's tangent. Thus, we expected that heading performance would not be influenced by the reduction of the FOV. 
Methods
Participants
Five students and staff (four naive as to the specific goals of the study; three males, two females) between the ages of 22 and 31 at the University of Hong Kong participated in the experiment. All had normal or corrected-to-normal vision.
Visual stimuli
In this experiment, the 3D cloud was composed of 150 white dots that were placed in the depth range of 6–20 m in the viewing pyramidal frustum. Four FOV sizes (110°H × 94°V, 48°H × 41°V, 16°H × 14°V, and 8°H × 7°V) were tested for both the static- and the dynamic-scene displays. Changing the FOV correspondingly changed the size of the pyramidal frustum through which observers were viewing. The number of dots in the frustum was kept constant (Figure 2). The initial virtual line of sight in the simulated scene was randomly offset from the heading within −7° to −4° and 4° to 7° (negative values to the left and positive values to the right). This range was chosen so that the initial heading direction in the display was still visible even for the smallest 8°H × 7°V FOV.
Procedure
Each participant viewed both the static- and dynamic-scene displays at all four FOV sizes. Each participant received two blocks of 54 practice trials (3 trials × 18 translation–rotation combinations) on the static-scene displays with the largest FOV size (110°H × 94°V), followed by eight blocks (2 display types × 4 FOV sizes) of 108 experimental trials (6 trials × 18 translation–rotation combinations). Trials were blocked by display type and FOV size and randomized within blocks. The testing order of display type and FOV size was counterbalanced between participants. The eight blocks of trials were divided into four sessions, with each session lasting about 1 h. Participants were asked to take as much rest as they wanted between blocks, and the four sessions were run over 2–4 days.
Results
Taking advantage of the previous findings that heading estimation during rotation depends on the rotation-to-translation (R:T) rate ratio but not on the individual translation or rotation rates (Li et al., 2006; Stone & Perrone, 1997), we collapsed the heading angle data across these two variables to generate measures of heading error as a function of the R:T ratio, which ranged from −2 to 2. Figures 5a and 5b plot the mean heading error averaged across five observers as a function of R:T ratio for the four FOV sizes for the static and dynamic display conditions, respectively. A flat function indicates that heading error is not affected by the R:T ratio, whereas a positive slope indicates that heading error increases with the R:T ratio. For the static condition, heading errors are much smaller than those predicted from locating the pseudo-FOE in the velocity field (the black line) at all four FOV sizes tested, remaining under 5° at the highest R:T ratio even for the smallest FOV. In contrast, for the dynamic condition, at the two large FOV sizes, heading errors are small, similar to those in the corresponding conditions for the static-scene displays. At the two small FOV sizes, heading errors increase with the R:T ratio, and at the smallest FOV, they are close to those predicted by the pseudo-FOE-based strategy.
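The collapse onto R:T ratio and the slope measure are easy to reproduce (a sketch with hypothetical data layout; the pseudo-FOE slope prediction follows from the small-angle approximation given earlier, error ≈ Z × ratio):

```python
import numpy as np

def rt_ratios(rotation_dps, translation_mps):
    """Signed rotation-to-translation ratio in (deg/s)/(m/s); the 18 tested
    combinations span ratios from -2 to 2."""
    return np.asarray(rotation_dps) / np.asarray(translation_mps)

def error_slope(ratios, heading_errors):
    """Least-squares slope of heading error against R:T ratio. A slope of 0
    means full compensation for rotation; ignoring rotation and locating the
    pseudo-FOE at the nearest plane (Z = 6 m) predicts a slope of roughly
    Z = 6 deg per unit ratio (about 11.8 deg of error at a ratio of 2)."""
    slope, _intercept = np.polyfit(ratios, heading_errors, 1)
    return slope
```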
Figure 5
 
Mean heading error as a function of R:T ratio for the four FOV sizes for (a) the static- and (b) the dynamic-scene displays. The dashed horizontal line indicates perfect performance, and the solid black line indicates performance of zero compensation for the rotation by locating the pseudo-FOE in the nearest depth plane at 6 m. (c) Mean slope of heading error against FOV size for the static- and the dynamic-scene displays and the model simulations. Error bars are SEs across five participants.
To better depict the change in heading error as a function of R:T ratio, Figure 5c plots the mean slope of heading error averaged across five observers for the two display types, as well as the slope of heading error from the model simulation data. For the model simulation of the FOV effect on heading judgment, a complete set of trials (i.e., 6 trials at each of the 18 translation–rotation combinations) was used at each FOV, and a different velocity field was tested on each trial. As we expected, the slopes for the dynamic condition are much closer to those from the model simulations than are those for the static condition. A 2 × 4 (display type × FOV size) repeated-measures ANOVA on the slopes reveals that the main effects of display type and FOV size are significant (F(1,4) = 40.39, p < 0.01 and F(3,12) = 13.39, p < 0.001, respectively), as is the interaction between display type and FOV size (F(3,12) = 19.35, p < 0.0001). The highly significant interaction prompted us to perform separate one-way repeated-measures ANOVAs for the two display conditions. We found that the effect of FOV size was highly significant for the dynamic display condition (F(3,12) = 22.56, p < 0.0001) but not for the static condition (F(3,12) = 2.2, p = 0.14). That is, while the slopes decrease with the increase of the FOV for the dynamic display condition, as predicted by the model, they are not affected by the FOV for the static condition. At the largest FOV of 110°H × 94°V, a separate paired t-test reveals that the unsigned heading errors for the dynamic condition (mean ± SE across 5 observers: 3.15° ± 0.59°) are not significantly different from those for the static condition (2.93° ± 0.40°), t(29) = −0.80, p = 0.43. This is consistent with our previous finding that when the FOV is large enough for observers to resolve the rotational component in the flow field, they can perceive heading without direct access to visual path information (Li et al., 2006).
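One way to run such a 2 × 4 repeated-measures ANOVA (our sketch, not the authors' analysis script; the file and column names are hypothetical placeholders for a long-format table of per-participant slopes):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# df layout (hypothetical): one row per participant x display type x FOV,
# with columns 'subject', 'display', 'fov', and the fitted 'slope'.
df = pd.read_csv("exp1_slopes.csv")
res = AnovaRM(data=df, depvar="slope", subject="subject",
              within=["display", "fov"]).fit()
print(res)   # F and p for display type, FOV size, and their interaction
```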
Discussion
The results indicate that the reduction of the FOV has different effects on heading performance for the static- vs. the dynamic-scene displays. For the static-scene displays, in which observers have direct access to both visual path information and optic-flow velocity fields for heading estimation, heading performance is not influenced by the reduction of the FOV. For the dynamic-scene displays, in which observers have direct access to velocity fields but no direct access to visual path information, the accuracy of heading estimation decreases with the reduction of the FOV. At the smallest FOV tested (8°H × 7°V), heading performance converges to that of the 2D pseudo-FOE-based heading estimation strategy. The pattern of results for the dynamic display condition is consistent with the simulation data of our template heading estimation model, which uses a single velocity field of the optic flow as input. On average, there is about a 2–3° rightward bias in the observed heading errors for both display conditions. This is because, before the experiment, we calibrated observers' sitting position to make sure that their cyclopean eye was centered on the screen, but observers were run monocularly with their dominant eye (i.e., the right eye for all observers in this experiment). Assuming observers used the position of their dominant rather than their cyclopean eye to perceive their "straight-ahead" direction, their perceived "straight-ahead" direction would be shifted 2–3° to the right of the center of the screen and would subsequently cause a 2–3° rightward bias in their judged heading direction.
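A quick geometric check is consistent with this account (assuming a typical interocular distance of about 6.4 cm, so that the dominant eye sits roughly 3.2 cm to the right of the cyclopean point, at the 56.5-cm viewing distance):

$$\theta = \arctan\!\left(\frac{3.2\ \mathrm{cm}}{56.5\ \mathrm{cm}}\right) \approx 3.2^\circ,$$

which is close to the observed 2–3° rightward bias.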
The present results allow us to draw two conclusions. First, human heading and path perception involve separate visual processes. At the large FOV of 110°H × 94°V, accurate heading performance on both the static- and dynamic-scene displays replicates our previous finding that humans can perceive heading from optic flow with no direct access to path information (Li et al., 2006). At the small FOV of 8°H × 7°V, the large heading errors observed with the dynamic-scene displays, together with the still accurate heading performance on the static-scene displays, support the claim that humans can use visual cues beyond the velocity field (e.g., the extended streamline trajectories of the dot displacement, the extended dot motion, or the dot acceleration) to perceive path (Kim & Turvey, 1999; Wann & Land, 2000) and infer heading as the path's tangent. Second, visual path information helps heading performance when the display does not contain sufficient optic-flow information for accurate heading estimation during rotation. At small FOV sizes, when there is an insufficient amount of motion parallax information in the velocity field for observers to compensate for the rotation to recover heading, access to visual path information improves heading judgment. The results from the current experiment indicate that the split point, i.e., the FOV size below which visual path information starts to help heading performance, is near 48°H × 41°V.
Experiment 2: Effect of depth range
Stone and Perrone (1997) have reported that for heading perception during curvilinear motion, placing environmental points at more than one depth plane is necessary for accurate heading judgments. For heading perception during simulated eye rotation, Li and Warren (2000) have also found that a larger depth range of the ground plane provides more differential motion parallax information, which can help observers to compensate for the rotational component in the optic-flow velocity field. 
In this experiment, we systematically varied the depth range of environmental points placed in the viewing frustum. The purpose was to examine the effect of the dot depth range on heading perception during rotation for the static- and the dynamic-scene displays. We tested four depth ranges, 6–50 m, 6–25 m, 6–12.5 m, and 6–9 m. As illustrated in Figure 4, a large depth range helps observers to differentiate the velocity field of translation and rotation from that of pure translation and thus facilitates accurate heading judgment. With the reduction of the depth range, the velocity field of translation and rotation looks more and more like a radial expansion pattern, and observers thus tend to locate the 2D pseudo-FOE in the field for heading estimation. For the dynamic-scene displays that provided a sequence of velocity fields, we expected that heading performance would drop with the reduction of the depth range, as predicted by our heading estimation model. For the static-scene displays in which observers had access to both velocity fields and visual path information, performance might not degrade with the reduction of the depth range as observers could derive heading from the path's tangent. 
Methods
Participants
Six students and staff (five naive as to the specific goals of the study; four males, two females) between the ages of 22 and 35 at the University of Hong Kong participated in the experiment. All had normal or corrected-to-normal vision.
Visual stimuli
In this experiment, the random-dot 3D cloud display was composed of 300 white dots. Four different depth ranges in which the dots were placed in the viewing pyramidal frustum (6–50 m, 6–25 m, 6–12.5 m, and 6–9 m) were tested for both the static- and the dynamic-scene displays. The FOV size was kept constant at 110°H × 94°V and the number of dots in the display was kept the same for all four depth ranges. The initial virtual line of sight was randomly offset from the true heading within −16° to −8° and 8° to 16° (negative values to the left and positive values to the right). 
Procedure
Each participant viewed both the static- and the dynamic-scene displays with all four depth ranges. Each participant received two blocks of 54 practice trials (3 trials × 18 translation–rotation combinations) on the static-scene display with the largest depth range (6–50 m), followed by eight blocks (2 display types × 4 depth ranges) of 108 experimental trials (6 trials × 18 translation–rotation combinations). Trials were blocked by display type and depth range and randomized within blocks. The testing order of display type and depth range was counterbalanced between participants. As in Experiment 1, the eight blocks of trials were divided into four sessions, with each session lasting about 1 h. Participants were asked to take as much rest as they wanted between blocks, and the four sessions were run over 2–4 days.
Results
Figures 6a and 6b plot the mean heading error averaged across six observers as a function of R:T ratio for the four depth ranges for the static and the dynamic display conditions, respectively. At the two large depth ranges, heading performance is accurate for both display conditions, with the unsigned heading error remaining under 4° at all R:T ratios tested. At the two small depth ranges, although heading errors increase with the R:T ratio for both display conditions, the slopes for the dynamic condition appear to be larger than those for the static condition. At the smallest depth range (6–9 m), the slope for the dynamic condition is close to that predicted by the 2D pseudo-FOE-based strategy (the black line).
Figure 6
 
Mean heading error as a function of R:T ratio for the four depth ranges for (a) the static- and (b) the dynamic-scene displays. The dashed horizontal line indicates perfect performance, and the solid black line indicates performance of zero compensation for the rotation by locating the pseudo-FOE in the nearest depth plane at 6 m. (c) Mean slope of heading error against depth range for the static- and the dynamic-scene displays and the model simulations. Error bars are SEs across six participants.
Figure 6c plots the mean slope of heading error averaged across six observers as a function of depth range for the two display types, as well as the slope of heading error from the model simulation data. For the model simulation of the effect of the depth range on heading judgment, a complete set of trials (i.e., 6 trials at each of the 18 translation–rotation combinations) was used at each depth range, and a different velocity field was tested on each trial. A 2 × 4 (display type × depth range) repeated-measures ANOVA on the slopes reveals that only the main effects of display type and depth range are significant (F(1,5) = 38.75, p < 0.01 and F(3,15) = 186.95, p < 0.0001, respectively). There is a trend for the slopes to increase with the reduction of the depth range for both display conditions. However, the slopes for the dynamic condition are larger and much closer to those from the model simulations than are those for the static condition. Nevertheless, at the two large depth ranges tested (6–50 m and 6–25 m), separate paired t-tests reveal that the unsigned heading errors for the dynamic display condition (mean ± SE across 6 observers: 2.01° ± 0.54° and 2.88° ± 0.71°) are not significantly different from those for the static display condition (2.34° ± 0.73° and 2.86° ± 0.47°), t(35) = −0.80, p = 0.43 and t(35) = 0.04, p = 0.97, respectively. This is consistent with our previous finding that when a large depth range provides enough motion parallax information for observers to recover heading during rotation, they can perceive heading without direct access to visual path information (Li et al., 2006).
Discussion
Unlike the FOV effect on heading performance found in Experiment 1, the results of the current experiment show that as the depth range decreases, the slopes of heading error increase for both the static- and the dynamic-scene displays. Furthermore, the slopes for the dynamic display condition are larger and closer to those from the model simulation data than are the slopes for the static condition, indicating overall better heading performance for the static than for the dynamic condition. However, the unsigned heading errors show that at the two large depth ranges tested (6–50 m and 6–25 m), performance is accurate for both display conditions, with the mean heading bias remaining under 3°.
At the two small depth ranges tested (6–12.5 m and 6–9 m), heading errors for the dynamic-scene displays are consistent with the model predictions and are larger than those for the static-scene displays, in which observers can rely on visual cues beyond the velocity field to perceive path and then infer heading as the path's tangent. We argue that unlike the reduction of the FOV size, the reduction of the depth range of environmental points makes the perception of the curved path more difficult, as it has been shown that the estimation of the future path depends on the perceived depth of the reference environmental points (Ehrlich et al., 1998). Thus, the lack of an effect of the FOV size on heading perception for the static display condition observed in Experiment 1 might be due to the fact that path perception is not affected by reducing the FOV size. In contrast, the degraded heading performance at small depth ranges observed for the static condition in the current experiment is likely because the shorter the depth range, the less accurate the perception of the future path trajectory, and thus the larger the estimation error when inferring heading as the path's tangent. The data also indicate that at the two small depth ranges, the estimation of heading as the path's tangent for the static condition is still more accurate than the estimation of heading from the optic-flow velocity field for the dynamic condition.
General discussion
Combining the results from the two experiments, we can draw several conclusions. First, despite the fact that the dynamic-scene displays contain spurious motion noise due to the scintillating dots, at a large FOV or depth range, when the display contains sufficient optic-flow information for observers to compensate for the rotation in the velocity field, heading errors for the static- and the dynamic-scene displays are similar and remain under 4°. This error is within the error range previously reported as required for safe control of human locomotion (Cutting, 1986). This result thus confirms our previous finding that humans can perceive heading directly from the optic-flow velocity field without visual path information (Li et al., 2006).
Second, at a small FOV or depth range, when the display does not contain enough optic-flow information for accurate heading perception during rotation, access to visual path information makes the recovery of heading more robust to the unresolved rotation in the flow field. As illustrated in Figures 3 and 4, a large FOV increases the magnitude of motion parallax in the peripheral regions of the display, and a large depth range improves the salience of the rotational component in the velocity field for accurate heading recovery. At a small FOV or depth range, observers tend to ignore the rotation in the velocity field and rely on the pseudo-FOE of the nearest points for heading estimation. This is what we observed for the dynamic-scene displays. For the static-scene displays, because observers can use visual cues beyond the velocity field (e.g., the extended streamline trajectories of the dot displacement, the extended dot motion, or the dot acceleration) to perceive path and then infer heading as the path's tangent, they can still accurately estimate heading even when the optic-flow velocity field does not allow them to do so. The accurate heading performance for the static-scene displays at all four FOV sizes tested indicates that path perception is not affected by the display FOV (although this was not tested directly). However, the increase of heading errors with the reduction of depth range for the static-scene displays indicates that path perception is affected by the depth range of environmental points (see also Ehrlich et al., 1998). Nevertheless, the better heading performance on the static- than on the dynamic-scene displays at small depth ranges shows that the reduction of depth range degrades heading perception more than path perception.
Our findings seem to be at odds with those of Wilkie and Wann (2006), who showed that observers could accurately direct their gaze to their path but not to their heading during simulated curvilinear motion, even though they explicitly instructed their subjects that heading was the tangent to the curved path. We surmise that the large heading errors observed in their study could be due to their subjects' lack of training on the heading task, as the authors mention in their paper that their subjects felt that it was easy to understand what the path was but "required a more elaborate explanation to understand the concept of instantaneous heading." This could be because their displays depicted observers traveling over a ground plane, which provided perspective depth cues and made path, rather than heading, the more prominent feature in the display. With the type of non-interactive displays they used, and without feedback during practice on the non-intuitive heading task, observers might not have known what to look for in the display to judge heading. In contrast, in our present study, we used displays composed of 3D random dots with no perspective depth cues to enhance the path. Furthermore, our displays were interactive; observers could point their simulated gaze inside or outside of the path curvature or look straight ahead toward their perceived heading. By providing our participants with training and feedback during practice on the static-scene displays, we made sure that they understood the heading task before we started the experiment.
Third, our findings are in agreement with the proposal that observers can perceive path before they perceive heading from optic flow (Kim & Turvey, 1999; Wann & Land, 2000; Wann & Swapp, 2000). When the display does not contain enough optic-flow information (e.g., a small FOV or a depth range) for accurate heading recovery during rotation, observers can still accurately estimate heading as long as they have access to visual path information. The accurate heading performance in this case must be due to the fact that observers use visual cues beyond the optic-flow velocity field to first perceive path and then infer heading as the path's tangent. Recent human brain-imaging studies have also suggested that the cortical area processing path information is different from that processing heading information (Field, Wilkie, & Wann, 2007; Wall & Smith, 2008). 
Finally, to address the question of the use of heading versus path during human visual control of locomotion (e.g., Wann & Land, 2000; Warren, Kay, Zosh, Duchon, & Sahuc, 2001), our results suggest that when the display contains sufficient optic-flow information, heading is directly available for the online active control of steering. In a recent study in which participants were asked to use a joystick to control the curvature of their traveling path to steer toward a target on the ground, we found that participants aligned their perceived heading, but not their path, with the goal (Li, Stone, & Chan, 2008). On the other hand, our results also suggest that visual path information helps heading perception when the display is impoverished and does not contain enough optic-flow information. In another recent study on the active control of heading, in which we asked participants to use a joystick to steer and align their simulated gaze direction with their true heading while facing random gaze perturbations, we found that access to path information not only increased the control gain at low frequencies (<0.3 Hz) but also reduced the response delay (Peng, Stone, & Li, 2008). We propose that whether people use heading or path for active control of self-motion in the natural world depends on the nature of the control task. For driving around a bend, because the heading direction is constantly changing, it might be more parsimonious to fixate a point on the future path and rely on the locomotor flow lines to adjust the steering error (Mars, 2008). However, for tasks that require participants to avoid a collision or steer toward a goal when the display contains rich optic-flow information, relying on the emergent feature, heading, is more efficient (Warren et al., 2001).
Our findings have real-world implications for the design of navigational user interfaces, robots, and unmanned vehicles. Because robots and unmanned vehicles typically rely on optic flow alone for locomotion, design engineers can use the data reported in this study to estimate the magnitude of the heading errors that such systems will make when the FOV size or the depth range of the navigational environment is reduced. When the situation does not allow a large FOV, or the environment has a small depth range, design engineers must provide path information to supplement the impoverished optic-flow velocity field if they wish to support reasonably accurate control of locomotion.
Acknowledgments
This study was supported by a Hong Kong Research Grants Council grant (HKU 7471/06H). We thank John Perrone for providing the MATLAB code for the model simulations, Lee Stone for his helpful discussions, and Jeff Saunders and David Field for their helpful comments on a previous draft of this article.
Commercial relationships: none. 
Corresponding author: Li Li. 
Email: lili@hku.hk. 
Address: Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong SAR. 
References
Banks, M. S., Ehrlich, S. M., Backus, B. T., & Crowell, J. A. (1996). Estimating heading during real and simulated eye movements. Vision Research, 36, 431–443.
Bruss, A. R., & Horn, B. P. (1983). Passive navigation. Computer Vision, Graphics, and Image Processing, 21, 3–20.
Burr, D. C. (1981). Temporal summation of moving images by the human visual system. Proceedings of the Royal Society of London B: Biological Sciences, 211, 321–339.
Crowell, J. A. (1997). Testing the Perrone and Stone (1994) model of heading estimation. Vision Research, 37, 1653–1671.
Crowell, J. A., & Andersen, R. A. (2001). Pursuit compensation during self-motion. Perception, 30, 1465–1488.
Cutting, J. E. (1986). Perception with an eye for motion. Cambridge, MA: MIT Press.
Cutting, J. E. (1996). Wayfinding on foot from multiple sources of local information in retinal flow. Journal of Experimental Psychology: Human Perception and Performance, 22, 1299–1313.
Ehrlich, S. M., Beck, D. M., Crowell, J. A., Freeman, T. C., & Banks, M. S. (1998). Depth information and perceived self-motion during simulated gaze rotations. Vision Research, 38, 3129–3145.
Fermüller, C., & Aloimonos, Y. (1995). Direct perception of three-dimensional motion from patterns of visual motion. Science, 270, 1973–1976.
Field, D. T., Wilkie, R. M., & Wann, J. P. (2007). Neural systems in the visual control of steering. Journal of Neuroscience, 27, 8002–8010.
Gibson, J. J. (1950). The perception of the visual world. Boston: Houghton Mifflin.
Graziano, M. S., Andersen, R. A., & Snowden, R. J. (1994). Tuning of MST neurons to spiral motions. Journal of Neuroscience, 14, 54–67.
Grigo, A., & Lappe, M. (1999). Dynamical use of different sources of information in heading judgments from retinal flow. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 16, 2079–2091.
Heeger, D. J., & Jepson, A. D. (1990). Visual perception of three-dimensional motion. Neural Computation, 2, 129–137.
Hildreth, E. C. (1992). Recovering heading for visually-guided navigation. Vision Research, 32, 1177–1192.
Kim, N. G., & Turvey, M. T. (1999). Eye movements and a rule for perceiving direction of heading. Ecological Psychology, 11, 233–248.
Koenderink, J. J., & van Doorn, A. J. (1987). Facts on optic flow. Biological Cybernetics, 56, 247–254.
Li, L., Stone, L., & Chan, E. (2008). Visual control of steering toward a goal uses heading but not path information [Abstract]. Journal of Vision, 8(6):1162.
Li, L., Sweet, B. T., & Stone, L. S. (2006). Humans can perceive heading without visual path information. Journal of Vision, 6(9):2, 874–881, http://journalofvision.org/6/9/2/, doi:10.1167/6.9.2.
Li, L., & Warren, W. H., Jr. (2000). Perception of heading during rotation: Sufficiency of dense motion parallax and reference objects. Vision Research, 40, 3873–3894.
Li, L., & Warren, W. H., Jr. (2004). Path perception during rotation: Influence of instructions, depth range, and dot density. Vision Research, 44, 1879–1889.
Longuet-Higgins, H. C., & Prazdny, K. (1980). The interpretation of a moving retinal image. Proceedings of the Royal Society of London B: Biological Sciences, 208, 385–397.
Mars, F. (2008). Driving around bends with manipulated eye-steering coordination. Journal of Vision, 8(11):10, 1–11, http://journalofvision.org/8/11/10/, doi:10.1167/8.11.10.
Peng, X., Stone, L. S., & Li, L. (2008). Humans can control heading independent of visual path information [Abstract]. Journal of Vision, 8(6):1160.
Perrone, J. A., & Stone, L. S. (1994). A model of self-motion estimation within primate extrastriate visual cortex. Vision Research, 34, 2917–2938.
Perrone, J. A., & Stone, L. S. (1998). Emulating the visual receptive-field properties of MST neurons with a template model of heading estimation. Journal of Neuroscience, 18, 5958–5975.
Regan, D., & Beverly, K. I. (1982). How do we avoid confounding the direction we are looking and the direction we are moving? Science, 215, 194–196.
Rieger, J. H., & Lawton, D. T. (1985). Processing differential image motion. Journal of the Optical Society of America A, Optics and Image Science, 2, 354–360.
Royden, C. S. (1994). Analysis of misperceived observer motion during simulated eye rotations. Vision Research, 34, 3215–3222.
Royden, C. S., Crowell, J. A., & Banks, M. S. (1994). Estimating heading during eye movements. Vision Research, 34, 3197–3214.
Stone, L. S., & Ersheid, R. (2006). Time course of human sensitivity to visual acceleration.
Stone, L. S., & Perrone, J. A. (1997). Human heading estimation during visually simulated curvilinear motion. Vision Research, 37, 573–590.
van den Berg, A. V. (1996). Judgments of heading. Vision Research, 36, 2337–2350.
van den Berg, A. V., & Brenner, E. (1994). Humans combine the optic flow with static depth cues for robust perception of heading. Vision Research, 34, 2153–2167.
Wall, M. B., & Smith, A. T. (2008). The representation of egomotion in the human brain. Current Biology, 18, 191–194.
Wann, J., & Land, M. (2000). Steering with or without the flow: Is the retrieval of heading necessary? Trends in Cognitive Sciences, 4, 319–324.
Wann, J. P., & Swapp, D. K. (2000). Why you should look where you are going. Nature Neuroscience, 3, 647–648.
Warren, W. H., Jr., & Hannon, D. J. (1990). Eye movements and optical flow. Journal of the Optical Society of America A, Optics and Image Science, 7, 160–169.
Warren, W. H., Jr., Kay, B. A., Zosh, W. D., Duchon, A. P., & Sahuc, S. (2001). Optic flow is used to control human walking. Nature Neuroscience, 4, 213–216.
Warren, W. H., Jr., Morris, M. W., & Kalish, M. (1988). Perception of translation heading from optic flow. Journal of Experimental Psychology: Human Perception and Performance, 14, 646–660.
Watson, A. B., & Turano, K. (1995). The optimal motion stimulus. Vision Research, 35, 325–336.
Wilkie, R. M., & Wann, J. P. (2006). Judgments of path, not heading, guide locomotion. Journal of Experimental Psychology: Human Perception and Performance, 32, 88–96.
Wolpert, L. (1990). Field-of-view information for self-motion perception. In R. Warren & A. H. Wertheim (Eds.), Perception & control of self-motion (pp. 101–126). Hillsdale, NJ: Lawrence Erlbaum.
Figure 2. A schematic illustration of the FOV and the depth range manipulations.
Figure 3. An illustration of the FOV effect. (a) A sample velocity field (110° FOV) produced by traveling on a circular path (8 m/s and 16°/s) with 150 dots sampled at the depth of 6 m. Heading is at the center as indicated by the square. The solid circle shows the singularity (pseudo-FOE) and represents the optimal 2D pseudo-FOE-based heading estimation performance (Grigo & Lappe, 1999; Royden, Crowell, & Banks, 1994; Stone & Perrone, 1997). (b) A radial flow velocity field (110° FOV) produced by traveling on a straight path (8 m/s) toward the direction of the pseudo-FOE in (a). (c)–(f) Velocity fields produced by subtracting (b) from (a) at four different FOV sizes (110°, 48°, 16°, and 8°).
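For readers who want to reconstruct fields like those in Figure 3, the panels follow directly from the standard instantaneous optic-flow equations (Longuet-Higgins & Prazdny, 1980) cited in the Introduction. The following is a minimal Python sketch, not the authors' stimulus code: the speeds, dot count, depth, and FOV come from the caption, while the flow() helper, the random seed, and the dot placement are illustrative assumptions.

```python
import numpy as np

def flow(x, y, inv_z, t, w):
    """Retinal velocity (u, v) at image points (x, y), focal length = 1.
    t = (tx, ty, tz): translation (m/s); w = (wx, wy, wz): rotation (rad/s).
    Standard instantaneous flow equations (Longuet-Higgins & Prazdny, 1980)."""
    tx, ty, tz = t
    wx, wy, wz = w
    u = (-tx + x * tz) * inv_z + x * y * wx - (1 + x**2) * wy + y * wz
    v = (-ty + y * tz) * inv_z + (1 + y**2) * wx - x * y * wy - x * wz
    return u, v

rng = np.random.default_rng(0)
z = 6.0                                      # dots on a single depth plane at 6 m
x = rng.uniform(-np.tan(np.deg2rad(55)), np.tan(np.deg2rad(55)), 150)  # 110° wide
y = rng.uniform(-np.tan(np.deg2rad(47)), np.tan(np.deg2rad(47)), 150)  # 94° tall

t = (0.0, 0.0, 8.0)                          # 8 m/s along the line of sight
w = (0.0, np.deg2rad(16.0), 0.0)             # 16°/s yaw: circular-path travel
u, v = flow(x, y, 1.0 / z, t, w)             # panel (a): curvilinear flow field

# Pseudo-FOE: the flow singularity on the horizontal meridian, i.e. the
# smaller root of  wy*x^2 - (tz/z)*x + wy = 0  (set u = 0 at y = 0).
a, b = w[1], t[2] / z
x_foe = (b - np.sqrt(b**2 - 4 * a**2)) / (2 * a)
print(f"pseudo-FOE lies {np.degrees(np.arctan(x_foe)):.1f}° from true heading")

# Panel (b): pure translation toward the pseudo-FOE; the difference fields
# of panels (c)-(f) are (u, v) minus this radial field, cropped to each FOV.
th = np.arctan(x_foe)
u_r, v_r = flow(x, y, 1.0 / z,
                (8.0 * np.sin(th), 0.0, 8.0 * np.cos(th)), (0.0, 0.0, 0.0))
du, dv = u - u_r, v - v_r                    # residual (non-radial) motion
```

With these parameters the pseudo-FOE falls roughly 12° from true heading, which is why an observer who simply tracks the flow singularity of the nearest plane makes large, rotation-dependent heading errors.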
Figure 4. An illustration of the depth range effect. (a)–(d) Sample velocity fields (110° FOV) produced by traveling on a circular path (8 m/s and 16°/s) with 150 dots sampled at four depth ranges (6–50 m, 6–25 m, 6–12.5 m, and 6–9 m), respectively. Heading is at the center as indicated by the square. The solid circle shows the singularity (pseudo-FOE) generated at the nearest depth plane (6 m). (e)–(h) Radial flow velocity fields (110° FOV) produced by traveling on a straight path (8 m/s) toward the directions of the pseudo-FOEs in (a)–(d), with dots sampled at the same four depth ranges, respectively.
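The depth-range manipulation has a one-equation rationale. In the same flow equations used above, only the translational component carries depth; the rotational component is depth-free:

$$\dot{x} \;=\; \underbrace{\frac{-T_x + x\,T_z}{Z}}_{\text{translation, }\propto\,1/Z} \;+\; \underbrace{x y\,\omega_x - (1 + x^2)\,\omega_y + y\,\omega_z}_{\text{rotation, independent of }Z}.$$

Two dots at the same image location but at depths $Z_1$ and $Z_2$ therefore differ in velocity by $(1/Z_1 - 1/Z_2)(-T_x + x\,T_z)$, a pure translational (motion-parallax) signal of the kind differential-motion models exploit (Rieger & Lawton, 1985). Shrinking the depth range from 6–50 m to 6–9 m cuts the maximum available $1/Z_1 - 1/Z_2$ from about 0.15 to 0.06 m⁻¹, nearly a threefold loss of the parallax needed to discount rotation.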
Figure 5. Mean heading error as a function of R:T ratio for the four FOV sizes for (a) the static- and (b) the dynamic-scene displays. The dashed horizontal line indicates perfect performance, and the solid black line indicates performance of zero compensation for the rotation by locating the pseudo-FOE in the nearest depth plane at 6 m. (c) Mean slope of heading error against FOV size for the static- and the dynamic-scene displays and the model simulations. Error bars are SEs across five participants.
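The zero-compensation reference line in Figures 5 and 6 also admits a short derivation. Under the simplifying assumptions of forward speed $T$ and pure yaw rate $R$ (a sketch of the standard pseudo-FOE geometry, not a quote from the paper), the singularity in the nearest depth plane $Z_{\text{near}}$ satisfies

$$\frac{x\,T}{Z_{\text{near}}} = (1 + x^2)\,R \quad\Longrightarrow\quad x \approx Z_{\text{near}}\,\frac{R}{T} \quad (\text{small } x),$$

so an observer who fully fails to compensate, simply locating the pseudo-FOE at $Z_{\text{near}} = 6$ m, shows heading error growing roughly linearly with the R:T ratio with slope $Z_{\text{near}}$: at R:T = 2°/s per m/s, for example, the predicted error is about 6 × 0.035 ≈ 0.21 rad ≈ 12°.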
Figure 6. Mean heading error as a function of R:T ratio for the four depth ranges for (a) the static- and (b) the dynamic-scene displays. The dashed horizontal line indicates perfect performance, and the solid black line indicates performance of zero compensation for the rotation by locating the pseudo-FOE in the nearest depth plane at 6 m. (c) Mean slope of heading error against depth range for the static- and the dynamic-scene displays and the model simulations. Error bars are SEs across six participants.