Article  |   June 2011
Can observers judge future circular path relative to a target from retinal flow?
Journal of Vision June 2011, Vol.11, 16. doi:10.1167/11.7.16
Jeffrey A. Saunders, Ka-Yiu Ma. Can observers judge future circular path relative to a target from retinal flow? Journal of Vision 2011;11(7):16. doi: 10.1167/11.7.16.
Abstract

We investigated the ability of observers to judge whether they will pass left or right of a visible target from simulated motion along a circular path. Strategies based on optic flow would generally require compensation for pursuit eye movements. J. P. Wann and D. K. Swapp (2000) proposed an alternative strategy that requires only retinal flow. The experiments compared three conditions that provide the same retinal flow but different observer-relative optic flow. In the heading-relative view condition, the simulated view direction rotated with change in heading, as naturally occurs when driving a car. In the target-relative view condition, the simulated view direction rotated to keep the direction of the target constant. In the world-relative view condition, the simulated view direction was fixed relative to the environment. If an observer fixates the target, these conditions produce the same retinal flow. The initial heading direction of simulated motion was varied across trials, and responses were used to compute PSEs representing perceptual bias. Judgments were most accurate in the heading-relative condition. In the target-relative and world-relative view conditions, PSEs indicated large biases consistent with underestimation of path curvature. The large biases suggest that retinal flow is not sufficient to judge future circular path relative to a target.

Introduction
In this paper, we investigate visual perception of self-motion relative to a target for the case of traveling along a circular path on the ground. Driving a car is a common task for which self-motion is along piecewise circular paths. The ability to see whether one's current future path would intersect a target would be useful for steering control. 
Our specific goal is to test whether observers can perceive their future path relative to a target based on retinal flow, as proposed by Wann and Swapp (2000). Optic flow is a powerful source of information about self-motion, and various ways of using optic flow for control of locomotion have been proposed (e.g., Gibson, 1958; Lee & Lishman, 1977; Wann & Land, 2000; Warren, Blackwell et al., 1991; Warren, Mestre, Blackwell, & Morris, 1991). Most strategies involve egocentric optic flow—visual motion in body-centered coordinates, as opposed to retinal coordinates. Wann and Swapp proposed an alternative strategy based on curvature of retinal motion over time, which can be defined entirely in retinal coordinates. This strategy is illustrated in Figure 1. If an observer fixates a target object that is on their future circular path, the visual trajectories of points on the retina follow straight lines (Figure 1a). If the current path would pass to the left of the fixated target, the visual trajectories on the retina bend counterclockwise over time (Figure 1b), while if the current path would pass to the right of the target, visual trajectories bend clockwise (Figure 1c). Thus, rotational acceleration within retinal flow could be used to determine if one's current circular path would pass to the left or right of the target. A similar strategy based on retinal flow was independently proposed by Kim and Turvey (1999). 
Figure 1
 
Illustration of how retinal flow over time could be used to determine whether an observer's future path would pass to the left or right of a fixated target (Wann & Swapp, 2000). (a) If the observer is on path to intersect the fixated target, retinal trajectories of objects over time form straight lines. (b) If the current path will pass to the right of the target, retinal trajectories are curved, accelerating in a clockwise direction. (c) If the current path will pass to the left of the target, retinal trajectories are curved in the opposite direction, accelerating in a counterclockwise direction.
A potential advantage of using information available from retinal flow is that it avoids the so-called rotation problem (Banks, Ehrlich, Backus, & Crowell, 1996; Li & Warren, 2000; Royden, Banks, & Crowell, 1992). During self-motion, observers typically make pursuit eye movements to fixate a stationary point in the scene. Consequently, the pattern of motion on the retina is different from the optic flow in egocentric coordinates. A steering strategy that uses egocentric optic flow would require that the visual system account for the effect of pursuit eye movements. For example, a tau equalization strategy (Fajen, 2001) uses the rate of convergence of target direction and the heading direction. This convergence rate cannot be unambiguously determined from retinal flow alone; additional extraretinal information would be required. Similarly, strategies using the egocentric motion of a target (Wann & Land, 2000) or the locomotor flow line (Lee & Lishman, 1977) require motion in egocentric coordinates. In contrast, the potential cue identified by Wann and Swapp (2000) is defined in terms of retinal flow over time: whether retinal flow accelerates clockwise or counterclockwise. Use of this cue would not require compensation for eye movements. 
On the other hand, there is evidence that the visual system is able to compensate for eye movements using extraretinal information (e.g., Banks et al., 1996; Crowell, Banks, Shenoy, & Andersen, 1998; Royden et al., 1992), so this requirement may not be a liability. If the visual system essentially subtracts the rotational component due to eye movements from retinal flow, path perception strategies based on egocentric optic flow would be viable. 
A potential disadvantage of the retinal flow strategies proposed by Kim and Turvey (1999) and Wann and Swapp (2000) is that they are based on rotational acceleration and, therefore, require analysis of optic flow over time. The results of Warren, Blackwell et al. (1991) and Warren, Mestre et al. (1991) suggest that instantaneous optic flow is sufficient for judgments of future circular path. There is also evidence that the visual system may be insensitive to acceleration of visual trajectories over time. Simulating lateral rotation of the observer while traveling on a straight path often produces the illusion of traveling on a curved path (Banks et al., 1996; Ehrlich, Beck, Crowell, Freeman, & Banks, 1998; Royden, Banks, & Crowell, 1994; Royden, Cahill, & Conti, 2006). Because optic flow over time could, in principle, distinguish this situation from an actual curved path, this illusion has been attributed to insensitivity to such acceleration (Ehrlich et al., 1998). This could also explain a complementary illusion: simulated motion along a curved path without view rotation can appear as a straight path (Li & Cheng, 2011; Saunders, 2010). Thus, there is reason to question whether the visual system could utilize a cue based on acceleration of visual trajectories over time. 
Previous evidence does not resolve whether observers can judge relative future path based on retinal flow. To test this, the key manipulation would be to present conditions that dissociate retinal flow from viewer-relative optic flow. Many previous studies have used this type of manipulation. However, as we will discuss in the next section, previous studies either did not isolate retinal flow or used conditions that would not be ideal for a strategy based on retinal flow. 
Previous studies
For the situation of simulated travel on a straight path, many previous studies have tested conditions that produce equivalent retinal flow but differ in the presence of extraretinal information (Banks et al., 1996; Ehrlich et al., 1998; Royden et al., 1992, 1994, 2006; Wilkie & Wann, 2003). These studies have found that simulated rotation without accompanying extraretinal cues can produce large biases in perceived path. Saunders (2010) dissociated retinal flow and optic flow in an analogous way for judgments of future circular paths and found a large difference in performance between conditions that would be expected to generate the same retinal flow. These results suggest that retinal flow is not sufficient for perception of future circular path. 
However, a limitation of these previous studies was that path judgments were not relative to the point of fixation. Wann and Swapp (2000) emphasize the potential benefit of fixating the target of steering, and their proposed retinal flow strategy assumes that the target is fixated. In this case, the sign of rotational acceleration (clockwise vs. counterclockwise) indicates whether one's future path would pass inside or outside the target. To judge path relative to some point other than the fixation, the visual system would have to distinguish different rates of rotational acceleration rather than just sign of acceleration. This situation would not be ideal for use of a retinal flow strategy. An analogy can be made to perception of heading during simulated translation and rotation. Although simulated rotation can produce large biases in path judgments, observers can easily judge whether their heading is to the left or right of the fixation point (Cutting, Vishton, Fluckiger, Baumberger, & Gerndt, 1997). Fixation might similarly facilitate judgments of future circular path relative to the fixation point, as suggested by Wann and Swapp. 
Wilkie and Wann (2003) have investigated steering performance in virtual reality conditions that dissociate extraretinal information from retinal flow. One manipulation was to add a simulated lateral rotation of the visual display during the steering task, thereby adding an extraretinal rotation signal while leaving retinal flow unchanged. This rotation affected performance, suggesting that retinal flow alone is not sufficient for accurate steering. Another manipulation tested by Wilkie and Wann (2002, 2003) was to simulate planar rotation of the ground around the fixated target, thereby changing the retinal flow without changing the egocentric direction and motion of the target. This manipulation also affected performance, which Wilkie and Wann interpret as evidence for use of retinal flow. They propose that steering is based on a weighted combination of strategies based on retinal flow, extraretinal information, and egocentric direction of the target. 
A problem with the interpretation of Wilkie and Wann (2002, 2003) is that their manipulation of retinal flow had other confounded effects. First, when the ground is rotated around the target, as in Wilkie and Wann, the instantaneous heading specified by the optic flow from the ground is shifted away or toward the target. Strategies that depend on the angle between heading and target directions would, therefore, be affected by this manipulation. Second, rotation of the ground around the target effectively changes the amount of simulated view rotation in the display. For example, if the ground is rotated clockwise around the target, the optic flow is equivalent to traveling along a stationary ground with a more rightward instantaneous heading and with less rightward rotation of the simulated viewing direction. A rotational component within optic flow that is not due to eye movements provides a potential cue to path curvature (Saunders, 2010; Saunders & Neihorster, 2010), so the ground rotation manipulation could have caused curvature to be overestimated or underestimated. Thus, the effect observed by Wilkie and Wann from manipulating optic flow was not necessarily due to use of retinal flow as an independent cue. The effect could alternatively be explained in terms of instantaneous heading and/or rotation. 
Another concern is that steering performance may not provide a direct measure of perceptual capability. Steering control is the natural application of being able to perceive one's future circular path relative to a target, but it also imposes additional task demands. Perceptual effects may be hard to distinguish from effects of a visual-motor control strategy. For example, suppose that observers consistently oversteer toward a peripheral target, as has been reported by Fajen (2001). One explanation is that observers misperceive their future path as passing to the outside of the target. Alternatively, the apparent bias could reflect a steering strategy of first bringing one's heading near the target or a strategy of centering the target on the screen. Observers might be able to perceive that their current circular trajectory will intersect the target but, nevertheless, prefer some alternate path. The effect of global display rotation observed by Wilkie and Wann (2003) could potentially be explained in this manner. 
Thus, it remains an open question whether observers can use the retinal flow cue identified by Wann and Swapp (2000) to perceive their future circular path relative to a target. Conditions tested in previous path judgment studies would not be ideal for use of retinal flow, and studies of steering performance have not isolated the contribution of retinal flow. 
Present study
Our goal was to test whether observers are capable of using the retinal flow strategy proposed by Wann and Swapp (2000). We tested perceptual judgments of perceived path, rather than steering performance, to isolate the perceptual component. Observers judged whether their future path would pass to the left or right of the target. An important aspect of Wann and Swapp's strategy is that it depends on fixating a point on the future path. In our conditions, observers judged their path relative to a target that was visible throughout the simulated motion and were explicitly instructed to fixate the target. Thus, strategies that depend on fixating the target would be viable. 
We compared conditions that present different optic flow but the same retinal flow when the target was fixated. In the heading-relative view condition (Figure 2a), the simulated view direction rotated as the observer's path curved, so that the heading direction in screen coordinates was constant. This is like the view through the windshield of a car while driving around a curve, except that heading was not generally at the center of the display. Most studies of circular path perception have used this type of viewing condition (Ehrlich et al., 1998; Fajen & Kim, 2002; Kim & Turvey, 1998; Turano & Wang, 1994; Warren, Blackwell et al., 1991; Warren, Mestre et al., 1991). In the target-relative view condition (Figure 2b), the simulated view direction rotated to keep the visual direction of the target constant. In screen coordinates, the target moves downward and expands but does not move leftward or rightward. Finally, in the world-relative view condition (Figure 2c), the simulated view direction was kept constant relative to the environment. These three conditions differ only in the way that the simulated view direction changed over time. If observers accurately tracked the targets with their eyes, all three conditions would produce the same retinal flow, except for differences in the visibility of regions at the edges of the field of view. If our visual system used a retinal flow strategy such as that proposed by Wann and Swapp (2000), one would expect little difference in performance across these conditions. On the other hand, strategies based on egocentric optic flow would be ineffective and biased in the less natural cases (Figures 2b and 2c). 
Figure 2
 
The three view conditions. (a) In the heading-relative view condition, the simulated view direction rotated with heading, as in normal driving. (b) In the target-relative view condition, view direction was rotated to counter the horizontal drift of the target, so that the target maintained a fixed horizontal position. (c) In the world-relative view condition, the simulated view remained fixed relative to the environment. (d) All three conditions produce the same retinal flow when the target is fixated.
We also manipulated duration of optic flow. A potential problem with using a cue based on acceleration is that optic flow might have to be analyzed over an extended time period (Ehrlich et al., 1998). This would be disadvantageous for purposes of visual-motor control. On the other hand, determining relative path error may require only the qualitative direction of acceleration, which is less demanding (Wann & Land, 2000). In Experiment 1, we tested simulated motion stimuli with 500-ms duration. In Experiment 2, we increased stimulus duration to 1 s to test whether longer presentation time would allow curvature in retinal flow to be utilized. 
Experiment 1
Methods
Participants
Eleven students at the University of Hong Kong participated in Experiment 1. All were naive to the purposes of the experiment and had normal or corrected-to-normal vision. 
Apparatus
Stimuli were presented on a large-screen Samsung WD-65736 DLP TV and viewed from a position 1 m away and centered relative to the screen. The image region of the display was 140 cm wide and 78 cm tall, corresponding to an angular size of 70° × 43°. Observers viewed the displays monocularly with their right eye, and their heads were not constrained. The display had a resolution of 1776 × 1000 pixels and a refresh rate of 60 Hz. Images were generated using OpenGL on a Dell Inspiron 530 computer with an ATI Radeon HD 3600 series graphics card. 
Stimuli
Displays simulated 500 ms of observer movement on a circular path along the ground with a constant turning rate of 8°/s and a tangent speed of 8 m/s. The ground plane was mapped with a texture chosen to have both low and high spatial frequency components, to provide rich flow in both near and far regions of the ground plane. An example of the ground texture is shown in Figure 3. Observer eye height was simulated to be 165 cm. The initial heading direction was varied from trial to trial (see Procedure section). 
Figure 3
 
Static view of a sample stimulus.
There were three view conditions that differed in how the simulated camera direction changed over the course of a trial (Figure 2). In the heading-relative view condition, the simulated view direction rotated with heading. The rotation was around a vertical axis centered at the viewer's eye. In the target-relative view condition, the view direction was rotated to keep the target direction constant. In the world-relative view condition, the simulated view direction was fixed relative to the environment. 
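The three camera rules can be made concrete with a short sketch. This is a hypothetical illustration, not the authors' implementation: it models the rightward circular path (8°/s, 8 m/s) in the ground plane and computes, for each view condition, the camera yaw and the resulting horizontal screen direction of a fixed target.

```python
import math

OMEGA = math.radians(8.0)   # turning rate (rad/s)
R = 8.0 / OMEGA             # path radius, ~57.3 m

def observer(t):
    """Position (x, z) and heading (rad, rightward from +z) at time t
    on a rightward circular path starting at the origin."""
    a = OMEGA * t
    return (R * (1.0 - math.cos(a)), R * math.sin(a)), a

def target_world_azimuth(t, target):
    """World-frame horizontal direction of the target from the observer."""
    (x, z), _ = observer(t)
    return math.atan2(target[0] - x, target[1] - z)

def camera_yaw(t, condition, target):
    if condition == "heading-relative":
        return observer(t)[1]      # camera rotates with heading
    if condition == "target-relative":
        # camera absorbs the target's horizontal drift
        return target_world_azimuth(t, target) - target_world_azimuth(0.0, target)
    return 0.0                     # world-relative: camera fixed in the scene

def screen_azimuth(t, condition, target):
    """Target direction in screen coordinates = world azimuth minus yaw."""
    return target_world_azimuth(t, target) - camera_yaw(t, condition, target)
```

For a target on the future path at 24 m depth, the target's screen azimuth is constant by construction in the target-relative condition, while over a 500-ms trial it drifts by roughly 2° in the other two conditions.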
In addition to the ground plane, a cylindrical target object was visible throughout the trials. The target was 0.25 m tall and positioned on the ground at an initial distance of 24 m, measured in the direction of the z-axis. Six initial horizontal target positions were used: ±7.1°, ±12.4°, and ±17.7° relative to the center of the screen. Target positions to the right were used for trials with rightward curvature, and vice versa. These target positions were chosen so correct intersection paths would have initial headings of −5°, 0°, or 5° relative to the center of the screen, in the direction of curvature. 
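The relation between the target positions and the stated intersection headings follows from the circle geometry. The sketch below is our own reconstruction under the stimulus parameters (8°/s, 8 m/s, target at z = 24 m): given an initial heading offset from the screen center, it bisects for the arc angle at which the path reaches the target depth and returns the resulting screen direction of an on-path target.

```python
import math

OMEGA = math.radians(8.0)
R = 8.0 / OMEGA   # path radius, ~57.3 m

def path_point(theta, h):
    """(x, z) after arc angle theta (rad) on a rightward circle whose
    initial heading is h radians right of the +z (screen center) axis."""
    x = R * (math.sin(theta) * math.sin(h) + (1.0 - math.cos(theta)) * math.cos(h))
    z = R * (math.sin(theta) * math.cos(h) - (1.0 - math.cos(theta)) * math.sin(h))
    return x, z

def target_direction(h_deg, z_target=24.0):
    """Screen direction (deg) of a target on the path at depth z_target,
    for a path with initial heading h_deg toward the curvature direction."""
    h = math.radians(h_deg)
    lo, hi = 0.0, 1.0          # arc-angle bracket (rad); z increases on it
    for _ in range(60):        # bisect for z(theta) = z_target
        mid = 0.5 * (lo + hi)
        if path_point(mid, h)[1] < z_target:
            lo = mid
        else:
            hi = mid
    x, z = path_point(0.5 * (lo + hi), h)
    return math.degrees(math.atan2(x, z))   # azimuth measured from +z
```

Running `target_direction` for headings of −5°, 0°, and 5° gives approximately 7.2°, 12.4°, and 17.7°, matching the reported target eccentricities to within about 0.1°.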
Procedure
In each trial, the first frame of the stimulus was shown for 2 s to allow participants to fixate the target, followed by 500 ms of simulated motion. The target remained visible throughout the trial, and participants were instructed to fixate the target object during the motion. Participants judged whether they would pass to the left or right of the target. Initial heading direction was varied across trials by an adaptive staircase (Saunders & Backus, 2006). 
The three view conditions were tested in separate blocks. Each experimental block consisted of 144 trials, corresponding to six staircases (initial target position × sign of curvature) with 24 trials each. The staircases were randomly intermixed within blocks. Before each experimental block, participants performed 20 practice trials. Three experimental blocks were performed in each session, which took approximately 1 h to complete. Participants performed two sessions, yielding a total of 96 trials for each combination of view condition (heading-relative, target-relative, or world-relative) and initial target position. The order of view conditions within a session was counterbalanced across participants. 
Results
Mean accuracy and reliability
For each observer and experimental condition, responses were fit to a psychometric function in order to estimate a point of subjective equality (PSE) representing accuracy and a just-noticeable-difference (JND) threshold representing reliability. Trials in a condition varied by initial simulated heading. This parameter was recoded as the difference between the simulated initial heading and the initial heading that would result in an intersection path. We will refer to this as the heading offset of a path. The PSE represents the heading offset for which observers would be equally likely to judge their path as passing to the left or right of the target. If responses were unbiased, one would expect the PSE to have a heading offset of zero. Mirror symmetric conditions were normalized and combined to fit psychometric functions. We will, therefore, refer to paths as passing inside vs. outside of the target rather than leftward vs. rightward. 
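The PSE/JND analysis can be sketched with a toy fit. The code below is illustrative only: the paper does not specify the fitting procedure, so we use a simple least-squares grid search of a cumulative Gaussian over synthetic response proportions, and we take the JND as the fitted sigma (an assumption).

```python
import math

def cum_gauss(x, pse, sigma):
    """Cumulative Gaussian psychometric function."""
    return 0.5 * (1.0 + math.erf((x - pse) / (sigma * math.sqrt(2.0))))

def fit_pse_jnd(offsets, proportions):
    """Grid search for the (PSE, sigma) minimizing squared error between
    the model and the observed proportion of 'outside' responses."""
    best = (float("inf"), None, None)
    for i in range(0, 161):          # PSE candidates 0.0 .. 16.0 deg
        pse = i * 0.1
        for j in range(5, 81):       # sigma candidates 0.5 .. 8.0 deg
            sigma = j * 0.1
            err = sum((cum_gauss(x, pse, sigma) - p) ** 2
                      for x, p in zip(offsets, proportions))
            if err < best[0]:
                best = (err, pse, sigma)
    return best[1], best[2]

# Synthetic observer with a true PSE of 8 deg and sigma of 3 deg,
# loosely mimicking the biased conditions; offsets are hypothetical.
offsets = list(range(-2, 19, 2))
props = [cum_gauss(x, 8.0, 3.0) for x in offsets]
pse, jnd = fit_pse_jnd(offsets, props)
```

The fit recovers the generating parameters to within the grid resolution, which is all this sketch is meant to show.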
For two observers, we were not able to fit a PSE and JND to responses in some of the target-relative conditions. There was no problem fitting the data from the other view conditions for these observers, and for the other nine observers, psychometric functions were well fit in all conditions. We will discuss individual results in the next section. In this section, we present mean results from the nine observers for which psychometric functions could be fit in all conditions. 
Figure 4 plots mean PSE, as a function of initial target position, for the three view conditions. Performance was dramatically different across the view conditions, while initial target position had no apparent effect. Judgments were most accurate in the heading-relative view condition. The PSEs in this condition were close to zero, averaging 0.7°. The PSEs in the target-relative view and world-relative view conditions were much larger, averaging 8.3° and 9.2°, respectively. These biases correspond to judging a correct intersection path as passing outside the target and perceiving a path passing inside the target as being an intersection path. 
Figure 4
 
Mean PSEs from Experiment 1, expressed as heading offsets. Heading offset is defined as the difference between the initial heading of a simulated path and the initial heading corresponding to an intersection path. Accurate performance would correspond to a PSE of 0°. Positive heading offsets correspond to biases in the direction of curvature. Error bars depict ±1 standard error.
An ANOVA on PSEs revealed a main effect of view condition (F(1.27, 10.15) = 16.32, p = 0.002) but no effect of initial target position (F(2, 16) = 2.15, p = 0.15), nor an interaction (F(1.54, 12.28) = 1.45, p = 0.27). We performed pairwise comparisons between view conditions after averaging PSEs across initial target position. We found that PSEs in the heading-relative view condition were significantly smaller than in the world-relative view condition (t(8) = 10.55, p < 0.001) and the target-relative view condition (t(8) = 3.89, p = 0.005). No significant difference was found between PSEs in the world-relative view and target-relative view conditions (t(8) = 0.48, p = 0.64). 
For analysis of JNDs, we log-transformed the individual JNDs before averaging and performing statistical tests, in order to make the distribution closer to normal. Figure 5 plots mean JND as a function of initial target position for the three view conditions. Judgments were most precise in the world-relative view condition, with a mean JND of 2.7°. The JNDs in the heading-relative and target-relative view conditions were higher, averaging 6.1° and 5.5°, respectively. 
Figure 5
 
Mean JND thresholds from Experiment 1. Error bars depict ±1 standard error.
An ANOVA on log JNDs revealed main effects of both view condition (F(2, 16) = 10.91, p = 0.001) and initial target position (F(2, 16) = 6.20, p = 0.01) but no interaction (F(4, 32) = 0.565, p = 0.69). JNDs were averaged across initial target position conditions for further comparisons between view conditions. We found that JNDs in the world-relative view condition were significantly smaller than in the heading-relative view condition (t(8) = 5.04, p = 0.001) or the target-relative view condition (t(8) = 3.70, p = 0.006). No significant difference was found between JNDs in the heading-relative and target-relative view conditions (t(8) = 0.53, p = 0.61). 
To further analyze the effect of initial target position, we averaged across view conditions and performed pairwise comparisons. Judgments were most variable for the least eccentric target position. We found that JNDs in the 7.1° initial target condition were significantly higher than in the 12.4° target condition (t(8) = 3.13, p = 0.014) and the 17.7° target condition (t(8) = 2.50, p = 0.037). No significant difference was found between JNDs in the 12.4° and 17.7° target conditions (t(8) = 0.16, p = 0.88). 
Perceived intersection paths
Figure 6 illustrates the paths perceived to be intersecting the target based on observers' mean responses. Because target position had little effect, we averaged across target positions to obtain a mean PSE and JND for each view condition. The PSE represents the heading offset for which observers would be equally likely to judge their path as passing inside or outside of the target. A path with this heading offset can, therefore, be inferred to be the path that observers perceive as intersecting the target. The left panels of Figure 6 plot perceived intersection paths from a top-down view (solid lines) for each of the three view conditions. The shaded regions around the path indicate ±1 JND around the PSE. The correct intersection path is also shown (dashed line). For the heading-relative view condition (top), the perceived intersection path passes close to the target and is less than a JND away from the correct intersection path. For the target-relative (middle) and world-relative (bottom) view conditions, the perceived intersection paths pass far inside the target. Even paths with a heading offset one JND less than the PSE, which would be judged as passing outside the target 75% of the time on average, pass to the inside of the target. 
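The size of these misses can be checked with a back-of-envelope calculation. The sketch below is our own geometry exercise, using the stimulus parameters (8°/s, 8 m/s) and, as an example, the central ±12.4° target with the mean target-relative PSE of 8.3°: it bisects for the lateral position at which each circular path crosses the target depth of 24 m.

```python
import math

OMEGA = math.radians(8.0)
R = 8.0 / OMEGA   # path radius, ~57.3 m

def x_at_z(h_deg, z_target=24.0):
    """Lateral position (m) where a circular path with initial heading
    h_deg (toward the curvature direction) crosses depth z_target."""
    h = math.radians(h_deg)
    lo, hi = 0.0, 1.0          # arc-angle bracket (rad); z increases on it
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        z = R * (math.sin(mid) * math.cos(h) - (1.0 - math.cos(mid)) * math.sin(h))
        if z < z_target:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    return R * (math.sin(t) * math.sin(h) + (1.0 - math.cos(t)) * math.cos(h))

target_x = x_at_z(0.0)          # the target sits on the h = 0 intersection path
miss = x_at_z(8.3) - target_x   # positive = passes inside (curvature side)
```

For the PSE path, `miss` comes out at roughly 4 m to the inside of the target at the target's depth, consistent with the large biases shown in Figure 6.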
Figure 6
 
Estimated perceived intersection paths for the three view conditions based on the PSEs observed in Experiment 1. Middle panels show a top-down view. Red lines show the PSE paths, gray regions depict PSE ± JND, and the dashed lines show the correct intersection path. Open circles show the target position, and filled circles show the final position of the observer. Right panels illustrate the optic flow for the PSE paths. The red line shows the future path. The final heading is shown by a black circle, and gray circles show the progression of heading over the course of the trial for the target-relative and world-relative conditions.
The right panels of Figure 6 show the optic flow produced by simulated motion along the perceived intersection path. The gray line in each graph depicts points on the observer's future path, which passes near the target in the heading-relative view condition (top) but passes inside the target for the target-relative (middle) and world-relative (bottom) view conditions. The direction of instantaneous heading at various moments over the course of a trial is also plotted on these figures. In the target-relative and world-relative view conditions, heading direction in egocentric coordinates (or screen coordinates) changes over time. For the PSE paths in these conditions, instantaneous heading direction passes near or crosses the target. Observers did not appear to account for path curvature, suggesting that they perceived their path as nearly straight in these cases. 
Individual psychometric functions
For the target-relative conditions, we were not able to fit responses of two observers with a standard cumulative Gaussian with PSE and JND parameters. To analyze these cases, we computed estimated psychometric functions for individual observers and view conditions. Trials were combined across target directions and left–right mirror symmetry. The 288 trials from a view condition were grouped into 8 bins of 36 trials, which were averaged to obtain a mean heading offset and mean response for each bin. Because a staircase procedure was used to select heading offsets, these bins were unevenly spaced and varied across observers. 
Figure 7 shows the resulting psychometric functions for the two exceptional observers. For comparison, we also plotted cumulative Gaussian functions with mean and width parameters corresponding to the average PSEs and JNDs from the other participants (gray lines). The psychometric functions for the heading-relative and world-relative view conditions for these observers are similar to the mean performance of the other observers. In these conditions, we were able to fit PSE and JND parameters to both these observers' data. In the target-relative condition, judgments of these two observers did not vary systematically with heading offset. For one observer (top), the psychometric function was essentially flat, indicating overall chance performance. The psychometric function of the other exceptional observer (bottom) indicates that they were able to distinguish between different simulated paths, but responses were not a monotonic function of heading offset. For negative heading offsets, the psychometric function of this observer was similar to mean performance, while for positive heading offsets, the slope of the function reverses. The data, therefore, cannot be fit with a standard cumulative Gaussian function. 
Figure 7
 
Psychometric functions for two observers (top and bottom) for whom PSEs and JNDs could not be fit in the target-relative condition. Dashed lines show psychometric functions consistent with mean PSEs and JNDs from the other nine observers.
The performance of these two observers is generally consistent with the overall results described previously. In the heading-relative and world-relative view conditions, their performance was similar to the other observers. The only difference was in the target-relative condition. Data from the other observers were consistent with high JNDs in the target-relative condition, while for these two observers, responses were not sufficiently reliable to estimate JNDs. For all observers, the target-relative view condition appeared to be difficult. Responses were either highly biased, highly unreliable, or both. 
Discussion
The three view conditions produced equivalent retinal flow when the target is fixated, yet we observed substantial differences in performance. Judgments in the heading-relative view condition were accurate, while in the target-relative and world-relative view conditions, judgments showed large biases. There were also differences in reliability of judgments. If perception of relative path were based on retinal flow, as proposed by Wann and Swapp (2000), one would expect little difference between conditions. Our findings are inconsistent with such a model. 
Saunders (2010) tested display conditions similar to our world-relative condition and found that path estimates were consistent with a straight rather than curved path, which agrees with the present findings. The direction of bias observed here (in the direction of curvature) is consistent with perceptual underestimation of curvature. Furthermore, the PSE of path judgments corresponds to a situation where the heading starts on one side of the target and ends on the other side (see Figure 6). If observers perceived themselves to be traveling on a straight path in the direction of instantaneous heading, this would account for our results. 
The target-relative view condition appeared to be the most difficult. Both accuracy (PSEs) and reliability (JNDs) were poor, and some observers' responses could not be fit to a standard psychometric function. If perception of relative path were based on retinal flow, there is no reason to expect this condition to be more difficult than the other view conditions. On the other hand, this condition would pose difficulties for perceptual strategies based on egocentric optic flow and target motion. For example, one potential cue for path error is the rate of convergence between target and heading direction (Wann & Land, 2000). In normal driving conditions, one could infer this convergence from the direction and motion of the target relative to an egocentric reference frame, such as the windshield (Wilkie & Wann, 2003). In our target-relative view condition, however, the target had no horizontal motion. This condition would also be problematic for a model that extrapolates future path based on instantaneous translation and rotation (see Discussion section). 
The heading-relative view condition is comparable to conditions tested by Warren, Blackwell et al. (1991) and Warren, Mestre et al. (1991). They observed some bias in judgments when circular path radius was small (<25 eye heights) but accurate performance when the radius was large (50 eye heights). For the path radius tested here, 35.8 eye heights, the results of Warren et al. would suggest little or no bias, consistent with our results. With respect to reliability, Warren et al. observed average JNDs of approximately 2°, which is lower than we observed in the heading-relative condition. This difference could be explained by the longer display duration in Warren et al., 3.7 s vs. 0.5 s in the present study. 
Li and Cheng (2011) tested path estimation for conditions similar to the three view conditions tested here. As in our experiment, Li and Cheng presented displays simulating the same circular path but varied the amount of simulated view rotation. However, an important difference is that observers fixated the center of the display rather than a reference object in the scene, so retinal flow was not matched across their view conditions. Despite this difference, the results of Li and Cheng were generally consistent with ours: judgments were accurate in the heading-relative view condition, were consistent with underestimated curvature in the target-relative condition, and were consistent with a straight path in the world-relative view condition. In our experiment, observers were allowed to fixate an object in the scene, and they judged their path relative to the fixated object, yet we observed biases similar to those of Li and Cheng. This further supports the hypothesis that path perception is based on egocentric optic flow rather than retinal flow. 
We found no effect of target eccentricity on accuracy. An effect of target position would be expected if observers used the egocentric direction and motion of the target relative to the screen (see Discussion section). However, across a 10° range of initial target positions, we found that perceived intersection paths had constant heading offsets. This indicates that judgments depended on the direction and motion of the target relative to the heading direction rather than relative to the reference frame of the screen. 
Target eccentricity did affect the precision of judgments, with more eccentric targets producing lower JNDs. This might be related to a previous finding of Wilkie and Wann (2003), who presented motion in a limited rectangular window on their display screen and varied whether the window was stationary or moved with the target. They observed better steering performance when the viewing window was stationary and suggested that observers used the target motion relative to the window edge to improve steering. This might similarly explain the effect of target eccentricity in our world-relative and heading-relative view conditions, though not in our target-relative view condition. 
One reason that retinal motion was not sufficient for accurate judgments in Experiment 1 could be the relatively short duration of the stimuli, 500 ms. It is possible that longer motion sequences are required to successfully use a cue based on acceleration. This possibility is tested in Experiment 2. 
Experiment 2
The results of Experiment 1 provided no evidence that observers could use retinal flow to directly judge relative future path. In Experiment 2, we increased stimulus duration to 1 s rather than 500 ms. The conditions and task were otherwise the same as before. Increasing the stimulus duration could potentially facilitate detection of rotational acceleration within retinal flow, as required for the retinal flow strategy proposed by Wann and Swapp (2000). One might, therefore, expect smaller perceptual biases and less difference between the three view conditions. On the other hand, if perception of relative future path were based on first-order optic flow, increasing stimulus duration would have little effect. 
Methods
Participants
Twelve students at the University of Hong Kong participated in Experiment 2. All were naive to the purposes of the experiment and had normal or corrected-to-normal vision. One participant was found to have misunderstood the task in the same way as the excluded participant in Experiment 1, reversing the signs of their responses. The data from this participant were discarded. 
Apparatus
The display apparatus was the same as in the previous experiment. 
Stimuli
The stimuli were identical to those of the previous experiment, except that the duration of the motion stimuli was 1 s rather than 500 ms. All other stimulus variables were the same as in the previous experiment (e.g., observer speed, initial target locations, etc.). 
Procedure
The task and procedure were the same as in the previous experiment. Subjects completed six blocks in two sessions. As before, view conditions were blocked, and order of conditions was counterbalanced across sessions and subjects. Trials were self-paced, and each block took about 20 min to complete. 
Results
Mean accuracy and reliability
PSEs and JNDs were computed in the same way as in Experiment 1. For two observers, we were not able to fit PSEs and JNDs to responses in some of the target-relative view conditions; this was also the condition that produced unfittable data in the previous experiment. The data from the heading-relative and world-relative view conditions for these two observers were unremarkable, as before. We present mean results from the nine observers for whom psychometric functions could be fit in all conditions. 
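As a sketch of how PSE and JND parameters can be recovered from binned responses, the following uses a least-squares grid search over a cumulative Gaussian; the grid ranges and the least-squares criterion are our assumptions, not the authors' fitting procedure:

```python
import numpy as np
from math import erf, sqrt

def cum_gauss(x, pse, jnd):
    # cumulative Gaussian with mean = PSE and width (SD) treated as the JND
    return 0.5 * (1.0 + erf((x - pse) / (jnd * sqrt(2.0))))

def fit_pse_jnd(x, p):
    """Grid-search the PSE and JND that minimize squared error between
    the cumulative Gaussian and the binned response proportions."""
    x, p = np.asarray(x, dtype=float), np.asarray(p, dtype=float)
    best, best_err = (0.0, 1.0), np.inf
    for pse in np.linspace(x.min(), x.max(), 201):      # candidate PSEs (deg)
        for jnd in np.linspace(0.5, 15.0, 146):         # candidate JNDs (deg)
            pred = np.array([cum_gauss(xi, pse, jnd) for xi in x])
            err = float(np.sum((pred - p) ** 2))
            if err < best_err:
                best, best_err = (pse, jnd), err
    return best
```

A fit like this fails in exactly the way described above when responses are flat or non-monotonic: no (PSE, JND) pair fits the data meaningfully.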
Figure 8 plots mean PSE as a function of initial target position for the three view conditions. The results were similar to those of Experiment 1, except that the biases in the world-relative view condition were smaller in Experiment 2. Judgments were most accurate in the heading-relative view condition, with mean PSE close to zero, −0.7°. The PSEs in the target-relative and world-relative view conditions were larger, averaging 7.8° and 6.7°, respectively. 
Figure 8
 
Mean PSEs from Experiment 2, expressed as heading offsets. Accurate performance would correspond to a PSE of 0°. Positive heading offsets correspond to biases in the direction of curvature. Error bars depict ±1 standard error.
An ANOVA on PSEs revealed a main effect of view condition (F(2, 16) = 81.15, p < 0.001) but no effect of initial target position (F(2, 16) = 2.99, p = 0.079), nor an interaction (F(1.30, 10.37) = 0.96, p = 0.38). PSEs were averaged across initial target position conditions for further comparisons between view conditions. We found that PSEs in the heading-relative view condition were significantly smaller than in the world-relative view condition (t(8) = 9.52, p < 0.001) or the target-relative view condition (t(8) = 9.68, p < 0.001). We also found that PSEs in the world-relative view condition were significantly smaller than in the target-relative view condition (t(8) = 2.50, p = 0.037). 
We compared PSEs from Experiments 1 and 2 separately for each view condition. Because initial target position had no effect, we first combined across the initial target conditions before conducting comparisons. The PSEs in the world-relative view condition were significantly smaller in Experiment 2 (t(31.17) = −3.40, p = 0.002), while there was no significant difference between experiments for the heading-relative (t(37.91) = −1.51, p = 0.14) or the target-relative (t(29.90) = −0.28, p = 0.78) view conditions. 
Figure 9 plots mean JND as a function of initial target position for the three view conditions. JNDs were again log-transformed for averaging and statistical analysis. Results were similar to those of Experiment 1, except that JNDs in the heading-relative condition were smaller in Experiment 2. Judgments were again most precise in the world-relative view condition, with a mean JND of 2.0°. The JNDs in the heading-relative and target-relative conditions were higher, averaging 3.0° and 4.8°, respectively. 
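Averaging log-transformed JNDs amounts to taking a geometric mean, which keeps occasional large thresholds from dominating the average. A minimal sketch:

```python
import numpy as np

def mean_log_jnd(jnds_deg):
    """Geometric mean of JND thresholds: average in log space,
    then transform back to degrees."""
    return float(np.exp(np.mean(np.log(jnds_deg))))
```

For example, `mean_log_jnd([2.0, 4.0, 8.0])` returns 4.0, whereas the arithmetic mean would be pulled up to about 4.7 by the largest threshold.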
Figure 9
 
Mean JND thresholds from Experiment 2. Error bars depict ±1 standard error.
An ANOVA on log JNDs revealed a main effect of both view condition (F(2, 16) = 8.79, p = 0.003) and initial target position (F(2, 16) = 4.11, p = 0.036) but no interaction (F(1.68, 13.45) = 0.82, p = 0.44). JNDs were averaged across initial target position conditions for further comparisons between view conditions. We found that JNDs in the world-relative view condition were significantly smaller than in the heading-relative view condition (t(8) = 2.72, p = 0.026) or the target-relative view condition (t(8) = 3.99, p = 0.004). No significant difference was found between JNDs in the heading-relative and target-relative view conditions (t(8) = 1.93, p = 0.09). 
As in the previous experiment, judgments were less precise for the least eccentric target location. Pairwise comparisons found that JNDs in the 7.2° initial target condition were significantly higher than in the 12.4° target condition (t(8) = 2.66, p = 0.029) but not significantly different from the 17.7° target condition (t(8) = 1.82, p = 0.11). No significant difference was found between JNDs in the 12.4° and 17.7° target conditions (t(8) = 0.91, p = 0.39). 
We compared JNDs from Experiments 1 and 2 for each view condition, after first averaging across initial target conditions. The JNDs in the heading-relative view condition were significantly smaller in Experiment 2 (t(52) = −6.22, p < 0.001), while there was no difference between experiments for the target-relative (t(52) = −0.58, p = 0.56) or the world-relative (t(42.91) = −1.85, p = 0.071) view conditions. 
Discussion
Experiment 2 tested whether increasing display duration to 1 s would allow observers to judge their future path based on retinal flow. If so, one would expect judgments to be invariant to view condition. However, differences between view conditions remained, and overall performance was similar to the previous experiment. Figure 10 shows the perceived intersection paths implied by the mean PSEs in each condition, plotted in the same way as Figure 6. In the heading-relative view condition, judgments showed little or no bias. In the other conditions, simulated paths had to have a heading offset of 6° or more to appear to be a path toward the target. Thus, increasing the display duration was not sufficient to eliminate the large perceptual biases in the target-relative and world-relative view conditions. 
Figure 10
 
Estimated perceived intersection paths based on the PSEs observed in Experiment 2, illustrated in the same way as Figure 6. Middle panels show a top-down view. Red lines show the PSE paths, gray regions show the PSE ± JND, dashed lines show the correct intersection path, and circles show the position of the target and observer. Right panels illustrate the optic flow for the PSE paths. The red line shows the future path, and the progression of headings is shown by circles.
The mean bias in the world-relative view condition was smaller than in Experiment 1, which could indicate a partial contribution of retinal flow. Longer displays would provide a more reliable rotational acceleration cue. If multiple self-motion cues were integrated according to their reliability, one would expect more contribution from retinal flow for longer displays and, therefore, less bias. A problem with this explanation is that we did not observe a similar improvement in accuracy for the target-relative view condition. Moreover, for a weighted cue model, the expected contribution from retinal flow would be greater for the target-relative view condition than for the world-relative view condition, because judgments were much less reliable overall. One would, therefore, expect less bias in the target-relative condition, contrary to our results. 
We believe that a more likely explanation for performance in the world-relative view condition involves the direction of instantaneous heading relative to the target. When a circular path of self-motion is simulated with no view rotation, as in the world-relative view condition, observers have difficulty perceiving path curvature (Saunders, 2010). If the path were perceived as approximately straight, then an intersection path would correspond to when the heading direction is aligned with the target. One might, therefore, expect judgments to depend on the direction of heading relative to the target. This is consistent with the PSE paths we observed in the world-relative view conditions. For the PSE path in Experiment 1, the heading direction started 2.4° inside the target and ended 1.6° outside the target. For the PSE path in Experiment 2, the heading direction started 5.3° inside the target and ended 2.6° outside the target. For both experiments, the PSE paths were those for which the heading passed by the target near the end of the trial. Thus, judgments in the world-relative view condition are consistent with use of relative heading direction rather than accurate extrapolation of a circular path. This could also account for the comparatively low JND thresholds, because the task would essentially be judging the direction of heading relative to fixation. 
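The heading-versus-target geometry invoked here can be sketched numerically. The snippet below computes the signed angle between instantaneous heading and target direction over time on a circular path; the start pose, speed, and radius are illustrative assumptions, not the experiment's parameters:

```python
import numpy as np

def heading_vs_target(v, radius, target, T, dt=0.05):
    """Signed angle (deg) between instantaneous heading and target direction
    over a trial, for a left turn of the given radius starting at the origin
    with heading along +y. Positive = target to the left of heading."""
    omega = v / radius                    # rate of heading change (rad/s)
    target = np.asarray(target, dtype=float)
    angles = []
    for t in np.arange(0.0, T + 1e-9, dt):
        phi = omega * t                   # heading rotation so far
        # position and tangent direction on the left-turning circle
        pos = np.array([radius * (np.cos(phi) - 1.0), radius * np.sin(phi)])
        heading = np.array([-np.sin(phi), np.cos(phi)])
        d = target - pos
        cross = heading[0] * d[1] - heading[1] * d[0]   # signed 2D cross product
        angles.append(np.degrees(np.arctan2(cross, float(np.dot(heading, d)))))
    return angles
```

For a target slightly inside or outside the circle, the computed angle can change sign during the trial, i.e., the heading passes across the target, as in the PSE paths described above.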
The target-relative view condition was again the most difficult, as indicated by both large biases and high thresholds. An anecdotal observation was that the simulated motion in this condition sometimes appeared to follow an “S”-shaped path; we noticed this phenomenon ourselves, and a number of observers reported it as well. We suspect that this illusion was due to the sign of simulated rotation changing over the course of a trial, which occurred when the heading offset was large enough that the heading direction passed across the target during a trial. Perceiving an “S”-shaped path would be consistent with use of rotation as a cue to path curvature (Saunders, 2010; Saunders & Neihorster, 2010). The illusion was not apparent in the other two view conditions, which is also consistent with this interpretation: in the world-relative condition, there was no simulated rotation, and in the heading-relative condition, simulated rotation was constant. 
The JND thresholds in the heading-relative view condition were lower than in Experiment 1, indicating that the longer stimuli led to more reliable judgments. These thresholds are comparable to those observed previously by Warren, Blackwell et al. (1991) and Warren, Mestre et al. (1991), whereas the thresholds from Experiment 1 were higher. The JNDs in the world-relative view condition were similar across Experiments 1 and 2, consistent with our hypothesis that observers treated this as a relative heading task. 
General discussion
Insufficiency of retinal flow
Our goal was to test whether human observers make use of a retinal flow strategy to perceive relative future path, as proposed by Kim and Turvey (1999) and Wann and Swapp (2000). If observers used such a strategy, similar performance would be expected across our view conditions. However, we found that judgments in the heading-relative view condition were much more accurate than in the other two view conditions. Although rotational acceleration within retinal flow could potentially allow accurate judgments of relative path, observers did not appear to utilize this information. 
In our experiments, observers judged their path relative to a target that was visible throughout the motion sequence. This task allows use of gaze-dependent strategies (Kim & Turvey, 1999; Wann & Land, 2000; Wann & Swapp, 2000), in contrast with other studies of circular path perception (Fajen & Kim, 2002; Kim & Turvey, 1998; Li & Cheng, 2011; Saunders, 2010; Warren, Blackwell et al., 1991; Warren, Mestre et al., 1991). Cutting et al. (1997) found that observers could make accurate judgments of linear heading relative to a fixated target, invariant to view rotation. We did not find analogous results for circular paths. Despite the fact that the target was visible and fixated, judgments of relative circular path showed large errors depending on view. 
For retinal flow to be equated across view conditions, observers had to successfully track the target with their gaze. One concern might be that fixation was not precisely maintained, in which case retinal flow would not be exactly matched across view conditions. However, variability due to pursuit could not explain why judgments in the target-relative and world-relative view conditions were much more biased than in the heading-relative condition. The target-relative condition requires the least pursuit eye movement to maintain fixation, because the target maintains a constant horizontal position. Variability due to pursuit would, therefore, be least in this condition, yet observers' performance was worst. In the other two view conditions, horizontal eye movements were required to maintain fixation, which might not have been perfectly accurate or precise. However, such variability would have been larger in the heading-relative condition, for which performance was most accurate. For paths perceived to be nearly intersecting (i.e., heading offset near the mean PSE), target motion on the screen was much slower in the world-relative view condition than in the heading-relative condition (see Figure 6). Thus, mismatched retinal flow due to pursuit errors cannot explain our main finding of large biases in the target-relative and world-relative view conditions. 
Over a longer span of time, rotational acceleration within retinal flow would have more cumulative effects, which might allow observers to better detect path errors in the target-relative and world-relative conditions. However, if reliable detection of rotational acceleration requires more than 1 s, this cue would have limited utility for real-time steering control. In Experiment 2, the time to passage of the target was 3 s at the start of a trial and was reduced to 2 s by the end of a trial. Thus, observers were simulated to have traveled a third of the distance to the target. This is a significant proportion of the total movement toward the target, yet it was not sufficient to detect large (>6°) path errors. 
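The proportion quoted here follows from simple arithmetic on the time-to-passage values, assuming constant simulated speed:

```python
def traveled_fraction(ttp_start, ttp_end):
    """Fraction of the initial distance to the target covered during a
    trial, assuming constant speed (distance to the target is
    proportional to time to passage)."""
    return (ttp_start - ttp_end) / ttp_start
```

Here `traveled_fraction(3.0, 2.0)` gives 1/3, the proportion quoted above.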
Egocentric direction and motion of target
Perceptual biases in the target-relative and world-relative view conditions could potentially be explained by differences in the egocentric motion of the target rather than differences in global optic flow. When an egocentric reference frame is available, such as from the boundaries of the screen in our experiment, global optic flow is not necessary to determine path error. Path error could be computed based on the egocentric direction and motion of the target relative to the reference frame (Wann & Land, 2000). Wilkie and Wann (2002, 2003) observed that observers could successfully steer to a target without any global optic flow and that a reference frame can influence steering even when global optic flow is available. 
Our results can distinguish these possibilities because we varied the location of the heading direction on the screen. In normal driving, heading direction is fixed relative to the reference frame of the windshield. The heading direction is implicitly defined even if no optic flow is available, and convergence of the target and heading direction could be inferred from the position and motion of the target relative to the reference frame. In our conditions, however, heading was not in a constant position on the screen, so convergence of the target toward the heading was different from the convergence of the target relative to the screen. 
Specifically, if judgments were based on the motion of the target relative to the center of the screen, one would expect an effect of target eccentricity in our heading-relative condition. In a normal driving situation, more eccentric targets on an intersection path would drift toward the center of the screen at a faster rate than less eccentric targets (Figure 11a). However, we varied heading while keeping curvature constant, so accurate intersection paths had the same target motion relative to the heading direction (Figure 11b). Considering only the motion of the target relative to the screen, the more eccentric target might appear to be passing to the right of the observer and the less eccentric target to the left of the observer. If judgments were based on target motion alone, perceived intersection paths would then be biased, with heading offset changing as a function of target eccentricity (Figure 11c). Contrary to this prediction, we found that perceived intersection paths converged toward the heading in a similar way regardless of screen position. Figure 11d shows the motion of the target for perceived intersection paths from Experiment 2, plotted in screen coordinates, for targets with different initial eccentricity. Perceived intersection paths depended on the target motion relative to the heading (as in Figure 11b), not relative to the screen (as in Figure 11c). This suggests that the differences we observed between view conditions are due to global optic flow rather than solely the egocentric direction and motion of the target. 
Figure 11
 
(a) Motion of a target on an intersection path in normal driving conditions, for targets with initial eccentricity of 7.2°, 12.4°, or 17.7°. Heading is at the center of the screen, and paths differ by amount of curvature. Rectangles show the position of the target on the screen at times 0 s, 0.5 s, and 1 s. Gray curved lines indicate the future trajectory of the target if the observer had continued on the same path. More eccentric targets move toward the center at a faster speed. (b) Intersection paths toward the same target locations but with varied heading and constant curvature. Circles indicate headings for the three paths. The horizontal motion of the targets is the same. (c) Non-intersecting paths with constant curvature that have similar target motion as in (a). Headings are further from the center than for accurate intersection paths. Dotted lines and arrows show the predicted bias. (d) Perceived intersection paths from the heading-relative view condition of Experiment 2. For all target eccentricities, judgments were close to veridical (dotted lines).
This aspect of our findings appears to conflict with results from steering studies that have observed effects of a reference frame (Wilkie & Wann, 2002, 2003). Based on these studies, we had expected perceived self-motion to be biased toward the center of the display, which was not observed. We speculate that the reference frame effects in these studies may have been specific to a steering task (e.g., tendency to center a target) rather than reflecting ability to perceive future relative path. 
The egocentric motion of the target could potentially be used to judge relative path error without comparison to a reference frame. In normal driving, when an observer is on a circular path that will intersect a target, the target drifts horizontally at a constant rate in egocentric coordinates (Wann & Land, 2000). This could be the basis for judging relative path error. If the target accelerates in a leftward direction, the future path would pass to the left of the target, and vice versa. Use of this cue could potentially explain the difference between our heading-relative and target-relative view conditions. The heading-relative condition is analogous to normal driving, and horizontal acceleration of the target provides a valid cue for relative path error. In the target-relative condition, there was no horizontal drift of the target, so this cue was absent. On the other hand, this cue would also be valid in our world-relative view condition, for which observers showed large biases. In the world-relative condition, a target on the observer's future path also drifts horizontally at a constant rate, in the opposite direction but at the same rate as in the heading-relative view condition. Horizontal acceleration of the target would, therefore, provide a valid cue in the world-relative view condition. However, when target drift was constant (accurate intersection path), observers perceived themselves to be understeering. For the paths that were perceived to intersect the target, target drift changed sign from leftward to rightward, or vice versa. This situation should make it easy to detect that the horizontal motion of the target is not constant, yet observers did not perceive the large path error. Thus, in the world-relative view condition, observers did not appear to utilize target drift as a direct cue to path error. 
We cannot rule out the possibility, though, that observers used a different strategy in the heading-relative view condition and that target drift contributed to accurate judgments. 
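The constant-drift property of this cue follows from the tangent-chord angle of a circle: an on-path target's bearing relative to the instantaneous heading equals half the arc remaining to it, so the bearing changes at half the heading's rotation rate. A minimal numerical check (values illustrative, not the experiment's parameters):

```python
import numpy as np

def target_bearing(arc_remaining):
    """Bearing of an on-path target relative to the instantaneous heading:
    the tangent-chord angle, half the arc remaining to the target (radians)."""
    return arc_remaining / 2.0

# On an intersecting circular path, arc_remaining shrinks at the heading's
# rotation rate omega, so the bearing drifts at the constant rate omega / 2.
omega, arc0 = 0.2, 0.6                    # rotation rate (rad/s), initial arc (rad)
ts = np.arange(0.0, 1.01, 0.25)
bearings = [target_bearing(arc0 - omega * t) for t in ts]
drift = np.diff(bearings) / 0.25          # constant: -omega / 2
```

With a view direction fixed in the world, heading itself rotates at omega, so the target's bearing instead drifts at +omega/2 — the opposite direction at the same rate, matching the description above.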
Role of extraretinal eye movement signals
Another factor that distinguishes our three view conditions is the relationship between extraretinal eye movement signals and the rotational component within retinal flow. In the world-relative view condition, all rotation within the retinal flow was due to pursuit eye movements. In the heading-relative view condition, there was a rotational component due to eye movements and also a constant rotational component due to simulated view rotation. In the target-relative view condition, rotation in retinal flow was due to a combination of eye movements and a variable amount of simulated view rotation. 
A concern might be that an unnatural conflict between extraretinal signals and rotation could interfere with normal processing of optic flow and, thereby, impair performance. However, this could not easily explain our results. In the world-relative view condition, there was no conflict between extraretinal signals and the amount of rotation in retinal flow, yet judgments were highly biased. Conversely, in the heading-relative view condition, there was a conflict between extraretinal signals and rotation, yet judgments were comparatively accurate. The presence of such a conflict does not directly correspond to poor performance in our experiments. 
Saunders and Neihorster (2010) have proposed an alternate interpretation of the conflict created by simulated rotation conditions, which takes into account path curvature as well as extraretinal signals. When traveling on a straight path, view rotation would usually be due to pursuit eye and head movements, so one would expect a correspondence between the rotational component of retinal flow and the rotation specified by extraretinal signals. Simulated view rotation without an accompanying extraretinal signal would, therefore, be an unnatural situation. However, when traveling on a curved path, the body typically rotates with change in heading, which contributes a rotational component that does not correspond to pursuit eye and head movements. The absence of simulated view rotation would then be an unnatural situation for travel on a curved path. By this interpretation, a discrepancy between the rotation specified by extraretinal signals and the rotation within retinal flow does not in itself present a cue conflict situation. Rather, an unnatural cue conflict arises when this discrepancy (i.e., the simulated view rotation) does not match path curvature. This interpretation is consistent with previous observations that simulated rotation can produce illusory path curvature (e.g., Banks et al., 1996; Ehrlich et al., 1998; Royden et al., 1992) and also that the absence of simulated rotation can make it difficult to perceive path curvature (Li & Cheng, 2011; Saunders, 2010). 
We propose that the rotation within egocentric optic flow was the crucial factor differentiating our view conditions and that observers were able to use extraretinal information to compensate for the effect of pursuit eye movements. A comparison with a recent study by Li and Cheng (2011) provides supporting evidence. Li and Cheng tested circular path judgments for displays similar to our three view conditions but had observers fixate a stationary point in the center of the screen. Because there were no eye movements, the retinal flow in their heading-relative and world-relative view conditions differed from that in our study. However, their findings were consistent with ours: judgments were accurate in the heading-relative view condition and consistent with a straight path in the world-relative view condition. This suggests that perception of circular path is a function of egocentric optic flow and does not strongly depend on where an observer fixates during self-motion. 
An extraretinal eye movement signal could also provide a direct cue to whether an observer's current path will intersect a target. If observers fixate the target, then the rate of eye movement indicates the rate of target drift, which provides a cue to relative path error (Wann & Land, 2000; see above). As described previously, use of this cue could potentially explain the difference between results in our heading-relative and target-relative conditions but cannot explain the larger biases observed in the world-relative view condition. 
Velocity field extrapolation
Previous evidence suggests that circular path perception is based on information available from instantaneous optic flow rather than optic flow over extended time (Li & Cheng, 2011; Saunders, 2010; Warren, Blackwell et al., 1991; Warren, Mestre et al., 1991). In this section, we describe two models that extrapolate instantaneous optic flow to estimate a future circular path and how this class of model could account for our results. 
Future path could be extrapolated from instantaneous observer translation and rotation, which are specified by a velocity field, by assuming that heading changes at the rate of rotation. In many situations, such as normal driving, change in heading is accompanied by rotation of the body, so rotation that is not due to pursuit eye and head movements would be a reliable cue for change in heading (Saunders, 2010; Saunders & Niehorster, 2010). Perceived future path would then be a circle tangent to the direction of heading, with curvature given by the ratio of rotation rate to observer speed. Li and Cheng (2011) proposed such a model. 
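The extrapolated circle can be computed in closed form from instantaneous heading, speed, and rotation rate. The sketch below (in Python) is illustrative only; the function name and coordinate conventions are ours, and it assumes travel on a flat ground plane with the observer starting at the origin:

```python
import numpy as np

def extrapolated_path(heading_deg, speed, rotation_deg_s, t_max=10.0, n=200):
    """Extrapolate a future circular path by assuming that heading changes
    at the view rotation rate.  Returns (x, z) ground-plane coordinates;
    the observer starts at the origin facing along +z, and positive
    rotation turns the heading toward +x.
    """
    phi0 = np.radians(heading_deg)      # initial heading angle
    omega = np.radians(rotation_deg_s)  # rotation rate (rad/s)
    t = np.linspace(0.0, t_max, n)
    if abs(omega) < 1e-9:
        # Zero rotation: zero curvature, so the path is a straight line.
        return speed * t * np.sin(phi0), speed * t * np.cos(phi0)
    R = speed / omega                   # signed turning radius (1/curvature)
    # Integrating the velocity (v*sin(phi0 + omega*t), v*cos(phi0 + omega*t))
    # gives a circle of radius |R| tangent to the initial heading.
    x = R * (np.cos(phi0) - np.cos(phi0 + omega * t))
    z = R * (np.sin(phi0 + omega * t) - np.sin(phi0))
    return x, z
```

For example, with a speed of 1 m/s and a rotation rate of 90°/s, the extrapolated path is a circle of radius 2/π m tangent to the initial heading.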
A velocity field could also be extrapolated directly, leading to similar results. If an observer's body remains in constant alignment with heading during travel on a circular path along the ground (as in our heading-relative condition), then the egocentric velocity field remains constant over time. Future visual trajectories of objects can, therefore, be extrapolated by integrating the velocity field to obtain a family of flow lines. Lee and Lishman (1977) defined the locomotor flow line to be the unique flow line that intersects the observer and proposed that the locomotor flow line could be the basis for perception of future path. In the case of travel along a ground plane, the locomotor flow line corresponds to the circle obtained by extrapolating instantaneous observer translation and rotation. The difference between these models is whether the extrapolated path is reconstructed from estimates of heading and rotation derived from a velocity field or by directly integrating a velocity field. We will consider these models together. 
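The flow-line version of the model can be illustrated numerically: because the egocentric velocity field is constant under the stated condition, stepping a point forward through the field traces a flow line, and the flow line starting at the observer is the locomotor flow line. A minimal sketch (our own illustration, with hypothetical names), using simple Euler integration:

```python
import numpy as np

def integrate_flow_line(heading_deg, speed, rotation_deg_s, dt=0.001, t_max=2.0):
    """Trace the locomotor flow line by Euler-integrating the velocity
    field: the flow direction rotates at the view rotation rate, so the
    integrated trajectory is the same circle obtained by closed-form
    extrapolation from heading and rotation.
    """
    phi = np.radians(heading_deg)
    omega = np.radians(rotation_deg_s)
    pos = np.zeros(2)                   # (x, z), observer at the origin
    path = [pos.copy()]
    for _ in range(int(round(t_max / dt))):
        # Step along the current flow direction, then rotate it.
        pos = pos + dt * speed * np.array([np.sin(phi), np.cos(phi)])
        phi += omega * dt
        path.append(pos.copy())
    return np.array(path)
```

Up to discretization error, integrating the field reproduces the circle obtained analytically, which is why the two formulations can be treated together.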
Extrapolation of a velocity field predicts accurate performance in our heading-relative view condition but not in the target-relative and world-relative view conditions. For such a method to be valid, the crucial requirement is that heading direction changes at the same rate as view rotation. If this is true, then rotation provides an accurate cue to path curvature, and the flow lines defined by the instantaneous velocity field accurately predict the future visual trajectories of objects. In the target-relative and world-relative conditions, view rotation does not match path curvature, so extrapolation of the velocity field predicts an incorrect future path. 
To assess whether this could account for the perceptual biases observed in Experiments 1 and 2, we computed extrapolated paths using the heading and rotation parameters that were perceived to be intersection paths. We used the instantaneous heading and rotation at the final moment of a trial as the basis for extrapolation. Because results were invariant to target position, we consider only the 7° initial target condition, for which the correct intersection path has 0° initial heading. Figure 12 shows the resulting final target locations and extrapolated paths, as well as the actual future paths. In all cases, the extrapolated paths pass near the target. In the target-relative and world-relative conditions, the extrapolated paths pass much closer to the target than the actual future paths. Thus, extrapolation of instantaneous optic flow provides a plausible explanation for our results. 
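The left/right judgment itself can be sketched by sampling the extrapolated path and checking which side of the path the target lies on at closest approach. This is a hypothetical implementation of ours, not a reconstruction of the exact analysis behind Figure 12:

```python
import numpy as np

def pass_side(target, heading_deg, speed, rotation_deg_s,
              t_max=20.0, n=4000, tol=0.01):
    """Return -1 if the extrapolated path passes to the left of the
    target, +1 if to the right, or 0 if it passes within `tol` of it.
    The path is sampled densely, and the side is taken from the 2D cross
    product of the local heading with the offset to the target.
    """
    phi0 = np.radians(heading_deg)
    omega = np.radians(rotation_deg_s)
    t = np.linspace(0.0, t_max, n)
    phi = phi0 + omega * t
    if abs(omega) < 1e-9:
        x, z = speed * t * np.sin(phi0), speed * t * np.cos(phi0)
    else:
        R = speed / omega
        x = R * (np.cos(phi0) - np.cos(phi))
        z = R * (np.sin(phi) - np.sin(phi0))
    d = np.hypot(x - target[0], z - target[1])
    i = int(np.argmin(d))
    if d[i] < tol:
        return 0                        # path effectively hits the target
    # Cross product of heading (sin phi, cos phi) with the target offset:
    # negative means the target lies to the right of the heading, so the
    # path passes to the target's left.
    cross = (np.sin(phi[i]) * (target[1] - z[i])
             - np.cos(phi[i]) * (target[0] - x[i]))
    return -1 if cross < 0 else 1
```

For a straight path headed straight ahead, a target offset to the right yields -1 (path passes left of it), while a rightward-curving path yields +1 for a target off to the left.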
Figure 12
 
Locomotor flow lines of the perceived intersection paths from Experiments 1 and 2. The mean PSEs in each condition were used to estimate the perceived intersection paths. Arrows show the velocity field at the last moment of a trial, red lines show locomotor flow lines derived from the velocity field, and gray lines show actual future paths. In all cases, the locomotor flow lines pass near the target. In the target-relative and world-relative conditions, the locomotor flow lines pass much closer to the target than the actual future paths.
In the target-relative view condition, the perceived intersection paths estimated from judgments had little view rotation, so it is hard to distinguish this strategy from an alternate strategy based solely on instantaneous heading. Observers are capable of judging their heading relative to a target even in the presence of rotation (Li & Warren, 2000). If observers had simply judged whether their heading direction was left or right of the target in our target-relative view condition, ignoring rotation, the predicted biases would be very similar to those of a model that extrapolates based on both heading and rotation. Our JND results argue against this explanation. If responses in the target-relative view condition were based solely on the direction of heading relative to the target, then one would expect performance similar to that in the world-relative condition, including similar precision. However, we observed much larger JNDs in the target-relative condition. A model that extrapolates based on observer heading and rotation could explain the difference in precision between our conditions. Figure 13 shows extrapolated paths in the target-relative and world-relative view conditions for various final heading directions. In the target-relative condition, the simulated view rotates to cancel the horizontal motion of the target. Because of this rotation, the extrapolated paths curve toward the target and, consequently, pass closer to the target than in the world-relative condition. One would, therefore, expect larger discrimination thresholds when heading direction is varied, consistent with our findings. 
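The precision argument can be made quantitative by comparing closest-approach distances. In the illustrative sketch below (the parameters and function name are ours, not taken from the experiments), a path with the same heading error passes closer to the target when the extrapolation includes rotation toward the target, as in the target-relative condition:

```python
import numpy as np

def miss_distance(target, heading_deg, speed, rotation_deg_s,
                  t_max=20.0, n=4000):
    """Closest approach of the extrapolated path (straight if rotation is
    zero, circular otherwise) to a target on the ground plane."""
    phi0 = np.radians(heading_deg)
    omega = np.radians(rotation_deg_s)
    t = np.linspace(0.0, t_max, n)
    if abs(omega) < 1e-9:
        x, z = speed * t * np.sin(phi0), speed * t * np.cos(phi0)
    else:
        R = speed / omega
        x = R * (np.cos(phi0) - np.cos(phi0 + omega * t))
        z = R * (np.sin(phi0 + omega * t) - np.sin(phi0))
    return float(np.min(np.hypot(x - target[0], z - target[1])))

# Same 5 deg heading error toward the right of a target 8 m ahead.
# Rotation toward the target shrinks the miss distance, which would make
# intersecting and non-intersecting paths harder to discriminate.
straight_miss = miss_distance((0.0, 8.0), 5.0, 1.0, 0.0)    # world-relative
curved_miss = miss_distance((0.0, 8.0), 5.0, 1.0, -2.0)     # target-relative
```

With these hypothetical parameters, the straight path misses the target by about 0.7 m, while the path curving toward the target misses by a smaller margin, consistent with the larger JNDs predicted for the target-relative condition.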
Figure 13
 
Extrapolated paths based on instantaneous heading and rotation for (top) the world-relative and (bottom) the target-relative view conditions, for various heading directions. In the world-relative view condition, there is no rotation, so extrapolated paths are straight lines toward the heading. In the target-relative view condition, the displays had simulated view rotation in the direction of the target, so extrapolated paths are curved and pass closer to the target than a straight path. It would be comparatively harder to distinguish the relative path error in the target-relative view condition, which predicts larger JND thresholds.
Conclusion
Judgments of future circular path relative to a visible target were highly dependent on how the simulated view rotated during movement, which is inconsistent with use of retinal motion. The large perceptual biases observed in the target-relative and world-relative view conditions cannot be explained as inaccurate visual pursuit, and the invariance to target eccentricity suggests that judgments were based on global optic flow rather than egocentric direction and motion of the target. We found no evidence that observers are able to use acceleration within retinal flow to judge relative path, as proposed by Wann and Swapp (2000). Rather, our results are consistent with strategies based on instantaneous optic flow. 
We tested path judgments rather than steering performance in order to isolate the ability to perceive future circular path relative to a target. Steering to a target likely involves factors other than perceived path error. For example, observers might prefer to approach a target on a straight path and, therefore, oversteer initially to allow a later straight path. Consequently, steering performance may not provide a direct measure of perceptual capability. On the other hand, it is possible that the visual processing that underlies steering control has different capabilities, which are not fully evidenced by direct perceptual judgments. The question of whether our results generalize to steering control is an important problem for further work. 
Acknowledgments
This work was supported by a grant from the Hong Kong Research Grants Council, GRF HKU-750209H. We thank Diederick Niehorster and two anonymous reviewers for their helpful comments. 
Commercial relationships: none. 
Corresponding author: Jeff Saunders. 
Email: jsaun@hku.hk. 
Address: Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong SAR. 
References
Banks M. S. Ehrlich S. M. Backus B. T. Crowell J. A. (1996). Estimating heading during real and simulated eye movements. Vision Research, 36, 431–443. [CrossRef] [PubMed]
Crowell J. A. Banks M. S. Shenoy K. V. Andersen R. A. (1998). Visual self-motion perception during head turns. Nature Neuroscience, 1, 732–737. [CrossRef] [PubMed]
Cutting J. E. Vishton P. M. Fluckiger M. Baumberger B. Gerndt J. D. (1997). Heading and path information from retinal flow in naturalistic environments. Perception & Psychophysics, 59, 426–441. [CrossRef] [PubMed]
Ehrlich S. M. Beck D. M. Crowell J. A. Freeman T. C. A. Banks M. S. (1998). Depth information and perceived self-motion during simulated gaze rotation. Vision Research, 38, 3129–3145. [CrossRef] [PubMed]
Fajen B. R. (2001). Steering toward a goal by equalizing taus. Journal of Experimental Psychology: Human Perception and Performance, 27, 953–968. [CrossRef] [PubMed]
Fajen B. R. Kim N.-G. (2002). Perceiving curvilinear heading in the presence of moving objects. Journal of Experimental Psychology: Human Perception and Performance, 28, 1100–1119. [CrossRef] [PubMed]
Gibson J. J. (1958). Visually controlled locomotion and visual orientation in animals. British Journal of Psychology, 49, 182–194.
Kim N.-G. Turvey M. T. (1998). Visually perceiving heading on circular and elliptical paths. Journal of Experimental Psychology: Human Perception and Performance, 24, 1690–1704. [CrossRef] [PubMed]
Kim N.-G. Turvey M. T. (1999). Eye movements and a rule for perceiving direction of heading. Ecological Psychology, 11, 233–248. [CrossRef]
Lee D. N. Lishman R. (1977). Visual control of locomotion. Scandinavian Journal of Psychology, 18, 224–230. [CrossRef] [PubMed]
Li L. Cheng J. C. K. (2011). Perceiving path from optic flow. Journal of Vision, 11, (1):22, 1–15, http://www.journalofvision.org/content/11/1/22, doi:10.1167/11.1.22. [PubMed] [Article] [CrossRef] [PubMed]
Li L. Warren W. H. (2000). Perception of heading during rotation: Sufficiency of dense motion parallax and reference objects. Vision Research, 40, 3873–3894. [CrossRef] [PubMed]
Royden C. S. Banks M. S. Crowell J. A. (1992). The perception of heading during eye movements. Nature, 360, 583–585. [CrossRef] [PubMed]
Royden C. S. Cahill J. A. Conti D. M. (2006). Factors affecting curved versus straight path heading perception. Perception & Psychophysics, 68, 184–193. [CrossRef] [PubMed]
Royden C. S. Crowell J. A. Banks M. S. (1994). Estimating heading during eye movements. Vision Research, 34, 3197–3214. [CrossRef] [PubMed]
Saunders J. A. (2010). View rotation is used to perceive path curvature from optic flow. Journal of Vision, 10, (13):25, 1–15, http://www.journalofvision.org/content/10/13/25, doi:10.1167/10.13.25. [PubMed] [Article] [CrossRef] [PubMed]
Saunders J. A. Backus B. T. (2006). The accuracy and reliability of perceived depth from linear perspective as a function of image size. Journal of Vision, 6, (9):7, 933–954, http://www.journalofvision.org/content/6/9/7, doi:10.1167/6.9.7. [PubMed] [Article] [CrossRef]
Saunders J. A. Niehorster D. C. (2010). A Bayesian model for estimating observer translation and rotation from optic flow and extra-retinal input. Journal of Vision, 10, (10):7, 1–22, http://www.journalofvision.org/content/10/10/7, doi:10.1167/10.10.7. [PubMed] [Article] [CrossRef] [PubMed]
Turano K. A. Wang X. (1994). Visual discrimination between a curved and straight path of self motion: Effects of forward speed. Vision Research, 34, 107–114. [CrossRef] [PubMed]
Wann J. P. Land M. (2000). Steering with or without the flow: Is the retrieval of heading necessary? Trends in Cognitive Science, 4, 319–324. [CrossRef]
Wann J. P. Swapp D. K. (2000). Why you should look where you are going. Nature Neuroscience, 3, 647–648. [CrossRef] [PubMed]
Warren W. H. Blackwell A. W. Kurtz K. J. Hatsopoulos N. Kalish M. Griesar W. (1991). On the sufficiency of the optical velocity field for perception of heading. Biological Cybernetics, 65, 311–320. [CrossRef] [PubMed]
Warren W. H. Mestre D. R. Blackwell A. W. Morris M. W. (1991). Perception of circular heading from optical flow. Journal of Experimental Psychology: Human Perception and Performance, 17, 28–43. [CrossRef] [PubMed]
Wilkie R. M. Wann J. P. (2002). Driving as night falls: The contribution of retinal flow and visual direction to the control of steering. Current Biology, 12, 2014–2017. [CrossRef] [PubMed]
Wilkie R. M. Wann J. P. (2003). Controlling steering and judging heading: Retinal flow, visual direction, and extraretinal information. Journal of Experimental Psychology: Human Perception and Performance, 29, 363–378. [CrossRef] [PubMed]
Figure 1
 
Illustration of how retinal flow over time could be used to determine whether an observer's future path would pass to the left or right of a fixated target (Wann & Swapp, 2000). (a) If the observer is on path to intersect the fixated target, retinal trajectories of objects over time form straight lines. (b) If the current path will pass to the right of the target, retinal trajectories are curved, accelerating in a clockwise direction. (c) If the current path will pass to the left of the target, retinal trajectories are curved in the opposite direction, accelerating in a counterclockwise direction.
Figure 2
 
The three view conditions. (a) In the heading-relative view condition, the simulated view direction rotated with heading, as in normal driving. (b) In the target-relative view condition, view direction was rotated to counter the horizontal drift of the target, so that the target maintained a fixed horizontal position. (c) In the world-relative view condition, the simulated view remained fixed relative to the environment. (d) All three conditions produce the same retinal flow when the target is fixated.
Figure 3
 
Static view of a sample stimulus.
Figure 4
 
Mean PSEs from Experiment 1, expressed as heading offsets. Heading offset is defined as the difference between the initial heading of a simulated path and the initial heading corresponding to an intersection path. Accurate performance would correspond to a PSE of 0°. Positive heading offsets correspond to biases in the direction of curvature. Error bars depict ±1 standard error.
Figure 5
 
Mean JND thresholds from Experiment 1. Error bars depict ±1 standard error.
Figure 6
 
Estimated perceived intersection paths for the three view conditions based on the PSEs observed in Experiment 1. Middle panels show a top-down view. Red lines show the PSE paths, gray regions depict PSE ± JND, and the dashed lines show the correct intersection path. Open circles show the target position, and filled circles show the final position of the observer. Right panels illustrate the optic flow for the PSE paths. The red line shows the future path. The final heading is shown by a black circle, and gray circles show the progression of heading over the course of the trial for the target-relative and world-relative conditions.
Figure 7
 
Psychometric functions for two observers (top and bottom) for whom PSEs and JNDs could not be fit in the target-aligned condition. Dashed lines show psychometric functions consistent with mean PSEs and JNDs from the other nine observers.
Figure 8
 
Mean PSEs from Experiment 2, expressed as heading offsets. Accurate performance would correspond to a PSE of 0°. Positive heading offsets correspond to biases in the direction of curvature. Error bars depict ±1 standard error.
Figure 9
 
Mean JND thresholds from Experiment 2. Error bars depict ±1 standard error.
Figure 10
 
Estimated perceived intersection paths based on the PSEs observed in Experiment 2, illustrated in the same way as Figure 6. Middle panels show a top-down view. Red lines show the PSE paths, gray regions show the PSE ± JND, dashed lines show the correct intersection path, and circles show the position of the target and observer. Right panels illustrate the optic flow for the PSE paths. The red line shows the future path, and progression of headings is shown by circles.
Figure 11
 
(a) Motion of a target on an intersection path in normal driving conditions, for targets with initial eccentricity of either 7.2°, 12.4°, or 17.7°. Heading is at the center of the screen, and paths differ by amount of curvature. Rectangles show the position of the target on the screen at times 0 s, 0.5 s, and 1 s. Gray curved lines indicate the future trajectory of the target if the observer had continued on the same path. The eccentric target moves toward the center at a faster speed. (b) Intersection paths toward the same target locations but with varied heading and constant curvature. Circles indicate headings for the three paths. The horizontal motion of the targets is the same. (c) Non-intersecting paths with constant curvature that have similar target motion as in (a). Headings are further from the center than for accurate intersection paths. Dotted lines and arrows show the predicted bias. (d) Perceived intersection paths from the heading-relative view condition of Experiment 2. For all target eccentricities, judgments were close to veridical (dotted lines).