Article | March 2012
Effects of reference objects and extra-retinal information about pursuit eye movements on curvilinear path perception from retinal flow
Journal of Vision March 2012, Vol.12, 12. doi:10.1167/12.3.12
Joseph C. K. Cheng, Li Li; Effects of reference objects and extra-retinal information about pursuit eye movements on curvilinear path perception from retinal flow. Journal of Vision 2012;12(3):12. doi: 10.1167/12.3.12.

      Download citation file:


© ARVO (1962-2015); The Authors (2016-present)
Abstract

We have previously shown that when traveling on a circular path, observers use the rotation in the retinal velocity field for path curvature estimation and recover their path of forward travel relative to their perceived instantaneous heading (L. Li & J. C. K. Cheng, 2011). Here, we examined the contribution of reference objects and extra-retinal information about pursuit eye movements to curvilinear path perception. In Experiment 1, the display simulated an observer traveling on a circular path over a textured ground with and without tall posts while looking at a fixed target on the future path, along heading, or along a fixed axis in the world. We found that reference objects did not help path perception. In Experiment 2, extra-retinal signals about pursuit eye movements were introduced in two viewing conditions: one that corresponded to the natural case of traveling on a circular path when the body orientation is aligned with the instantaneous heading and one that corresponded to the unnatural case of traveling when the body orientation is fixed relative to the world. We found that extra-retinal signals support accurate path perception only for the natural case of self-motion when the body orientation is aligned with heading such that pursuit compensation helps stabilize the heading in the body-centric coordinate system.

Introduction
The ability to accurately perceive and control self-motion in the world is essential for human survival. As the projected retinal image of objects in the environment undergoes a lawful transformation when one moves in the world (i.e., optic flow), it has long been proposed that humans use information from optic flow to perceive and control self-motion (Gibson, 1950). The future path and the instantaneous direction of travel (i.e., heading) are two key components of one's self-motion in the world. They coincide when one travels on a straight path but diverge when one follows a curved path. In the latter case, heading is along the tangent of the path of travel. Research over the last three decades has almost exclusively focused on examining how people perceive heading from optic flow (e.g., Crowell & Banks, 1993; Cutting, Vishton, Flückiger, Baumberger, & Gerndt, 1997; Grigo & Lappe, 1999; Li & Warren, 2000, 2004; Li, Chen, & Peng, 2009; Li, Sweet, & Stone, 2006; Royden, Banks, & Crowell, 1992; Stone & Perrone, 1997; van den Berg & Brenner, 1994a, 1994b; van den Berg, 1992; Warren, Morris, & Kalish, 1988). Very few studies have examined how people perceive their path of travel from optic flow (e.g., Li & Cheng, 2011; van den Berg, Beintema, & Frens, 2001; Warren, Mestre, Blackwell, & Morris, 1991), and even fewer studies have examined the relationship between heading and path perception (e.g., Li & Cheng, 2011; Wilkie & Wann, 2006). In fact, many studies have confused heading with path perception and used a task in which participants were asked to judge their future path of locomotion to measure heading perception (e.g., Cutting et al., 1997; Saunders, 2010; van den Berg, 1996; van den Berg et al., 2001; Warren, Blackwell, Kurtz, Hatsopoulos, & Kalish, 1991).
Several theories have been proposed that argue for direct path perception from optic flow independent of heading perception. Specifically, it has been proposed that observers can use the locomotor flow line (Lee & Lishman, 1977) in the velocity field, the motion trajectories of environmental points (Kim & Turvey, 1999; Wann & Swapp, 2000), vector normals of velocity vectors on the ground, or the reversal boundary in the flow field (Warren, Mestre et al., 1991) to recover their path of forward travel from optic flow without perceiving heading (see Li & Cheng, 2011, for a detailed review). Nevertheless, using displays that simulated an observer traveling on a circular path with different viewing directions, Li and Cheng (2011) found that accurate path perception required the simulated observer viewing direction (i.e., the “camera” in the display) to be aligned with heading such that the rotational flow on the retina was all due to the path rotation. In contrast to path perception, Li and Cheng found that heading perception was accurate even when the rotational flow was due to combined path and simulated eye rotations. Furthermore, path perception with a textured ground display was comparable to that with a dynamic random-dot ground display in which dots were periodically redrawn to remove motion trajectories of points on the ground and self-displacement information in the environment. Based on these findings, Li and Cheng proposed that observers use the rotation in the retinal velocity field to estimate path curvature and recover their path of travel using their perceived heading as the reference direction to anchor the curving trajectory (Figure 1a). 
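The core of this proposal can be stated computationally: the rotation rate ω in the retinal velocity field, divided by the translation speed v, gives the path curvature (κ = ω/v), and the circular path is then laid out tangent to the perceived heading. The Python sketch below is our illustration of this idea, not code from the study; the sample values (a 6°/s rotation at 3 m/s) are illustrative:

```python
import math

def path_from_retinal_rotation(omega_deg, speed, arc_lengths):
    """Sketch of Li and Cheng's (2011) proposal: estimate path curvature
    from the rotation rate in the retinal velocity field, then lay the
    circular path out tangent to the perceived heading (here, +z)."""
    kappa = math.radians(omega_deg) / speed  # curvature (1/m) = rotation rate / speed
    points = []
    for s in arc_lengths:                    # arc length traveled along the path (m)
        theta = kappa * s                    # angle subtended at the circle's center
        # Path anchored at the origin, tangent to the heading direction (+z):
        x = (1 - math.cos(theta)) / kappa if kappa else 0.0  # lateral offset
        z = math.sin(theta) / kappa if kappa else s          # distance along heading
        points.append((x, z))
    return kappa, points

# Illustrative values: a 6 deg/s rotation at 3 m/s gives kappa of about
# 0.035 1/m, i.e., a circle of radius of about 28.6 m.
kappa, pts = path_from_retinal_rotation(6.0, 3.0, [0.0, 5.0, 10.0])
```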
Figure 1
 
Illustrations of the theories for curvilinear path perception: (a) Estimating path curvature from the rotation and translation components in retinal flow and then recovering the curvilinear path relative to heading, (b) estimating path curvature from the change of heading relative to a fixed axis in the world (e.g., the Z-axis) and then recovering the curvilinear path relative to that axis, (c) recovering the curvilinear path by updating the heading with respect to a reference object in the world, and (d) recovering the curvilinear path by updating self-displacements with respect to a reference object in the world.
The use of rotation in the velocity field for path curvature estimation is also supported by the path judgment performance when the simulated observer viewing direction is fixed relative to the world while traveling on a circular path. In this case, there is no observer rotation in the display and the flow field is radial and does not contain any rotational component, which is similar to traveling on a straight path (see 1 for the mathematical analysis). However, unlike traveling on a straight path in which the focus of expansion (FOE) is fixed on the image plane due to the fixed heading direction in the world, the FOE on the image plane drifts due to the changing heading direction in the world when traveling on a circular path (Figure 1b). Although the path rotation can still be estimated from the drift of the FOE over time, Li and Cheng (2011) found that observers perceive traveling on a straight rather than a circular path in this case, showing that observers rely on the rotation in the velocity field for path curvature estimation. This finding was replicated by Saunders (2010). However, Saunders interpreted the results as an indication of the use of the locomotor flow line in the velocity field (Lee & Lishman, 1977) for path perception, a strategy that has been empirically tested and shown not to be supported by behavioral data (Li & Cheng, 2011; Warren, Mestre et al., 1991; see details in the General discussion section). 
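The drifting FOE described above is easy to illustrate with a small simulation: when the camera orientation is fixed in the world, the heading (and hence the FOE under a pinhole projection) moves across the image plane as the heading rotates with the path, whereas for straight-path travel the FOE stays at a fixed image location. The sketch below is our own illustration of that geometry, not code from the study:

```python
import math

def foe_positions(omega_deg_s, duration_s, dt=0.5, focal=1.0):
    """For a camera fixed in the world, the heading rotates with the path
    at omega, so the FOE drifts across the image plane. Returns the image
    x-coordinates of the FOE over time (pinhole projection)."""
    positions = []
    t = 0.0
    while t <= duration_s + 1e-9:
        azimuth = math.radians(omega_deg_s * t)  # heading angle vs. camera axis
        positions.append(focal * math.tan(azimuth))
        t += dt
    return positions

straight = foe_positions(0.0, 1.5)  # straight path: FOE fixed at the center
curved = foe_positions(6.0, 1.5)    # 6 deg/s path rotation: FOE drifts steadily
```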
Optic flow vs. retinal flow
In most cases of travel in the world, one's body orientation is aligned with one's heading, which rotates with the path rotation. Assuming no eye/head movement, the rotation in the optic flow field is thus all due to the path rotation. However, optic flow is normally detected by a moving eye in a moving head. Eye/head rotation causes the rotational flow on the retina to no longer correspond to the path rotation. The question arises as to how the visual system disentangles the sources of rotation to accurately perceive the path of travel from a compound retinal flow pattern.
For traveling on a straight path, eye/head rotations alter the radial flow field and shift the FOE away from the true heading direction. Accurate heading perception in this case requires the removal of rotation in retinal flow (Regan & Beverly, 1982), a computation that is mathematically possible using the information from a single 2D retinal velocity field (e.g., Bruss & Horn, 1983; Cutting, 1996; Fermuller & Aloimonos, 1995; Heeger & Jepson, 1990; Hildreth, 1992; Longuet-Higgins & Prazdny, 1980; Rieger & Lawton, 1985). Indeed, psychophysical studies show that for translation on a straight path with simulated eye rotation, observers can accurately estimate heading using information solely from retinal flow (e.g., Cutting et al., 1997; Grigo & Lappe, 1999; Li & Warren, 2000, 2004; van den Berg, 1992). 
In contrast, for traveling on a curved path, eye/head rotations alter the rotational flow on the retina. When observers use the rotation in the retinal velocity field for the estimation of path curvature and perceive their path of forward travel relative to their perceived heading, the perceived path would not be accurate. This is indeed what Li and Cheng (2011) found for traveling on a circular path when the simulated observer viewing direction pointed to a fixed target on the future path such that the rotation in the retinal velocity field was due to the simulated eye rotation to maintain the gaze on the target but not the path rotation. It has also been reported that for traveling on a circular path with simulated observer body rotation, participants wrongly attributed the simulated observer body rotation to the path rotation and misperceived the curvature of their traveled path (Bertin & Israel, 2005; Bertin, Israel, & Lappe, 2000). As a single retinal velocity field does not provide sufficient information to specify or disambiguate the sources of rotational flow on the retina (Royden, 1994), to accurately perceive path in this case, information beyond a single retinal velocity field is needed. 
Reference objects and extra-retinal information
Reference objects are a possible source of information that can help path perception when path rotation is not clearly defined in retinal flow. Li and Warren (2000) proposed that by updating the heading in the retino-centric coordinate frame with respect to a reference object in the world, observers can recover their path of travel in the world-centric coordinate system (Figure 1c). Li et al. (2006) later proposed that the world-centric path can also be recovered by updating self-displacement relative to a reference object in the world (Figure 1d). In both cases, observers do not have to use the rotation in the retinal velocity field for the estimation of path curvature. 
The use of reference objects in the perception and control of self-motion has been supported by previous studies that showed improved performance with displays that contained salient reference objects such as trees or tall posts (e.g., Andersen & Enriquez, 2006; Cutting et al., 1997; Li & Warren, 2000, 2002). Theoretically, any rigid point in the environment can serve as a reference object. However, given the similar path judgment performance for the textured ground display and the dynamic random-dot ground display that does not contain any rigid environmental points (Li & Cheng, 2011), the visual system might not be able to use points on the ground plane as reference objects. It thus remains an open question whether salient reference objects (such as trees and tall posts) can be used for the recovery of the path of travel in the world.
In addition to reference objects, extra-retinal information about pursuit eye movements normally matches the eye movement-induced rotational flow on the retina (Mack & Herman, 1978) and thus can help disambiguate the sources of rotation in retinal flow. Banks, Ehrlich, Backus, and Crowell (1996) showed that when traveling on a straight path, observers attributed the amount of rotational flow not accompanied by actual eye movements to path rotation and perceived themselves traveling on a curved path. This is consistent with the fact that in the natural case of traveling on a curved path when the observer's body orientation is aligned with the instantaneous heading, the rotational flow on the retina is all due to the path rotation in the absence of eye/head movements. Nevertheless, how extra-retinal information about pursuit eye movements contributes to path perception for traveling on a curvilinear path has not been systematically studied. 
Present study
In the current study, we examined the contribution of salient reference objects and extra-retinal signals about pursuit eye movements to path perception for traveling on a circular path. Specifically, in Experiment 1, we examined whether the presence of tall posts in the scene helped path perception when the rotation in the retinal velocity field did not correspond to the path rotation. In Experiment 2, we examined how extra-retinal information about pursuit eye movements helped disambiguate different sources of rotation in retinal flow for accurate path perception. The goal was to extend the earlier findings from the study by Li and Cheng (2011) to provide a comprehensive understanding of how people perceive their curvilinear path of travel from retinal flow. 
Experiment 1: Reference objects
In this experiment, we presented participants with displays that simulated an observer traveling on a circular path over a textured ground with and without tall posts (Figure 2). At the end of the trial, a probe appeared on the ground at a distance of 10 m from the final position of the observer, and participants were asked to use the mouse to place the probe on their perceived future path.
Figure 2
 
Illustrations of the display conditions in Experiment 1: (a) a textured ground and (b) a textured ground with 20 posts in the depth range of 6–20 m.
We tested three viewing conditions in which the simulated observer gaze direction (i.e., the “camera” in the display) was (1) on a fixed target at eye level on the future path, (2) along the instantaneous heading, or (3) fixed and parallel to the Z-axis in the world (see the bird's-eye view in Figure 3). When the simulated observer gaze direction pointed to a fixed target on the future path, the rotational flow on the retina was due to the simulated eye rotation in the display to maintain the gaze on the target (Figure 3, top row), which was mathematically given as half of the path rotation (see Rotational components in the retinal velocity fields for Experiment 1 section in 1). When the simulated observer gaze direction was along the instantaneous heading, the rotational flow on the retina was due to the path rotation, as in the natural case of traveling on a circular path when the observer's body orientation is aligned with the instantaneous heading that rotates with the path rotation (Figure 3, middle row). When the simulated observer gaze direction was fixed and parallel to the Z-axis in the world, the retinal flow field was radial and did not contain any rotation, as in the unnatural case of traveling on a circular path when the observer's body orientation was fixed relative to the world. However, unlike traveling on a straight path, the path rotation caused the heading to drift on the retina, resulting in curved dot motion trajectories over time in the flow field (Figure 3, bottom row).
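The statement that fixating a target on the future path produces a gaze rotation of half the path rotation follows from the inscribed-angle theorem and can be checked numerically. The simulation below is our own sketch of the geometry (a target at a 60° arc position on the circle corresponds to the 30° initial target-heading angle used in the experiment):

```python
import math

def gaze_rotation_rate(omega_deg_s, speed=3.0, target_arc_deg=60.0, dt=0.01):
    """An observer moves on a circle (radius = speed/omega) while fixating
    a fixed target on the future path; returns the gaze rotation rate.
    By the inscribed-angle theorem this is half the path rotation rate."""
    omega = math.radians(omega_deg_s)
    r = speed / omega
    theta_t = math.radians(target_arc_deg)            # target's arc position
    tx, tz = r * (1 - math.cos(theta_t)), r * math.sin(theta_t)

    def gaze_azimuth(t):
        theta = omega * t                             # observer's arc position
        x, z = r * (1 - math.cos(theta)), r * math.sin(theta)
        return math.atan2(tx - x, tz - z)             # direction to the target

    return math.degrees((gaze_azimuth(dt) - gaze_azimuth(0.0)) / dt)

rate = gaze_rotation_rate(6.0)  # half of the 6 deg/s path rotation
```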
Figure 3
 
The flow field and the mid-trial velocity field on the image screen, and the bird's-eye view of the experimental setup for the target viewing condition (top row), the heading viewing condition (middle row), and the Z-axis viewing condition (bottom row), respectively (path rotation = −6°/s). The cross indicates the participant's gaze direction throughout the trial. The light and dark circles in the flow field indicate the heading direction at the beginning and end of the trial. Each blue line in the flow field represents the positions of an environmental point over the course of the trial, and each blue line in the velocity field corresponds to a velocity vector associated with an environmental point. The black curve indicates the simulated path of travel.
The logic of the experiment was as follows. As the rotation in the retinal velocity field was not due to the path rotation for the target viewing condition and was absent for the Z-axis viewing condition, if participants used the rotation in the retinal velocity field for path curvature estimation and recovered their path of forward travel relative to their perceived heading as reported by Li and Cheng (2011), path judgment should not be accurate for these two viewing conditions. Furthermore, path performance should not differ between the textured ground displays with and without tall posts. In contrast, given that the rotational flow on the retina was all due to the path rotation for the heading viewing condition, path performance should be accurate for this condition. However, if participants were able to update their heading or self-displacement relative to salient reference objects in the scene and recover their path of travel in the world-centric coordinate system, path performance should be accurate for the textured ground with post display but not for the textured ground display across all three viewing conditions.
Methods
Participants
Eight students and staff (all naive as to the specific goals of the study; 5 females, 3 males) between the ages of 20 and 30 at the University of Hong Kong participated in the experiment. All had normal or corrected-to-normal vision and provided informed consent in accordance with the guidelines from the University of Hong Kong Human Research Ethics Committee.
Visual stimuli
The display simulated an observer traveling on a circular path at a running speed of 3 m/s (path rotation = ±3°/s, ±4.5°/s, or ±6°/s; path curvature = ±0.018 m^−1, ±0.026 m^−1, or ±0.035 m^−1; negative values indicate leftward path rotation/curvature and positive values rightward rotation/curvature) over a ground plane (depth range: 1.41–50 m). The three path curvatures were well above the threshold for participants to detect that they were traveling on a circular path (Turano & Wang, 1994). Two types of displays were tested: (1) a textured ground in which the ground plane was mapped with a multi-scale green texture with a power spectrum of 1/f (maximum luminance contrast: 99%), thus providing a dense flow field that can support accurate heading perception during observer rotation (Figure 2a), and (2) a textured ground on which 20 planar posts (1.6 m height × 0.2 m width) were placed in the depth range of 6–20 m (Figure 2b). This display thus provided salient reference objects in addition to a dense flow field. The depth range of the tall posts was similar to that in the study by Li and Warren (2002), who reported that salient reference objects improved active control performance when steering toward a target during observer rotation. To maximize the amount of motion parallax information and to make sure that the nearby parts of the display contained an adequate number of posts, the posts were placed such that about the same number of posts was displayed at each distance in depth in each frame. The posts were mapped with a granite texture to provide extra flow and motion parallax information. The number of visible posts per frame was kept relatively constant throughout the trial, i.e., if a certain number of posts moved outside of the screen in one frame, the same number of posts was regenerated in the display with an algorithm to keep the same distribution in depth. The background sky was black in both display conditions.
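As a consistency check (ours, not part of the original methods), the stated path curvatures follow directly from curvature = rotation rate / speed:

```python
import math

speed = 3.0                                       # simulated travel speed (m/s)
reported = {3.0: 0.018, 4.5: 0.026, 6.0: 0.035}   # curvatures from the text (1/m)

for omega_deg, kappa_reported in reported.items():
    kappa = math.radians(omega_deg) / speed       # curvature = rotation rate / speed
    # Each computed value matches the reported curvature to rounding:
    assert abs(kappa - kappa_reported) < 6e-4, (omega_deg, kappa)
```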
The two display conditions were crossed with three viewing conditions in which the simulated observer gaze direction (i.e., the “camera” in the display) pointed (1) to a fixed target at eye level on the future path at 30° away from the initial heading (Figure 3, top row), (2) along the instantaneous heading (Figure 3, middle row), or (3) along an axis parallel to the Z-axis in the world (Figure 3, bottom row). Given that the same initial target-heading angle resulted in different target distances for different path rotation rates, the initial target viewing distances were 57.3 m, 38.2 m, and 28.7 m for the path rotation rates of 3°/s, 4.5°/s, and 6°/s, respectively. The rotation in the retinal velocity field was due to the simulated observer eye rotation to maintain the gaze on the target in (1), the path rotation in (2), and was absent in (3). 
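These target viewing distances follow from circle geometry: with the target 30° from the initial heading, the chord from the observer to the target equals 2r sin(30°) = r, the path radius v/ω. A brief check (our sketch, not code from the study):

```python
import math

speed, target_heading_angle_deg = 3.0, 30.0
for omega_deg, reported_m in ((3.0, 57.3), (4.5, 38.2), (6.0, 28.7)):
    r = speed / math.radians(omega_deg)  # path radius (m)
    # Chord from the observer to a point on the circle 30 deg off the heading;
    # for a 30 deg target-heading angle this equals the radius itself:
    chord = 2.0 * r * math.sin(math.radians(target_heading_angle_deg))
    assert abs(chord - reported_m) < 0.1, (omega_deg, chord)
```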
The visual stimuli were generated on a Dell Studio XPS Desktop 435T/9000 with an NVIDIA GeForce GTS 240 graphics card at the frame rate of 60 Hz. They were rear-projected on a large screen (110°H × 94°V) with an Epson EMP-9300 LCD projector (native resolution: 1400 × 1050 pixels, refresh rate: 60 Hz) in a light-excluded viewing booth. The screen edges were covered in matte black cloth to minimize the availability of an artificial frame of reference. Participants viewed the visual stimuli monocularly with their dominant eye from a chin rest. Before the experiment started, the participant's cyclopean eye was calibrated to be aligned with the center of the screen. The simulated observer eye height in the display was 1.51 m, corresponding to the eye height of a participant sitting on a high chair at 0.56 m away from the screen. 
Procedure
At the beginning of each trial, participants were asked to fixate on a cross at the center of the screen corresponding to the simulated observer gaze direction in the display. To remove any extraneous relative motion in the display during the course of the trial, the cross disappeared when participants clicked a mouse button to start each 1.5-s trial. Participants were, however, instructed to maintain their fixation at the center of the screen (i.e., along the simulated observer gaze direction) throughout the trial. Before the experiment started, we monitored the eye movements of one participant using the textured ground display with the largest path rotation rate of ±6°/s for 20 randomized trials including all three viewing conditions. The mean maximum deviation from the center of the screen averaged across the 20 trials was 1.43° (SD: 0.87°). Given the size of the display (110°H × 94°V), the observed eye position drift from the center of the screen was negligible, showing that participants in general were able to maintain their fixation at the center of the screen throughout the trial even after the cross disappeared. As there was negligible eye movement, extra-retinal information did not specify significant eye rotation in any of the three viewing conditions.
At the end of the trial, a blue probe (5.6°V) appeared on the ground at a distance of 10 m, in a random position within ±20° from the center of the screen (negative values are to the left and positive values to the right of the center of the screen). Participants used the mouse to place the probe on their perceived future path (so that they would hit it if they continued traveling on the current path). For the textured ground with post display, the probe was always displayed in front of the posts to make sure that the posts would not affect the participant's path judgment. The angle between the perceived and the actual path position at 10 m, defined as the path error, was measured.
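One way to express this measure (our reconstruction, not code from the study): the point on a circular path of curvature κ at chord distance d lies at azimuth asin(dκ/2) from the final heading, and the path error is the placed probe azimuth minus this value.

```python
import math

def path_azimuth_deg(kappa, probe_dist=10.0):
    """Azimuth (deg, relative to the final heading) of the point on a
    circular path of curvature kappa (1/m) lying probe_dist m away
    (chord geometry: chord = 2r*sin(azimuth), with r = 1/kappa)."""
    return math.degrees(math.asin(probe_dist * kappa / 2.0))

def path_error_deg(placed_azimuth_deg, kappa, probe_dist=10.0):
    """Signed path error: placed probe azimuth minus the actual path
    azimuth; positive = curvature overestimated, negative = underestimated."""
    return placed_azimuth_deg - path_azimuth_deg(kappa, probe_dist)

# For the largest tested curvature (about 0.035 1/m), the actual path
# point at 10 m lies roughly 10 deg off the final heading.
```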
Each participant viewed both the textured ground and the textured ground with post displays in the three viewing conditions in which the simulated observer gaze direction was pointed toward a target on the future path, along the instantaneous heading, or along the Z-axis in the world (i.e., within-subject design). The experiment was composed of two sessions, with each session containing 180 randomized trials (10 trials × 3 viewing conditions × 6 path rotation rates) for one display condition. The testing order of the two sessions (i.e., display condition) was counterbalanced between participants. Before each session started, participants received 36 randomized practice trials (2 trials × 3 viewing conditions × 6 path rotation rates) to make sure that they understood the instructions and were able to perform the task. No feedback was given during the practice or experimental trials. The whole experiment lasted about 30 min. 
Results
Performance was essentially the same for left and right path rotation rates. We thus collapsed the path error data across left and right path rotation rates to generate measures of path error as a function of path rotation. Figure 4 plots the mean path errors averaged across the eight participants as a function of path rotation for the two display conditions and the three viewing conditions. Positive path errors indicate an overestimation of path curvature and negative errors indicate an underestimation of path curvature. A flat function at 0° indicates perfect path judgment performance not affected by path rotation. 
Figure 4
 
Data from Experiment 1. Mean path error as a function of path rotation for the textured ground and the textured ground with post displays and for (a) the target, (b) the heading, and (c) the Z-axis viewing conditions. The dotted horizontal line at 0° indicates perfect performance. The dashed black lines indicate the predicted path errors assuming that observers use the rotation in retinal flow for path curvature estimation, and the dotted black line indicates the predicted path errors assuming that observers perceive traveling on a straight path. Error bars are SEs across eight participants.
A 2 (display type) × 3 (path rotation) repeated-measures ANOVA was conducted for each of the three viewing conditions. For both the target and Z-axis viewing conditions, only the main effect of path rotation was significant (F(2,14) = 36.01, p < 0.0001 and F(2,14) = 48.09, p < 0.0001, respectively). For the heading viewing condition, the ANOVA did not reveal any significant effect. The lack of a significant main effect of display type for all three viewing conditions indicates that salient reference objects such as the tall posts on the textured ground did not affect path judgment.
For the target viewing condition, the dashed line in Figure 4a plots the predicted path errors assuming that participants used the rotation in the retinal velocity field for path curvature estimation and perceived their path of forward travel relative to their heading (Li & Cheng, 2011). As the rotational flow in this condition was due to the simulated observer eye rotation in the display to maintain the gaze on the target, which was mathematically given as half of the path rotation (see Rotational components in the retinal velocity fields for Experiment 1 section in 1), if participants used the rotation in the retinal velocity field for path curvature estimation, they would increasingly underestimate path curvature as path rotation increased. Consistent with the predicted path errors, Tukey HSD tests showed that the mean path errors averaged across the two display conditions at the three path rotation rates tested were significantly different from each other (p < 0.05), indicating that participants indeed increasingly underestimated path curvature as path rotation increased. However, separate t tests revealed that while the mean path error averaged across the two display conditions was not statistically different from the predicted path error at the two higher path rotation rates tested (−1.48° vs. −3.77° for 4.5°/s and −4.17° vs. −5.05° for 6°/s), the mean path error was significantly different from the predicted value at the lowest path rotation rate of 3°/s (0.06° vs. −2.51°, t(7) = 2.88, p < 0.05).
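The predicted errors plotted in Figure 4a can be reproduced from this account: if the retinal rotation (half the path rotation in this condition) is taken as the path rotation, the perceived curvature is κ/2, and the predicted error at the 10-m probe distance is asin(10κ/4) − asin(10κ/2). The sketch below is our reconstruction of this prediction; it reproduces the quoted values of about −2.51°, −3.77°, and −5.05°:

```python
import math

def predicted_error_target_deg(omega_deg, speed=3.0, probe_dist=10.0):
    """Predicted path error if the retinal rotation (half the path
    rotation in the target viewing condition) is read as path rotation."""
    kappa = math.radians(omega_deg) / speed
    actual = math.asin(probe_dist * kappa / 2.0)     # true path azimuth (rad)
    perceived = math.asin(probe_dist * kappa / 4.0)  # azimuth from halved rotation
    return math.degrees(perceived - actual)

# About -2.51, -3.77, and -5.05 deg for 3, 4.5, and 6 deg/s.
errors = [predicted_error_target_deg(w) for w in (3.0, 4.5, 6.0)]
```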
For the heading viewing condition, the dashed line in Figure 4b also plots the predicted path errors assuming that participants used the rotation in the retinal velocity field for path curvature estimation (Li & Cheng, 2011). As the rotational flow on the retina was all due to the path rotation in this condition, using the rotation in the retinal velocity field for path curvature estimation would lead to accurate path perception, corresponding to the flat function at 0°. Indeed, path errors were small and not affected by path rotation or display type. The mean path error averaged across display type and path rotation (mean ± SE across observers: −0.03° ± 0.68°) was not significantly different from zero (t(7) = −0.05, p = 0.97). This shows that, consistent with the prediction of accurate path performance, participants could accurately perceive their path of forward travel in this viewing condition.
For the Z-axis viewing condition, there was no rotation in the retinal flow field, so the dotted line in Figure 4c plots the predicted path errors assuming that participants perceived traveling on a straight path with the path direction specified by the FOE in the radial flow field in the middle of the trial. Separate t tests revealed that the mean path error averaged across the two display conditions was not statistically different from the straight path prediction at all three path rotation rates tested (−7.89° vs. −7.26° for 3°/s, −11.25° vs. −10.9° for 4.5°/s, and −14.18° vs. −14.55° for 6°/s, t(7) < 1.12, p > 0.30), indicating that participants indeed perceived traveling on a straight path when the retinal flow field did not contain any rotation.
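The straight-path prediction can be reconstructed in the same way: a straight path along the mid-trial FOE lags the final heading by ω × 0.75 s (half the 1.5-s trial), so the predicted error at the 10-m probe is −0.75ω − asin(10κ/2). Our sketch of this reconstruction (not code from the study) reproduces the quoted values of about −7.26°, −10.9°, and −14.55°:

```python
import math

def predicted_error_straight_deg(omega_deg, speed=3.0, probe_dist=10.0,
                                 trial_s=1.5):
    """Predicted path error if observers perceive a straight path along
    the mid-trial FOE: that direction lags the final heading by
    omega * trial_s / 2, while the true path point lies asin(d*kappa/2)
    off the final heading."""
    kappa = math.radians(omega_deg) / speed
    actual = math.degrees(math.asin(probe_dist * kappa / 2.0))
    perceived = -omega_deg * trial_s / 2.0  # straight along mid-trial heading
    return perceived - actual

# About -7.26, -10.9, and -14.55 deg for 3, 4.5, and 6 deg/s.
preds = [predicted_error_straight_deg(w) for w in (3.0, 4.5, 6.0)]
```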
Discussion
In summary, the above results are consistent with the findings of Li and Cheng (2011) in that path performance was accurate in the heading viewing condition when the rotation in the retinal velocity field was all due to the path rotation. In the target viewing condition, the rotation in the retinal velocity field was reduced by half (see Rotational components in the retinal velocity fields for Experiment 1 section in 1), and participants increasingly underestimated path curvature as path rotation increased. In the Z-axis viewing condition, the retinal flow field did not contain any rotation, and participants displayed path performance consistent with perceiving traveling on a straight path. The data thus provide further support for the theory proposed by Li and Cheng that for traveling on a curvilinear path, observers use the rotation in the retinal velocity field for path curvature estimation and recover their path of forward travel relative to their heading in the retino-centric coordinate system. 
Despite the different path judgment performance across the three viewing conditions, performance did not differ between the textured ground displays with and without tall posts in any viewing condition. This indicates that participants did not use reference objects to update their heading or self-displacement to recover their path of forward travel from retinal flow in the world-centric coordinate system. This result appears inconsistent with previous studies that showed improved perception and control of self-motion when the display contained salient reference objects (e.g., Andersen & Enriquez, 2006; Cutting et al., 1997; Li & Warren, 2000, 2002). However, those studies did not examine curvilinear path perception from retinal flow. For example, Li and Warren (2000) examined path perception for traveling on a straight path with simulated eye rotation and found improved path judgments when the display contained salient reference objects. While the heading is fixed in the world for traveling on a straight path, it drifts in the world for traveling on a curved path. It might be easier to update a fixed rather than a drifting heading with respect to a reference object to recover the world-centric path of travel. 
A related study by Bertin and Israel (2005) examined the effect of reference objects on the recovery of the trajectory of the traveled circular path and found improved perception of path rotation when the display contained one salient reference object. However, the improved performance was mainly observed when the reference object moved on the image screen, thus introducing pursuit eye movements. Accordingly, the findings from their study are inconclusive because the effect of the reference object is confounded with that of extra-retinal information about pursuit eye movements. In the next experiment, we systematically examined how extra-retinal information about pursuit eye movements contributes to accurate path perception when the rotation in the retinal velocity field is due to combined real eye rotations and path rotation. 
Note that for the target viewing condition, the mean path error was significantly different from the predicted value at the lowest path rotation rate (3°/s). We hypothesize that participants might have overestimated the rotation in retinal flow at this path rotation rate, which led to a smaller-than-predicted underestimation of path curvature. To test this hypothesis, we conducted a separate experiment in which we asked five observers to discriminate the simulated observer rotation (i.e., the rotational flow) in the display of the target viewing condition from that of the heading viewing condition using the textured ground display. We used a 2IFC adaptive procedure (Kontsevich & Tyler, 1999) to find a point of subjective equality (PSE) representing the path rotation rate in the display of the heading viewing condition (the comparison stimulus) that participants perceived to contain the same rotational flow as the display of the target viewing condition (the standard stimulus). The display of the heading viewing condition was used as the comparison stimulus because its rotational flow was entirely due to the path rotation. In each trial, the stimulus presentation time was 1.5 s and the interstimulus interval was 500 ms. Observers were told that the displays simulated an observer traveling over a ground plane and that their task was to indicate which of the two intervals of a trial contained the larger simulated observer rotation, which could be caused by eye rotation, head rotation, and/or body rotation. The staircases for leftward and rightward path rotations were interleaved. 
Note that the rotational flow in the display of the target viewing condition is half of that in the display of the heading viewing condition (see Rotational components in the retinal velocity fields for Experiment 1 section in Appendix 1 for the mathematical analysis); i.e., the rotational flow of the standard stimulus at the path rotation rates of 3°/s, 4.5°/s, and 6°/s corresponds to that of the comparison stimulus at the path rotation rates of 1.5°/s, 2.25°/s, and 3°/s, respectively. 
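This equivalence is simple arithmetic; the sketch below (the helper name is ours, but the halving relation is from the analysis above) maps each standard rate onto the comparison rate carrying the same rotational flow:

```python
def equivalent_comparison_rate(standard_rate_deg_s: float) -> float:
    """Path rotation rate (deg/s) of the heading-condition comparison
    display whose rotational flow equals that of the target-condition
    standard: the target condition carries half the rotation."""
    return standard_rate_deg_s / 2.0

# The three standard rates used in the experiment map onto:
for w in (3.0, 4.5, 6.0):
    print(w, "->", equivalent_comparison_rate(w))  # 1.5, 2.25, 3.0 deg/s
```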
Figure 5 plots the data for each observer and the best-fitting psychometric function (cumulative Gaussian) of each individual observer's data averaged across the five observers. Three separate t tests revealed that only when the standard stimulus was at the path rotation rate of 3°/s (i.e., when the rotational flow of the standard stimulus was mathematically equal to that of the comparison stimulus at the path rotation rate of 1.5°/s; Figure 5a) was the mean PSE averaged across the five observers (mean ± SE: 1.86°/s ± 0.07°/s) statistically different from 1.5°/s (t(4) = 5.3, p < 0.01). The result shows that at the lowest path rotation rate of 3°/s, participants indeed overestimated the simulated observer rotation (i.e., the rotational flow) in the display of the target viewing condition. This result is also consistent with previous findings that for traveling on a circular path, when the observer's body orientation is not aligned with the instantaneous heading, the accuracy of perceived path rotation degrades, especially at low path rotation rates (Bertin & Israel, 2005). 
Figure 5
 
Proportion of trials in which the comparison stimulus was perceived to contain larger rotational flow than the standard is plotted as a function of the path rotation rate of the comparison stimulus (i.e., comparison path rotation rate) for each observer for the standard stimulus at the path rotation rates of (a) 3°/s, (b) 4.5°/s, and (c) 6°/s. The black line indicates the best-fitting psychometric function (cumulative Gaussian) averaged across the five observers.
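For readers who want to reproduce this kind of analysis, the sketch below fits a cumulative Gaussian to synthetic 2IFC proportions by a coarse least-squares grid search and reads off the PSE as the fitted mean. The data and grid ranges are illustrative assumptions; the actual fits in the study may well have used a different (e.g., maximum-likelihood) procedure.

```python
import math

def cum_gauss(x, mu, sigma):
    """Cumulative Gaussian psychometric function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def fit_pse(rates, proportions):
    """Coarse least-squares grid search for (mu, sigma); the PSE is the
    fitted mu. Grid ranges are assumed to bracket the data."""
    best_mu, best_sigma, best_err = 0.0, 1.0, float("inf")
    for i in range(201):                    # mu: 1.00 .. 3.00 deg/s
        mu = 1.0 + i * 0.01
        for j in range(91):                 # sigma: 0.10 .. 1.00 deg/s
            sigma = 0.1 + j * 0.01
            err = sum((cum_gauss(x, mu, sigma) - p) ** 2
                      for x, p in zip(rates, proportions))
            if err < best_err:
                best_mu, best_sigma, best_err = mu, sigma, err
    return best_mu, best_sigma

# Synthetic, noise-free proportions generated from a PSE of 1.86 deg/s:
rates = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
props = [cum_gauss(x, 1.86, 0.5) for x in rates]
pse, slope = fit_pse(rates, props)
print(round(pse, 2))  # recovers the generating PSE of 1.86
```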
Experiment 2: Extra-retinal information about pursuit eye movements
Some previous studies have reported that extra-retinal information about pursuit eye movements contributes to path perception, but the findings are mixed. Specifically, it has been reported that for traveling on a straight path while pursuit eye movements occur, observers use extra-retinal eye-velocity information to compensate for the eye movement-induced rotational flow for accurate path perception (e.g., Li & Warren, 2000, 2004). Without such accompanying extra-retinal information, observers attribute the rotational flow to a path rotation and perceive traveling on a curved path (Ehrlich, Beck, Crowell, Freeman, & Banks, 1998). Banks et al. (1996) further showed that the effect of extra-retinal eye-velocity signals on self-motion estimation is consistent with the “extra-retinal” model that pursuit compensation is entirely driven by extra-retinal signals. 
However, Banks et al. (1996) tested only combined real and simulated pursuit eye movements in the same direction for traveling on a straight path. Crowell and Andersen (2001) examined path perception for traveling on a straight path when real and simulated eye rotations were in the same and opposite directions. They found that while their data were qualitatively consistent with the findings of Banks et al. when real and simulated eye rotations were in the same direction, there was apparently no pursuit compensation when real and simulated eye rotations of similar magnitude were in opposite directions. Likewise, van den Berg et al. (2001) reported that for traveling on a curved path, when the real eye rotation was in the same direction as, but smaller than, the simulated eye rotation in the scene, the perceived direction of a point on the future path was approximately a linear function of the difference between the real and the simulated eye rotations, supporting the extra-retinal model of pursuit compensation proposed by Banks et al. However, when the real eye rotation was in the direction opposite to the simulated eye rotation, extra-retinal eye-velocity signals appeared not to affect the perceived point direction or path curvature. The authors of both studies thus concluded that the effect of extra-retinal signals on the perception of self-motion is modulated by the information in retinal flow. To the best of our knowledge, however, no attempt has been made so far to understand the purpose of such modulation or the reason behind the asymmetric contribution of extra-retinal eye-velocity signals to path perception. 
We hypothesize that the asymmetric effect of extra-retinal signals is related to an assumption that the visual system makes about self-motion in the world: in most cases, one's body orientation is aligned with one's instantaneous direction of travel (i.e., heading). The visual system thus assumes that one's heading is fixed in the body-centric coordinate system, as indicated by the fixed heading direction on the retina when there is no eye/head rotation. In the presence of eye/head rotations, the heading direction is no longer fixed but drifts on the retina (Figure 6, upper panel). Nevertheless, extra-retinal signals about eye/head movements can transform the retino-centric heading into the body-centric heading and help stabilize the heading relative to the body. We propose that whether extra-retinal signals are used for path perception depends on whether they help reduce the heading drift in the body-centric coordinate system compared with that on the retina. The lower panel in Figure 6 illustrates how our hypothesis explains the previous findings regarding path perception for traveling on a straight path. When real and simulated eye rotations are in the same direction, using extra-retinal eye-velocity signals to compensate for the eye movement-induced rotational flow reduces the perceived heading drift relative to the body, and thus extra-retinal signals are used for path perception. In contrast, when real and simulated eye rotations are in opposite directions, real eye rotations stabilize the heading on the retina. Pursuit compensation in this case would not stabilize the heading relative to the body; instead, it would make the heading drift relative to the body, unlike in most natural cases of self-motion, and thus the visual system is unlikely to use extra-retinal eye-velocity signals for path perception. 
Figure 6
 
Illustrations of the velocity fields showing how eye rotations make the heading drift on the retina (upper panel), and how pursuit compensation for a real eye rotation in the same direction as the simulated eye rotation reduces the drift of the heading relative to the body, whereas pursuit compensation for a real eye rotation in the direction opposite to the simulated eye rotation would make the heading drift relative to the body (lower panel). The illustrated real and simulated eye rotations are of the same magnitude, as indicated by the arrows in the figure.
To test our hypothesis, in this experiment we examined how extra-retinal information about pursuit eye movements contributed to path perception for traveling on a circular path in both natural and unnatural viewing conditions. Unlike the previous studies, whose displays contained both simulated and real eye rotations, the displays used in the current experiment did not contain any simulated eye rotation. In the natural viewing condition, the simulated observer body orientation (i.e., the "camera" in the display) was along the instantaneous heading and rotated with the path rotation, as in the natural case of traveling on a circular path. The rotation in the velocity field on the image screen thus corresponded to the path rotation (Figure 7, top row). In the unnatural viewing condition, the simulated observer body orientation was fixed relative to the world (i.e., the "camera" in the display was fixed and parallel to the Z-axis in the world). There was thus no rotation in the velocity field on the image screen in this viewing condition (Figure 7, middle row). In both viewing conditions, participants were asked to track a fixed target at eye level on the future path with pursuit eye movements. Given that the target moved horizontally on the screen at the same speed but in opposite directions in the natural and unnatural viewing conditions (Figure 7, top and middle rows; see Rotational components in the retinal velocity fields for Experiment 2 section in Appendix 1 for the mathematical analysis), the pursuit yaw eye movements in the two viewing conditions were of the same magnitude but in opposite directions. 
Because the speed of the target motion on the screen is half of the path rotation rate in the display (see Rotational components in the retinal velocity fields for Experiment 2 section in Appendix 1 for the mathematical analysis), the retinal flow fields of these two viewing conditions are identical to each other and to that of the target viewing condition in Experiment 1 (Figure 7, bottom row). 
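This identity can be checked with one line of arithmetic. Under an illustrative sign convention (positive = direction of the path rotation; the helper name and convention are ours, but the magnitudes are from the analysis above), the rotation on the retina is the rotation in the screen velocity field minus the pursuit eye rotation:

```python
def retinal_rotation(screen_rotation_deg_s, eye_rotation_deg_s):
    """Rotation in the retinal velocity field (deg/s): screen-field
    rotation minus the eye rotation used to track the target."""
    return screen_rotation_deg_s - eye_rotation_deg_s

w = 6.0  # path rotation rate (deg/s)

# Natural: the camera rotates with the path (screen rotation = w);
# the target drifts on the screen at w/2 and the eye pursues it.
natural = retinal_rotation(w, w / 2.0)

# Unnatural: the camera is fixed in the world (screen rotation = 0);
# the target drifts at w/2 the other way, so the eye rotation is -w/2.
unnatural = retinal_rotation(0.0, -w / 2.0)

print(natural, unnatural)  # both w/2 = 3.0, as in the target viewing condition
```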
Figure 7
 
The flow field and the mid-trial velocity field on the image screen, and the bird's-eye view of the experimental setup for the natural (top row) and unnatural (middle row) viewing conditions, respectively (path rotation = −6°/s). The retinal projections of these two viewing conditions are identical (bottom row). The light and dark red circles in the flow field indicate the target direction at the beginning and end of the trial, and the light and dark blue circles indicate the heading direction at the beginning and end of the trial. Each blue line in the flow field represents the positions of an environmental point over the course of the trial, and each blue line in the velocity field corresponds to a velocity vector associated with an environmental point. The black curve indicates the simulated path of travel.
For the natural viewing condition, using extra-retinal eye-velocity signals to drive pursuit compensation would change the flow field on the retina shown in the bottom row in Figure 7 into the flow field relative to the body shown in the top row in Figure 7, which stabilizes heading relative to the body. We thus expected that pursuit compensation would occur and that, as a consequence, participants could correctly perceive path rotation for accurate path perception in this condition. Note that in our natural viewing condition, the real eye rotation and the path curvature were in opposite directions, in contrast to previous studies that reported pursuit compensation for path perception when real and simulated eye rotations were in the same direction. For the unnatural viewing condition, using extra-retinal signals to drive pursuit compensation would change the flow field on the retina shown in the bottom row in Figure 7 into the flow field relative to the body shown in the middle row in Figure 7, which increases the heading drift relative to the body. We thus expected that the visual system would discount extra-retinal signals and no pursuit compensation would occur in this condition. As a result, participants' path performance would be similar to that for the target viewing condition in Experiment 1. Note also that in the unnatural viewing condition, the real eye rotation and the path curvature were in the same direction, again in contrast to previous studies that reported no pursuit compensation for path perception when real and simulated eye rotations were in the same direction. 
Methods
Participants
Ten students and staff members (all naive to the specific goals of the study; 5 females, 5 males) between the ages of 20 and 30 at the University of Hong Kong participated in the experiment. Among them, seven (3 females, 4 males) had also participated in Experiment 1. All had normal or corrected-to-normal vision and provided informed consent in accordance with the guidelines of the University of Hong Kong Human Research Ethics Committee. One student (female) showed a standard deviation in her path judgments about three times larger than that of the other participants and was excluded from further data analyses. 
Visual stimuli
The textured ground display in Experiment 1 was used in the current experiment. As in the target viewing condition in Experiment 1, a fixed target at eye level was placed on the future path at 30° away from the initial heading. However, unlike Experiment 1, the target (a red dot, 1.4° in diameter) appeared at the center of the screen at the beginning of a trial and was displayed throughout the trial. Two viewing conditions were tested: (1) the natural viewing condition in which the simulated observer body orientation (i.e., the “camera” in the display) was along the instantaneous heading direction as in the natural case of traveling on a circular path (Figure 7, top row) and (2) the unnatural viewing condition in which the simulated observer body orientation (i.e., the “camera” in the display) was fixed and parallel to the Z-axis in the world (Figure 7, middle row). In both viewing conditions, as the “camera” in the display did not point at the target, the target moved horizontally on the screen during the course of the trial, and participants were instructed to maintain their gaze on the target by tracking the target motion on the screen with real pursuit eye movements throughout the trial. 
The retinal velocity fields of the two viewing conditions are the same and identical to that of the target viewing condition in Experiment 1 (Figure 7, bottom row; see Rotational components in the retinal velocity fields for Experiment 2 section in Appendix 1 for the mathematical analysis). However, unlike the target viewing condition in Experiment 1, in which the observer's gaze rotation was simulated in the display and extra-retinal signals specified negligible eye rotation, participants generated real eye rotations to track the target motion in both viewing conditions of the current experiment, and thus extra-retinal signals specified actual eye rotations. Note that while the rotation in the retinal velocity field is due to combined path and eye yaw rotations in opposite directions for the natural viewing condition (Figure 7, top row), it is solely due to pursuit eye yaw rotations for the unnatural viewing condition (Figure 7, middle row; see also Rotational components in the retinal velocity fields for Experiment 2 section in Appendix 1). 
Procedure
The procedure was the same as that in Experiment 1 except that at the beginning of each trial, participants were asked to fixate on the red dot target that appeared at the center of the screen. Once participants clicked a mouse button to start each 1.5-s trial, the red dot started moving horizontally on the screen. Participants were instructed to track the target motion with pursuit eye movements throughout the trial. To make sure that participants followed the instructions, we measured the eye movements of two participants before the experiment started. We tested 60 trials (10 trials × 6 path rotation rates) for each of the two viewing conditions. We computed the speed of the pursuit eye movements excluding the data in the first 150 ms of a trial due to the pursuit eye movement latency (e.g., Rashbass, 1961). For the natural viewing condition, the speeds of the pursuit eye movements averaged across the two participants were 1.42°/s, 2.05°/s, and 2.75°/s for the path rotation rates of 3°/s, 4.5°/s, and 6°/s, respectively. For the unnatural viewing condition, they were 1.5°/s, 2.05°/s, and 2.63°/s for the path rotation rates of 3°/s, 4.5°/s, and 6°/s, respectively. As the target movement speed on the screen is half of the path rotation rate for both viewing conditions (see Rotational components in the retinal velocity fields for Experiment 2 section in Appendix 1 for the mathematical analysis), the gains of the pursuit eye movements averaged across the three path rotation rates for the two participants were 0.87 and 0.91 for the natural viewing condition and 0.85 and 0.94 for the unnatural viewing condition. This indicates that the two participants were able to track the target motion on the screen with pursuit eye movements equally well in both viewing conditions. 
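The gain computation is simply measured eye speed divided by the target's screen speed, which is half the path rotation rate. The sketch below redoes this arithmetic on the speeds averaged across the two participants, so the resulting gains approximate, rather than reproduce, the per-participant gains quoted above:

```python
def pursuit_gain(eye_speed_deg_s, path_rotation_deg_s):
    """Pursuit gain: measured eye speed divided by the target speed on
    the screen, where target speed = path rotation rate / 2."""
    return eye_speed_deg_s / (path_rotation_deg_s / 2.0)

# Natural viewing condition, eye speeds averaged across the two
# participants (values from the text):
for eye_speed, path_rot in [(1.42, 3.0), (2.05, 4.5), (2.75, 6.0)]:
    print(path_rot, round(pursuit_gain(eye_speed, path_rot), 2))
```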
Each participant completed both the natural and unnatural viewing conditions. The experiment was composed of two sessions, with each session containing 60 randomized trials (10 trials × 6 path rotation rates) for one viewing condition. The testing order of the two sessions (i.e., viewing condition) was counterbalanced between participants. Before each session started, participants received 12 randomized practice trials (2 trials × 6 path rotation rates) to make sure that they understood the instructions and were able to perform the task. No feedback was given during the practice or experimental trials. The whole experiment lasted about 20 min. 
Results
As in Experiment 1, performance was essentially the same for left and right path rotations, and we collapsed the path errors across left and right path rotations to generate measures of path error as a function of path rotation. Figure 8a plots the mean path error averaged across the nine participants as a function of path rotation for the two viewing conditions. Positive path errors indicate an overestimation of path curvature and negative errors an underestimation. Because the retinal velocity fields of the natural and unnatural viewing conditions are the same and identical to that of the target viewing condition in Experiment 1 (Figure 7, bottom row), if participants did not use extra-retinal signals and relied solely on the rotational flow on the retina for path curvature estimation, the path performance for the two viewing conditions would be the same as that for the target viewing condition in Experiment 1. However, the data showed that while path errors were small at all three path rotation rates for the natural viewing condition, they decreased with path rotation rate for the unnatural viewing condition (i.e., path curvature was increasingly underestimated as path rotation increased). Indeed, a 2 (viewing condition) × 3 (path rotation) repeated-measures ANOVA on the path errors revealed that the main effects of viewing condition and path rotation as well as their interaction were all significant (F(1,8) = 7.16, p < 0.05; F(2,16) = 19.39, p < 0.0001; and F(2,16) = 6.73, p < 0.01, respectively). The significant interaction prompted us to conduct two separate one-way repeated-measures ANOVAs for the two viewing conditions, which revealed a significant main effect of path rotation for the unnatural viewing condition but not for the natural viewing condition (F(2,16) = 40.28, p << 0.0001 and F(2,16) = 0.26, p = 0.77, respectively). 
For the natural viewing condition, a separate t test revealed that the mean path error averaged across the three path rotation rates (mean ± SE across participants: 1.27° ± 1.91°) was not significantly different from zero (t(8) = 0.66, p = 0.53), indicating that although the retinal velocity field of this viewing condition is identical to that of the target viewing condition in Experiment 1, the path performance for this viewing condition is accurate, unlike that for the target viewing condition in Experiment 1. 
Figure 8
 
Mean path error as a function of path rotation for (a) the natural and unnatural viewing conditions in Experiment 2 and for (b) the unnatural viewing condition in Experiment 2 and the target and Z-axis viewing conditions with the textured ground display in Experiment 1. The dotted horizontal line at 0° indicates perfect performance. Error bars are SEs across nine participants for the data from Experiment 2 and across eight participants for the data from Experiment 1.
To further examine the path performance for the unnatural viewing condition, Figure 8b plots the mean path errors from this condition along with the data from the target and Z-axis viewing conditions with the textured ground display in Experiment 1. If participants used extra-retinal eye-velocity signals for pursuit compensation in the unnatural viewing condition, the instantaneous velocity field after the compensation for eye movements should be the same as that of the Z-axis viewing condition in Experiment 1 and not contain any rotation (see the velocity field in the middle row of Figure 7). Accordingly, participants should perceive traveling on a straight path as did participants in the Z-axis viewing condition in Experiment 1. A 2 (experiment) × 3 (path rotation) mixed-design ANOVA on the path errors from the unnatural viewing condition and the Z-axis viewing condition in Experiment 1 revealed that both the main effects of experiment and path rotation were significant (F(1,15) = 13.84, p < 0.01 and F(2,30) = 88.51, p << 0.0001, respectively). Their interaction effect was not significant (F(2,30) = 1.83, p = 0.18). Although participants increasingly underestimated path curvature as path rotation increased in the unnatural viewing condition, the absolute path errors of this condition were significantly smaller than those of the Z-axis viewing condition in Experiment 1, indicating that participants did not perceive traveling on a straight path as did participants in the Z-axis viewing condition in Experiment 1. In fact, after the experiment, participants in the unnatural viewing condition reported perceiving themselves traveling on a curved path. 
We then compared the path errors from the unnatural viewing condition with those from the target viewing condition in Experiment 1 to examine whether participants in the unnatural viewing condition ignored extra-retinal signals and relied on the rotation in the retinal velocity field for path curvature estimation. A 2 (experiment) × 3 (path rotation) mixed-design ANOVA on the path errors for the unnatural viewing condition and the target viewing condition in Experiment 1 revealed that while the main effect of experiment was not significant (F(1,15) = 2.12, p = 0.17), both the main effect of path rotation and the interaction of experiment and path rotation were significant (F(2,30) = 58.66, p << 0.0001 and F(2,30) = 5.83, p < 0.01, respectively). However, Tukey HSD tests showed that the mean path errors of the unnatural viewing condition were not significantly different from those of the target viewing condition at any of the three path rotation rates tested (p > 0.70). This shows that participants increasingly underestimated path curvature as path rotation increased in the unnatural viewing condition, as did participants in the target viewing condition in Experiment 1. 
Discussion
In summary, despite the identical retinal velocity fields in the natural and unnatural viewing conditions, path judgments are accurate for the natural viewing condition but significantly biased for the unnatural viewing condition. In fact, the path performance for the unnatural viewing condition is similar to that for the target viewing condition in Experiment 1, which had the same retinal velocity field but extra-retinal signals specifying negligible eye rotation. These results show an asymmetric effect of extra-retinal eye-velocity signals on path perception and support our hypothesis that extra-retinal signals drive pursuit compensation only for the natural case of traveling, when they help stabilize the heading in the body-centric coordinate system. 
Freeman, Banks, and Crowell (2000) reported that when traveling on a straight path while making sinusoidal pursuit eye movements, observers perceived their path oscillating left and right at the same frequency as the pursuit eye movements, which they termed the "slalom illusion." Freeman et al. interpreted the slalom illusion as being caused by a mismatch between retinal and extra-retinal speed estimation during pursuit eye movements. In fact, there can be errors both in retinal motion estimation and in using extra-retinal signals to estimate eye velocity, which together lead to perceptual effects such as the slalom illusion, the Filehne illusion, and the Aubert–Fleischl phenomenon (e.g., Freeman, 1999; Freeman & Banks, 1998; Haarmeier, Bunjes, Lindner, Berret, & Thier, 2001). Nevertheless, the purpose of the current experiment was not to examine the mismatch between or the errors in retinal and extra-retinal speed estimation but to address the natural constraints on the use of extra-retinal signals for the perception of self-motion. 
General discussion
The two experiments in this paper addressed the questions of whether people are able to use reference objects to perceive their curvilinear path of travel in the world when the rotation in the retinal velocity field does not correspond to the path rotation and how extra-retinal information about pursuit eye movements helps path perception in this circumstance. The findings show that salient reference objects do not help path perception. Extra-retinal eye-velocity signals support accurate path perception only for the natural case of traveling when such signals help stabilize the heading relative to the body. 
Some of our findings appear to differ from those reported by Saunders and Ma (2011). They tested three viewing conditions similar to our target viewing condition in Experiment 1 and our natural and unnatural viewing conditions in Experiment 2 but asked participants to judge whether they would pass to the left or right of an environmental target to measure their perceived future path. Saunders and Ma reported that participants failed to perceive the path rotation and perceived traveling on a straight path in both the target and unnatural viewing conditions, in contrast to what we observed in the current study. However, despite this conclusion, the derived paths shown for the single path rotation rate they tested in these two conditions are curved rather than straight (Figure 6 in Saunders & Ma, 2011). In fact, the judgment task used by Saunders and Ma cannot differentiate whether participants used their perceived heading or their perceived future path to perform the task (heading is along the tangent of a circular path), and measuring path judgments at only one path rotation rate does not provide sufficient data to show that participants perceived traveling on a straight path. As a consequence, the findings of their study do not provide definitive answers to the question of whether people use dot motion trajectories for path perception (Kim & Turvey, 1999; Wann & Swapp, 2000), as they claimed (see, however, Li & Cheng, 2011). 
The findings from the current study provide further support for our previously proposed theory of curvilinear path perception from retinal flow (Li & Cheng, 2011). That is, observers use the rotation in the retinal velocity field for path curvature estimation and perceive their path of forward travel relative to their heading in the retino-centric coordinate frame. For path perception to be accurate, the rotational flow on the retina needs to correspond to the path rotation, as in the natural case of traveling on a curved path when one's body orientation is aligned with one's heading, which rotates with the path. With eye/head movements, the rotational flow on the retina no longer corresponds to the path rotation. Because the retinal velocity field does not provide sufficient information to specify the different sources of rotational flow, extra-retinal signals about eye/head movements can be used to separate the path rotation from other sources of rotation in retinal flow. Based on the findings from the current study, we propose that when traveling on a curved path while making pursuit eye movements, the visual system assumes that the heading is fixed with respect to the body, as in the natural case of traveling. Extra-retinal eye-velocity signals then drive pursuit compensation for accurate path perception when they help stabilize the heading relative to the body. 
Saunders and Ma (2011) stated that our previously proposed theory of curvilinear path perception is consistent with the locomotor flow line strategy proposed by Lee and Lishman (1977), which the authors used to interpret their findings as well as those from Saunders (2010). Different from our theory that observers use the rotation in the retinal velocity field for path curvature estimation and recover their path of forward travel relative to their perceived heading, Lee and Lishman (1977) postulated that observers recover their path of travel by spatially integrating velocity vectors in the instantaneous velocity field to identify the locomotor flow line that passes directly beneath them. We do not believe such a strategy is viable for path perception, for two reasons. First, Figure 9 illustrates the instantaneous velocity field of the display used by Saunders. The dotted red line indicates the actual simulated path of forward travel, and the blue lines show some of the many possible ways to spatially integrate velocity vectors to form other locomotor flow lines that would also appear to pass directly beneath the observer. Using such a strategy would thus not give accurate and precise path judgments, and it remains unclear how the visual system could identify the locomotor flow line that corresponds to the actual path of travel. Second, even if we assume that the visual system can do so, a dense flow field that contains elements on or near the actual path of travel is required to accurately recover the locomotor flow line. However, both Li and Cheng (2011) and Warren, Mestre et al. (1991) showed that observers could accurately perceive their path of travel with a random-dot ground display that contained only a few environmental points near the actual path, indicating that the use of the locomotor flow line strategy for path perception is not supported by human data. 
Figure 9
 
An illustration of the instantaneous velocity field of the display used by Saunders (2010). The green cross indicates the simulated observer gaze direction that is aligned with the instantaneous heading. The dotted red line indicates the actual simulated path of forward travel, and the blue lines show some of the many possible ways to spatially integrate velocity vectors to form other locomotor flow lines that would also appear to pass directly beneath the observer.
Appendix A
Image velocity field generation
Consider an environmental point P = (X_P, Y_P, Z_P)^T in a Cartesian coordinate system XYZ that is centered on the observer's eye and moves with the observer (Figure A1). Let T = (T_X, T_Y, T_Z)^T denote the observer translation in the environment and R = (R_X, R_Y, R_Z)^T denote the observer rotation that combines the observer's body, head, and eye rotations. Without loss of generality, we set the image plane at unit distance along the Z-axis, i.e., the eye has a focal length of one. The projected position of P on the image plane, p, is then given as 

$$\mathbf{p} = \begin{pmatrix} x_p \\ y_p \end{pmatrix} = \begin{pmatrix} X_P/Z_P \\ Y_P/Z_P \end{pmatrix}. \tag{A1}$$
 
Figure A1
 
A Cartesian coordinate system XYZ that is centered on the observer's eye and moves with the observer. p(x, y) indicates the projected position of an environment point P(X, Y, Z) on the image plane that is at unit distance along the Z-axis (i.e., the eye has a focal length of one). v represents the image velocity of p, with v x and v y indicating the velocity components along the x- and y-axes in the image plane.
Let v = (v_x, v_y)^T denote the image velocity of p. As shown in previous studies (e.g., Longuet-Higgins & Prazdny, 1980; Rieger & Lawton, 1985), v is given as 

$$\mathbf{v} = \begin{pmatrix} v_x \\ v_y \end{pmatrix} = \begin{pmatrix} (x_p T_Z - T_X)/Z_P + x_p y_p R_X - (x_p^2 + 1)\,R_Y + y_p R_Z \\ (y_p T_Z - T_Y)/Z_P + (y_p^2 + 1)\,R_X - x_p y_p R_Y - x_p R_Z \end{pmatrix}. \tag{A2}$$
 
In the current study, the observer did not undergo vertical translation or pitch or roll rotations; thus R_X = R_Z = T_Y = 0. For our purposes, Equation A2 can thus be simplified to 

$$\mathbf{v} = \begin{pmatrix} (x_p T_Z - T_X)/Z_P - (x_p^2 + 1)\,R_Y \\ y_p T_Z/Z_P - x_p y_p R_Y \end{pmatrix}. \tag{A3}$$
 
Note that v has a translational component v_T that is independent of the observer rotation R and a rotational component v_R that is independent of the observer translation T, with 

$$\mathbf{v}_T = \begin{pmatrix} v_{Tx} \\ v_{Ty} \end{pmatrix} = \begin{pmatrix} (x_p T_Z - T_X)/Z_P \\ y_p T_Z/Z_P \end{pmatrix} \quad\text{and}\quad \mathbf{v}_R = \begin{pmatrix} v_{Rx} \\ v_{Ry} \end{pmatrix} = -R_Y \begin{pmatrix} x_p^2 + 1 \\ x_p y_p \end{pmatrix}, \ \text{respectively.} \tag{A4}$$
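The decomposition in Equations A3 and A4 can be sketched as a short computation; the function and variable names below are ours (not from the original stimulus code), with the translational and rotational components evaluated separately:

```python
import numpy as np

def image_velocity(x_p, y_p, Z_P, T_X, T_Z, R_Y):
    """Image velocity per Equations A3/A4: a depth-dependent translational
    component plus a depth-independent rotational component
    (assuming R_X = R_Z = T_Y = 0, as in the current study)."""
    v_T = np.array([(x_p * T_Z - T_X) / Z_P,
                    y_p * T_Z / Z_P])
    v_R = -R_Y * np.array([x_p ** 2 + 1.0, x_p * y_p])
    return v_T + v_R
```

Because v_R does not depend on Z_P, a pure rotation (T_X = T_Z = 0) yields the same image velocity at every depth, which is why rotational flow alone carries no depth information.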
 
Rotational components in the retinal velocity fields for Experiment 1
To determine the rotational components in the retinal velocity fields for the three viewing conditions in Experiment 1, we first show how the observer rotation R_Y in the eye-centered coordinate system is computed. As participants were instructed to maintain fixation at the center of the screen throughout the trial, R_Y is determined solely by the rotation of the simulated observer gaze direction (i.e., the “camera”) in the display. 
Consider a world coordinate system X′Y′Z′ that is centered at the center of a circular path in the X′Z′ plane. Let h_eye denote the observer's eye height. Assuming that the observer's initial eye position O′_0 in the world is (r, h_eye, 0)^T, with r indicating the radius of the circular path, and that the observer undergoes counterclockwise circular motion on the path at a constant speed S and angular rotation rate ω (Figure A2), the observer's eye position O′ in the world at time t is given as 

$$\mathbf{O}' = \begin{pmatrix} X_{O'} \\ Y_{O'} \\ Z_{O'} \end{pmatrix} = \begin{pmatrix} r\cos\omega t \\ h_{eye} \\ r\sin\omega t \end{pmatrix}. \tag{A5}$$
 
Figure A2
 
The bird's-eye view of an observer traveling on a circular path in the X′Z′ plane of the world coordinate system. The origin of the world coordinate system is at the center of the circular path. The simulated observer gaze direction (a) points toward a fixed target on the future path, (b) is aligned with the instantaneous heading, and (c) is fixed and parallel to the Z′-axis in the world.
For the target viewing condition, the simulated observer gaze direction (i.e., the “camera” in the display) points to a fixed target at eye level on the future path, at an angle θ away from the observer's initial heading (Figure A2a); the target position in the world, P′_target, is thus given as 

$$\mathbf{P}_{target}' = \begin{pmatrix} X_{target}' \\ Y_{target}' \\ Z_{target}' \end{pmatrix} = \begin{pmatrix} r\cos 2\theta \\ h_{eye} \\ r\sin 2\theta \end{pmatrix}. \tag{A6}$$
 
To maintain fixation on the target, the simulated observer gaze direction relative to the Z′-axis in the world, α, at time t is given as 

$$\alpha = \tan^{-1}\!\left(\frac{\Delta X}{\Delta Z}\right) = \tan^{-1}\!\left(\frac{X_{O'} - X_{target}'}{Z_{target}' - Z_{O'}}\right). \tag{A7}$$
 
Substituting Equations A5 and A6 into Equation A7, we have 

$$\alpha = \tan^{-1}\!\left(\frac{r\cos\omega t - r\cos 2\theta}{r\sin 2\theta - r\sin\omega t}\right). \tag{A8}$$
 
Differentiating α with respect to time gives the rotation rate of the simulated observer gaze direction, i.e., the observer rotation R_Y in the eye-centered coordinate system: 

$$\begin{aligned}
R_Y = \frac{d\alpha}{dt} &= \frac{d}{dt}\tan^{-1}\!\left(\frac{r\cos\omega t - r\cos 2\theta}{r\sin 2\theta - r\sin\omega t}\right) = \frac{d}{dt}\tan^{-1}\!\left(\frac{2\sin\frac{2\theta+\omega t}{2}\sin\frac{2\theta-\omega t}{2}}{2\cos\frac{2\theta+\omega t}{2}\sin\frac{2\theta-\omega t}{2}}\right) \\
&= \frac{d}{dt}\tan^{-1}\!\left(\frac{\sin\frac{2\theta+\omega t}{2}}{\cos\frac{2\theta+\omega t}{2}}\right) = \frac{d}{dt}\!\left(\frac{2\theta+\omega t}{2}\right) = \frac{\omega}{2}, \tag{A9}
\end{aligned}$$

which is half of the path rotation rate ω. 
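The result in Equation A9 can be checked numerically. The sketch below is our own verification (not part of the original analysis), with arbitrary parameter values; it differentiates the gaze angle of Equation A8 by central differences and confirms the ω/2 rate:

```python
import math

def gaze_angle(t, r, omega, theta):
    # Equation A8: gaze direction relative to the Z'-axis when
    # fixating a target on the future path
    return math.atan((r * math.cos(omega * t) - r * math.cos(2 * theta))
                     / (r * math.sin(2 * theta) - r * math.sin(omega * t)))

def gaze_rotation_rate(t, r, omega, theta, dt=1e-6):
    # Central-difference estimate of R_Y = d(alpha)/dt (Equation A9)
    return (gaze_angle(t + dt, r, omega, theta)
            - gaze_angle(t - dt, r, omega, theta)) / (2 * dt)

# At any time before the observer reaches the target, the gaze
# rotates at half the path rotation rate omega (here 0.3/2 = 0.15)
rates = [gaze_rotation_rate(tv, 5.0, 0.3, 0.4) for tv in (0.1, 0.7, 1.3)]
```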
The rotational component v_R in the retinal velocity field for the target viewing condition is then given as 

$$\mathbf{v}_R = -R_Y \begin{pmatrix} x^2 + 1 \\ xy \end{pmatrix} = -\frac{\omega}{2} \begin{pmatrix} x^2 + 1 \\ xy \end{pmatrix}. \tag{A10}$$
 
For the heading viewing condition, the simulated observer gaze direction (i.e., the “camera” in the display) is aligned with the instantaneous heading, which rotates with the circular path at ω (Figure A2b). R_Y is thus equal to ω, and the rotational component v_R in the retinal velocity field in this condition is given as 

$$\mathbf{v}_R = -R_Y \begin{pmatrix} x^2 + 1 \\ xy \end{pmatrix} = -\omega \begin{pmatrix} x^2 + 1 \\ xy \end{pmatrix}. \tag{A11}$$
For the Z-axis viewing condition, the simulated observer gaze direction (i.e., the “camera” in the display) is fixed and parallel to the Z′-axis in the world (Figure A2c), thus R_Y = 0. Therefore, the rotational component v_R in the retinal velocity field is also zero. 
Rotational components in the retinal velocity fields for Experiment 2
We now show how the observer rotation R_Y in the eye-centered coordinate system is computed for the natural and unnatural viewing conditions in Experiment 2. Unlike in Experiment 1, participants in Experiment 2 were instructed to track the target motion on the image screen with pursuit eye movements; R_Y is thus specified by both the participant's real eye rotation R_eye and the rotation of the simulated observer body orientation (i.e., the “camera” in the display), R_body: 

$$R_Y = R_{eye} + R_{body}. \tag{A12}$$
 
For the natural viewing condition, the simulated observer body orientation (i.e., the “camera” in the display) is aligned with the observer's instantaneous heading (Figure 7, top row). As mentioned above, the heading direction rotates with the circular path at ω; thus R_body = ω, and R_eye is given by the target's angular velocity on the image screen, τ (i.e., R_eye = τ). 
We now show the step-by-step computation of τ. Consider a coordinate system XYZ centered at the camera, with its Y-axis aligned with the Y′-axis of the world coordinate system X′Y′Z′ and its Z-axis aligned with the simulated observer body orientation. Given the target position P′_target and the observer's eye position O′ in the world, the target position in the camera-centered coordinate system, P_target, is given as 

$$\mathbf{P}_{target} = \mathbf{A}\left(\mathbf{P}_{target}' - \mathbf{O}'\right), \tag{A13}$$
where A is a 3 × 3 rotation matrix, 

$$\mathbf{A} = \begin{bmatrix} \cos\varphi & 0 & \sin\varphi \\ 0 & 1 & 0 \\ -\sin\varphi & 0 & \cos\varphi \end{bmatrix}, \tag{A14}$$
with φ denoting the simulated observer body orientation in the world coordinate system X′Y′Z′, given as 

$$\varphi = \int R_{body}\,dt = \omega t. \tag{A15}$$
A can then be rewritten as 

$$\mathbf{A} = \begin{bmatrix} \cos\omega t & 0 & \sin\omega t \\ 0 & 1 & 0 \\ -\sin\omega t & 0 & \cos\omega t \end{bmatrix}. \tag{A16}$$
 
Substituting Equations A5, A6, and A16 into Equation A13, we get 

$$\begin{aligned}
\mathbf{P}_{target} &= \begin{bmatrix} \cos\omega t & 0 & \sin\omega t \\ 0 & 1 & 0 \\ -\sin\omega t & 0 & \cos\omega t \end{bmatrix} \begin{pmatrix} r\cos 2\theta - r\cos\omega t \\ h_{eye} - h_{eye} \\ r\sin 2\theta - r\sin\omega t \end{pmatrix} \\
&= r \begin{pmatrix} \cos 2\theta\cos\omega t + \sin 2\theta\sin\omega t - (\cos^2\omega t + \sin^2\omega t) \\ 0 \\ \sin 2\theta\cos\omega t - \cos 2\theta\sin\omega t + \sin\omega t\cos\omega t - \sin\omega t\cos\omega t \end{pmatrix} = r \begin{pmatrix} \cos(2\theta - \omega t) - 1 \\ 0 \\ \sin(2\theta - \omega t) \end{pmatrix}. \tag{A17}
\end{aligned}$$
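The closed form in Equation A17 can be verified numerically. The sketch below is our own check (parameter values are arbitrary): it builds the rotation matrix of Equation A16, applies Equation A13, and compares the result against Equation A17:

```python
import numpy as np

def target_in_camera(t, theta, omega, r, h_eye):
    # Observer eye position on the circular path (Equation A5)
    O = np.array([r * np.cos(omega * t), h_eye, r * np.sin(omega * t)])
    # Fixed target at eye level on the future path (Equation A6)
    P = np.array([r * np.cos(2 * theta), h_eye, r * np.sin(2 * theta)])
    # Body orientation phi = omega * t (Equation A15) and the
    # rotation matrix of Equation A16
    phi = omega * t
    A = np.array([[np.cos(phi), 0.0, np.sin(phi)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(phi), 0.0, np.cos(phi)]])
    return A @ (P - O)  # Equation A13

r, h_eye, theta, omega, t = 5.0, 1.6, 0.4, 0.3, 0.7
P_cam = target_in_camera(t, theta, omega, r, h_eye)

# Closed form of Equation A17
u = 2 * theta - omega * t
expected = r * np.array([np.cos(u) - 1.0, 0.0, np.sin(u)])
```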
 
According to Equation A1, the projected position of the target on the image plane, p_target, is given as 

$$\mathbf{p}_{target} = \begin{pmatrix} x_{target} \\ y_{target} \end{pmatrix} = \begin{pmatrix} \dfrac{\cos(2\theta - \omega t) - 1}{\sin(2\theta - \omega t)} \\ 0 \end{pmatrix}. \tag{A18}$$
 
The angular velocity of the target on the image screen, τ, measured in the same rotational sense as the gaze angle α in Equation A7 (i.e., positive in the direction opposite to the image x-axis), is thus given as 

$$\begin{aligned}
\tau &= -\frac{d}{dt}\tan^{-1}(x_{target}) = -\frac{d}{dt}\tan^{-1}\!\left(\frac{\cos(2\theta - \omega t) - 1}{\sin(2\theta - \omega t)}\right) \\
&= \omega\,\frac{1}{1 + \left(\frac{\cos(2\theta - \omega t) - 1}{\sin(2\theta - \omega t)}\right)^{2}} \cdot \frac{-\sin^{2}(2\theta - \omega t) - \left(\cos^{2}(2\theta - \omega t) - \cos(2\theta - \omega t)\right)}{\sin^{2}(2\theta - \omega t)} \\
&= \omega\,\frac{\cos(2\theta - \omega t) - 1}{2 - 2\cos(2\theta - \omega t)} = -\frac{\omega}{2}. \tag{A19}
\end{aligned}$$
 
Hence, the observer rotation R_Y in the eye-centered coordinate system for the natural viewing condition is 

$$R_Y = R_{eye} + R_{body} = \tau + \omega = -\frac{\omega}{2} + \omega = \frac{\omega}{2}. \tag{A20}$$
 
For the unnatural viewing condition, the simulated observer body orientation (i.e., the “camera” in the display) is fixed and parallel to the Z′-axis in the world (Figure 7, middle row), thus R_body = 0. According to Equation A16, the rotation matrix in this viewing condition becomes 

$$\mathbf{A} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \tag{A21}$$
 
Substituting Equations A5, A6, and A21 into Equation A13, the target position in the camera-centered coordinate system, P_target, becomes 

$$\mathbf{P}_{target} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{pmatrix} r\cos 2\theta - r\cos\omega t \\ 0 \\ r\sin 2\theta - r\sin\omega t \end{pmatrix} = r \begin{pmatrix} \cos 2\theta - \cos\omega t \\ 0 \\ \sin 2\theta - \sin\omega t \end{pmatrix}. \tag{A22}$$
 
The projected position of the target on the image plane, p_target, is then given as 

$$\mathbf{p}_{target} = \begin{pmatrix} x_{target} \\ y_{target} \end{pmatrix} = \begin{pmatrix} \dfrac{\cos 2\theta - \cos\omega t}{\sin 2\theta - \sin\omega t} \\ 0 \end{pmatrix}, \tag{A23}$$
and the angular velocity of the target on the image screen, τ, is given as 

$$\begin{aligned}
\tau &= -\frac{d}{dt}\tan^{-1}(x_{target}) = -\frac{d}{dt}\tan^{-1}\!\left(\frac{\cos 2\theta - \cos\omega t}{\sin 2\theta - \sin\omega t}\right) \\
&= -\frac{d}{dt}\tan^{-1}\!\left(\frac{-2\sin\frac{2\theta+\omega t}{2}\sin\frac{2\theta-\omega t}{2}}{2\cos\frac{2\theta+\omega t}{2}\sin\frac{2\theta-\omega t}{2}}\right) = \frac{d}{dt}\tan^{-1}\!\left(\frac{\sin\frac{2\theta+\omega t}{2}}{\cos\frac{2\theta+\omega t}{2}}\right) \\
&= \frac{d}{dt}\!\left(\frac{2\theta+\omega t}{2}\right) = \frac{\omega}{2}. \tag{A24}
\end{aligned}$$
 
Hence, the observer rotation R_Y in the eye-centered coordinate system for the unnatural viewing condition is 

$$R_Y = R_{eye} + R_{body} = \tau + 0 = \frac{\omega}{2}. \tag{A25}$$
 
That is, the observer rotation R_Y in the eye-centered coordinate system is the same for the natural and unnatural viewing conditions in Experiment 2 and is identical to that for the target viewing condition in Experiment 1 (see Equation A9). Accordingly, the rotational component v_R in the retinal velocity field is also the same for these two conditions and identical to that of the target viewing condition in Experiment 1 (see Equation A10). 
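This equivalence can be confirmed numerically. The sketch below is our own check with arbitrary parameter values; τ is computed as minus the time derivative of the target's screen angle, following the sign convention implied by Equation A7. The natural condition uses Equation A18 and the unnatural condition uses Equation A23:

```python
import math

def x_target_natural(t, theta, omega):
    # Equation A18: camera rotates with the heading at omega
    u = 2 * theta - omega * t
    return (math.cos(u) - 1.0) / math.sin(u)

def x_target_unnatural(t, theta, omega):
    # Equation A23: camera fixed in the world
    return ((math.cos(2 * theta) - math.cos(omega * t))
            / (math.sin(2 * theta) - math.sin(omega * t)))

def observer_rotation(x_target, R_body, t, theta, omega, dt=1e-6):
    # tau = R_eye is minus the rate of change of the target's
    # screen angle tan^-1(x_target)
    s0 = math.atan(x_target(t - dt, theta, omega))
    s1 = math.atan(x_target(t + dt, theta, omega))
    tau = -(s1 - s0) / (2 * dt)
    return tau + R_body  # Equation A12

theta, omega, t = 0.4, 0.3, 0.5
R_Y_natural = observer_rotation(x_target_natural, omega, t, theta, omega)
R_Y_unnatural = observer_rotation(x_target_unnatural, 0.0, t, theta, omega)
# Both equal omega / 2, matching Equations A20 and A25
```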
Acknowledgments
This study was supported by a grant from the Research Grants Council of Hong Kong (HKU 7480/10H) to L. Li. We thank Diederick Niehorster for his help with data collection and analyses. We also thank Diederick Niehorster, Marty Banks, and two anonymous reviewers for their helpful comments on a previous draft of this article. 
Commercial relationships: none. 
Corresponding author: Li Li. 
Email: lili@hku.hk. 
Address: Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong SAR. 
References
Andersen G. J. Enriquez A. (2006). Use of landmarks and allocentric reference frames for the control of locomotion. Visual Cognition, 13, 119–128. [CrossRef]
Banks M. S. Ehrlich S. M. Backus B. T. Crowell J. A. (1996). Estimating heading during real and simulated eye movements. Vision Research, 36, 431–443. [CrossRef] [PubMed]
Bertin R. J. V. Israel I. (2005). Optic-flow-based perception of two-dimensional trajectories and the effects of a single landmark. Perception, 34, 453–475. [CrossRef] [PubMed]
Bertin R. J. V. Israel I. Lappe M. (2000). Perception of two-dimensional, simulated ego-motion trajectories from optic flow. Vision Research, 40, 2951–2971. [CrossRef] [PubMed]
Bruss A. R. Horn B. P. (1983). Passive navigation. Computer Vision, Graphics, and Image Processing, 21, 3–20. [CrossRef]
Crowell J. A. Andersen R. A. (2001). Pursuit compensation during self-motion. Perception, 30, 1465–1488. [CrossRef] [PubMed]
Crowell J. A. Banks M. S. (1993). Perceiving heading with different retinal regions and types of optic flow. Perception & Psychophysics, 53, 325–337. [CrossRef] [PubMed]
Cutting J. E. (1996). Wayfinding from multiple sources of local information in retinal flow. Journal of Experimental Psychology: Human Perception and Performance, 22, 1299–1313. [CrossRef]
Cutting J. E. Vishton P. M. Fluckiger M. Baumberger B. Gerndt J. D. (1997). Heading and path information from retinal flow in naturalistic environments. Perception & Psychophysics, 59, 426–441. [CrossRef] [PubMed]
Ehrlich S. M. Beck D. M. Crowell J. A. Freeman T. C. Banks M. S. (1998). Depth information and perceived self-motion during simulated gaze rotations. Vision Research, 38, 3129–3145. [CrossRef] [PubMed]
Fermuller C. Aloimonos Y. (1995). Direct perception of three-dimensional motion from patterns of visual motion. Science, 270, 1973–1976. [CrossRef] [PubMed]
Freeman T. C. (1999). Path perception and Filehne illusion compared: Model and data. Vision Research, 39, 2659–2667. [CrossRef] [PubMed]
Freeman T. C. Banks M. S. (1998). Perceived head-centric speed is affected by both extra-retinal and retinal errors. Vision Research, 38, 941–945. [CrossRef] [PubMed]
Freeman T. C. Banks M. S. Crowell J. A. (2000). Extraretinal and retinal amplitude and phase errors during Filehne illusion and path perception. Perception & Psychophysics, 62, 900–909. [CrossRef] [PubMed]
Gibson J. J. (1950). The perception of the visual world. Boston: Houghton Mifflin.
Grigo A. Lappe M. (1999). Dynamical use of different sources of information in heading judgments from retinal flow. Journal of the Optical Society of America A, 16, 2079–2091. [CrossRef]
Haarmeier T. Bunjes F. Lindner A. Berret E. Thier P. (2001). Optimizing visual motion perception during eye movements. Neuron, 32, 527–535. [CrossRef] [PubMed]
Heeger D. J. Jepson A. D. (1990). Visual perception of three-dimensional motion. Neural Computation, 2, 129–137. [CrossRef]
Hildreth E. (1992). Recovering heading from visually-guided navigation. Vision Research, 32, 1177–1192. [CrossRef] [PubMed]
Kim N. G. Turvey M. T. (1999). Eye movements and a rule for perceiving direction of heading. Ecological Psychology, 11, 233–248. [CrossRef]
Kontsevich L. L. Tyler C. W. (1999). Bayesian adaptive estimation of psychometric slope and threshold. Vision Research, 39, 2729–2737. [CrossRef] [PubMed]
Lee D. N. Lishman R. (1977). Visual control of locomotion. Scandinavian Journal of Psychology, 18, 224–230. [CrossRef] [PubMed]
Li L. Chen J. Peng X. (2009). Influence of visual path information on human heading perception during rotation. Journal of Vision, 9(3):29, 1–14, http://www.journalofvision.org/content/9/3/29, doi:10.1167/9.3.29. [PubMed] [Article] [CrossRef] [PubMed]
Li L. Cheng J. C. K. (2011). Perceiving path from optic flow. Journal of Vision, 11(1):22, 1–15, http://www.journalofvision.org/content/11/1/22, doi:10.1167/11.1.22. [PubMed] [Article] [CrossRef] [PubMed]
Li L. Sweet B. T. Stone L. S. (2006). Humans can perceive heading without visual path information. Journal of Vision, 6(9):2, 874–881, http://www.journalofvision.org/content/6/9/2, doi:10.1167/6.9.2. [PubMed] [Article] [CrossRef]
Li L. Warren W. H. (2000). Perception of heading during rotation: Sufficiency of dense motion parallax and reference objects. Vision Research, 40, 3873–3894. [CrossRef] [PubMed]
Li L. Warren W. H. (2002). Retinal flow is sufficient for steering during observer rotation. Psychological Science, 13, 485–491. [CrossRef] [PubMed]
Li L. Warren W. H. (2004). Path perception during rotation: Influence of instructions, depth range and dot density. Vision Research, 44, 1879–1889. [CrossRef] [PubMed]
Longuet-Higgins H. C. Prazdny K. (1980). The interpretation of a moving retinal image. Proceedings of the Royal Society of London, B, 208, 385–397. [CrossRef]
Mack A. Herman E. (1978). The loss of position constancy during pursuit eye movements. Vision Research, 18, 55–62. [CrossRef] [PubMed]
Rashbass C. (1961). The relationship between saccadic and smooth tracking eye movements. The Journal of Physiology, 159, 326–338. [CrossRef] [PubMed]
Regan D. Beverly K. I. (1982). How do we avoid confounding the direction we are looking and the direction we are moving? Science, 215, 194–196. [CrossRef] [PubMed]
Rieger J. H. Lawton D. T. (1985). Processing differential image motion. Journal of the Optical Society of America, A, 2, 354–360. [CrossRef]
Royden C. S. (1994). Analysis of misperceived observer motion during simulated eye rotations. Vision Research, 34, 3215–3222. [CrossRef] [PubMed]
Royden C. S. Banks M. S. Crowell J. A. (1992). The perception of heading during eye movements. Nature, 360, 583–585. [CrossRef] [PubMed]
Saunders J. A. (2010). View rotation is used to perceive path curvature from optic flow. Journal of Vision, 10(13):25, 1–15, http://www.journalofvision.org/content/10/13/25, doi:10.1167/10.13.25. [PubMed] [Article] [CrossRef] [PubMed]
Saunders J. A. Ma K. Y. (2011). Can observers judge future circular path relative to a target from retinal flow? Journal of Vision, 11(7):16, 1–17, http://www.journalofvision.org/content/11/7/16, doi:10.1167/11.7.16. [PubMed] [Article] [CrossRef] [PubMed]
Stone L. S. Perrone J. A. (1997). Human heading estimation during visually simulated curvilinear motion. Vision Research, 37, 573–590. [CrossRef] [PubMed]
Turano K. Wang X. (1994). Visual discrimination between a curved and straight path of self-motion: Effects of forward speed. Vision Research, 34, 107–114. [CrossRef] [PubMed]
van den Berg A. V. (1992). Robustness of perception of heading from optic flow. Vision Research, 32, 1285–1296. [CrossRef] [PubMed]
van den Berg A. V. (1996). Judgments of heading. Vision Research, 36, 2337–2350. [CrossRef] [PubMed]
van den Berg A. V. Brenner E. (1994a). Humans combine the optic flow with static depth cues for robust perception of heading. Vision Research, 34, 2153–2167. [CrossRef]
van den Berg A. V. Brenner E. (1994b). Why two eyes are better than one for judgments of heading. Nature, 371, 700–702. [CrossRef]
van den Berg A. Beintema J. A. Frens M. A. (2001). Heading and path percepts from visual flow and eye pursuit signals. Vision Research, 41, 3467–3486. [CrossRef] [PubMed]
Wann J. P. Swapp D. K. (2000). Why you should look where you are going. Nature Neuroscience, 3, 647–648. [CrossRef] [PubMed]
Warren W. Blackwell A. Kurtz K. Hatsopoulos N. Kalish M. (1991). On the sufficiency of the velocity field for perception of heading. Biological Cybernetics, 65, 770–774. [CrossRef]
Warren W. H. Mestre D. R. Blackwell A. W. Morris M. W. (1991). Perception of circular heading from optical flow. Journal of Experimental Psychology: Human Perception and Performance, 17, 28–43. [CrossRef] [PubMed]
Warren W. H. Morris M. W. Kalish M. (1988). Perception of translation heading from optic flow. Journal of Experimental Psychology: Human Perception and Performance, 14, 646–660. [CrossRef] [PubMed]
Wilkie R. M. Wann J. P. (2006). Judgments of path, not heading, guide locomotion. Journal of Experimental Psychology: Human Perception and Performance, 32, 88–96. [CrossRef] [PubMed]
Figure 1
 
Illustrations of the theories for curvilinear path perception: (a) Estimating path curvature from the rotation and translation components in retinal flow and then recovering the curvilinear path relative to heading, (b) estimating path curvature from the change of heading relative to a fixed axis in the world (e.g., the Z-axis) and then recovering the curvilinear path relative to that axis, (c) recovering the curvilinear path by updating the heading with respect to a reference object in the world, and (d) recovering the curvilinear path by updating self-displacements with respect to a reference object in the world.
Figure 2
 
Illustrations of the display conditions in Experiment 1: (a) a textured ground and (b) a textured ground with 20 posts in the depth range of 6–20 m.
Figure 3
 
The flow field and the mid-trial velocity field on the image screen, and the bird's-eye view of the experimental setup for the target viewing condition (top row), the heading viewing condition (middle row), and the Z-axis viewing condition (bottom row), respectively (path rotation = −6°/s). The cross indicates the participant's gaze direction throughout the trial. The light and dark circles in the flow field indicate the heading direction at the beginning and end of the trial. Each blue line in the flow field represents the positions of an environmental point over the course of the trial, and each blue line in the velocity field corresponds to a velocity vector associated with an environmental point. The black curve indicates the simulated path of travel.
Figure 4
 
Data from Experiment 1. Mean path error as a function of path rotation for the textured ground and the textured ground with post displays and for (a) the target, (b) the heading, and (c) the Z-axis viewing conditions. The dotted horizontal line at 0° indicates perfect performance. The dashed black lines indicate the predicted path errors assuming that observers use the rotation in retinal flow for path curvature estimation, and the dotted black line indicates the predicted path errors assuming that observers perceive traveling on a straight path. Error bars are SEs across eight participants.
Figure 5
 
Proportion of trials in which the comparison stimulus was perceived to contain larger rotational flow than the standard is plotted as a function of the path rotation rate of the comparison stimulus (i.e., comparison path rotation rate) for each observer for the standard stimulus at the path rotation rates of (a) 3°/s, (b) 4.5°/s, and (c) 6°/s. The black line indicates the best-fitting psychometric function (cumulative Gaussian) averaged across the five observers.
Figure 6
 
Illustrations of the velocity fields showing how eye rotations make heading drift on the retina (upper panel) and how pursuit compensation for the real eye rotation in the same direction of the simulated eye rotation reduces the drift of heading relative to the body while pursuit compensation for the real eye rotation in the opposite direction of the simulated eye rotation would make heading drift relative to the body (lower panel). The illustrated real and simulated eye rotations are of the same magnitude as indicated by the arrows in the figure.
Figure 7
 
The flow field and the mid-trial velocity field on the image screen, and the bird's-eye view of the experimental setup for the natural (top row) and unnatural (middle row) viewing conditions, respectively (path rotation = −6°/s). The retinal projections of these two viewing conditions are identical (bottom row). The light and dark red circles in the flow field indicate the target direction at the beginning and end of the trial, and the light and dark blue circles indicate the heading direction at the beginning and end of the trial. Each blue line in the flow field represents the positions of an environmental point over the course of the trial, and each blue line in the velocity field corresponds to a velocity vector associated with an environmental point. The black curve indicates the simulated path of travel.
Figure 8
 
Mean path error as a function of path rotation for (a) the natural and unnatural viewing conditions in Experiment 2 and for (b) the unnatural viewing condition in Experiment 2 and the target and Z-axis viewing conditions with the textured ground display in Experiment 1. The dotted horizontal line at 0° indicates perfect performance. Error bars are SEs across nine participants for the data from Experiment 2 and across eight participants for the data from Experiment 1.
Figure 9
 
An illustration of the instantaneous velocity field of the display used by Saunders (2010). The green cross indicates the simulated observer gaze direction that is aligned with the instantaneous heading. The dotted red line indicates the actual simulated path of forward travel, and the blue lines show some of the many possible ways to spatially integrate velocity vectors to form other locomotor flow lines that would also appear to pass directly beneath the observer.
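The ambiguity Figure 9 depicts turns on how instantaneous velocity vectors are spatially integrated into locomotor flow lines. As a rough numeric sketch of that construction (not the authors' code), the snippet below Euler-integrates a hypothetical flow field produced by pure forward translation over a ground plane, assuming a pin-hole camera with focal length 1, eye height h, and image y measured downward so that ground points have y > 0 and depth Z = h / y:

```python
# Hypothetical flow field: pure forward translation at speed Tz over a
# ground plane at eye height h; depth recovered as Z = h / y for y > 0.
def image_velocity(x, y, Tz=1.0, h=1.6):
    Z = h / y
    return x * Tz / Z, y * Tz / Z

# Euler-integrate the instantaneous velocity field from a starting image
# point -- one way to "spatially integrate velocity vectors" into a
# locomotor flow line.
def trace_flow_line(x0, y0, dt=0.01, steps=50):
    pts = [(x0, y0)]
    x, y = x0, y0
    for _ in range(steps):
        vx, vy = image_velocity(x, y)
        x, y = x + vx * dt, y + vy * dt
        pts.append((x, y))
    return pts

line = trace_flow_line(0.1, 0.2)
# Under pure translation the field is radial, so y/x stays constant and
# every integrated flow line is a ray through the focus of expansion.
```

Because an instantaneous field underdetermines the path, different integration choices (the blue lines in the figure) can all yield flow lines that appear to pass beneath the observer.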
Figure A1
 
A Cartesian coordinate system XYZ that is centered on the observer's eye and moves with the observer. p(x, y) indicates the projected position of an environment point P(X, Y, Z) on the image plane that is at unit distance along the Z-axis (i.e., the eye has a focal length of one). v represents the image velocity of p, with v_x and v_y indicating the velocity components along the x- and y-axes in the image plane.
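The coordinate frame in Figure A1 supports the standard image-velocity equations for a moving eye (the Longuet-Higgins and Prazdny formulation). As a sketch under that assumption -- the paper's appendix may use a different sign convention -- a point P = (X, Y, Z) projects to p = (x, y) = (X/Z, Y/Z) with focal length 1, and its image velocity under eye translation T and rotation W is:

```python
def project(P):
    """Perspective projection onto the image plane at unit distance."""
    X, Y, Z = P
    return X / Z, Y / Z

def image_velocity(P, T, W):
    """Image velocity (vx, vy) of P for eye translation T = (Tx, Ty, Tz)
    and rotation W = (Wx, Wy, Wz), both expressed in eye coordinates.
    Standard Longuet-Higgins & Prazdny form; signs follow the convention
    that forward translation (Tz > 0) produces radial expansion."""
    X, Y, Z = P
    Tx, Ty, Tz = T
    Wx, Wy, Wz = W
    x, y = X / Z, Y / Z
    vx = (-Tx + x * Tz) / Z + x * y * Wx - (1 + x ** 2) * Wy + y * Wz
    vy = (-Ty + y * Tz) / Z + (1 + y ** 2) * Wx - x * y * Wy - x * Wz
    return vx, vy
```

Note that the rotational terms are independent of depth Z, which is why rotation in the retinal velocity field cannot be removed by depth cues alone.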
Figure A2
 
The bird's-eye view of an observer traveling on a circular path in the XZ′ plane of the world coordinate system. The origin of the world coordinate system is at the center of the circular path. The simulated observer gaze direction (a) points toward a fixed target on the future path, (b) is aligned with the instantaneous heading, and (c) is fixed and parallel to the Z′-axis in the world.
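The circular-path geometry of Figure A2 can be written down directly. The sketch below (hypothetical helper functions, not the paper's implementation) places an observer at speed s on a circle of radius R centered at the world origin and computes the three simulated gaze directions of panels (a)-(c); the heading is the tangent to the circle, so it rotates at the path rate s / R:

```python
import math

def observer_state(t, R=10.0, s=1.0):
    """Position and instantaneous heading (unit tangent) at time t for
    counter-clockwise travel, starting at (R, 0) in the world XZ' plane."""
    theta = s * t / R  # angle traveled along the circular path
    pos = (R * math.cos(theta), R * math.sin(theta))
    heading = (-math.sin(theta), math.cos(theta))
    return pos, heading

def gaze_direction(t, condition, target_angle=1.0, R=10.0, s=1.0):
    """Simulated gaze for the three Figure A2 conditions: toward a fixed
    target on the future path, along heading, or parallel to the Z'-axis."""
    pos, heading = observer_state(t, R, s)
    if condition == "heading":
        return heading
    if condition == "z_axis":
        return (0.0, 1.0)  # fixed in the world, parallel to Z'
    # "target": unit vector from the observer to a fixed point on the path
    target = (R * math.cos(target_angle), R * math.sin(target_angle))
    dx, dz = target[0] - pos[0], target[1] - pos[1]
    n = math.hypot(dx, dz)
    return (dx / n, dz / n)
```

In the heading condition the gaze rotates with the observer at s / R; in the Z'-axis condition it is fixed in the world; in the target condition its rotation rate varies as the observer approaches the target.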