Open Access
Article  |   October 2022
Perception of object motion during self-motion: Correlated biases in judgments of heading direction and object motion
Author Affiliations
  • Xing Xing
    Department of Psychology, University of Hong Kong, Hong Kong
    xingxing@connect.hku.hk
  • Jeffrey A. Saunders
    Department of Psychology, University of Hong Kong, Hong Kong
    jsaun@hku.hk
Journal of Vision October 2022, Vol.22, 8. doi:https://doi.org/10.1167/jov.22.11.8
Abstract

This study investigated the relationship between perceived heading direction and perceived motion of an independently moving object during self-motion. Using a dual task paradigm, we tested whether object motion judgments showed biases consistent with heading perception, both across conditions and from trial to trial. Subjects viewed simulated self-motion and estimated their heading direction (Experiment 1), or walked toward a target in virtual reality with conflicting physical and visual cues (Experiment 2). During self-motion, an independently moving object briefly appeared, with varied horizontal velocity, and observers judged whether the object was moving leftward or rightward. In Experiment 1, heading estimates showed an expected center bias, and object motion judgments showed corresponding biases. Trial-to-trial variations were also correlated: on trials with a more rightward heading bias, object motion judgments were consistent with a more rightward heading, and vice versa. In Experiment 2, we estimated the relative weighting of visual and physical cues in control of walking and object motion judgments. Both were strongly influenced by nonvisual cues, with a lower nonvisual weighting for object motion judgments than for walking (63% vs. 86%). There were also trial-to-trial correlations between biases in walking direction and object motion judgments. The results provide evidence that shared mechanisms contribute to heading perception and perception of object motion. 

Introduction
Perceiving the motion of independently moving objects during self-motion is potentially challenging. The problem is easy when the observer is stationary because the movement of the object is the only source of retinal motion. When the observer is also moving, however, the retinal motion is a combination of optic flow due to self-motion and object motion (Figure 1). To detect the presence of an independently moving object and identify its motion in world coordinates, the visual system has to distinguish the component of retinal motion that is not due to self-motion. 
Figure 1.
 
The problem of determining the motion of an independently moving object during self-motion. Retinal motion is a combination of optic flow due to self-motion and object motion. To identify object motion in world coordinates, the visual system has to distinguish the component of retinal motion that is not due to self-motion.
One solution proposed by Warren, Rushton and colleagues is flow parsing, which takes advantage of the fact that optic flow due to self-motion has a global pattern (Foulkes, Rushton, & Warren, 2013b; Rushton, Bradshaw, & Warren, 2007; Rushton & Warren, 2005, 2007, 2008, 2009a). They suggest that the motion field can be separated into a global radial pattern due to self-motion and additional motion due to object motion. Many studies have found that observers can judge heading direction from optic flow alone (e.g., Warren, Blackwell, Kurtz, Hatsopoulos, & Kalish, 1991; Warren, Morris, & Kalish, 1988), and there is evidence for specialized neural mechanisms for analysis of global optic flow (Warren & Hannon, 1988; Vaina, Beardsley, & Rushton, 2004). If the visual system is able to identify the global pattern of optic flow due to self-motion, then any remaining motion can be attributed to independently moving objects. There is evidence that local motion contrast cannot fully account for this process (Warren & Rushton, 2008, 2009a). 
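As an illustration of the flow-parsing logic (and of how a biased heading estimate would distort it), the sketch below subtracts the self-motion flow implied by an estimated heading from the object's retinal motion and reads the residual as object motion. The pinhole flow model, scene values, and function names are our illustrative assumptions, not the paper's stimuli or the authors' implementation.

```python
import numpy as np

def self_motion_flow(x, y, Z, T):
    """Image velocity (pinhole model, small angles) of a STATIC point at image
    position (x, y) and depth Z for observer translation T = (Tx, Ty, Tz)."""
    Tx, Ty, Tz = T
    return np.array([(-Tx + x * Tz) / Z, (-Ty + y * Tz) / Z])

# Illustrative values (not from the paper): observer moves at 1 m/s with a true
# heading 15 deg to the right; the object is 4 m away and drifts rightward.
speed = 1.0
true_heading = np.radians(15.0)
T_true = speed * np.array([np.sin(true_heading), 0.0, np.cos(true_heading)])

Z_obj, x_obj, y_obj = 4.0, 0.05, -0.2   # object depth and image position
v_lateral_world = 0.10                  # object's true rightward speed (m/s)

# Retinal motion of the object = self-motion flow + flow from the object's own motion
v_retinal = self_motion_flow(x_obj, y_obj, Z_obj, T_true) \
            + np.array([v_lateral_world / Z_obj, 0.0])

# Flow parsing with a heading estimate biased toward the center (10 deg instead of 15 deg)
est_heading = np.radians(10.0)
T_est = speed * np.array([np.sin(est_heading), 0.0, np.cos(est_heading)])
residual = v_retinal - self_motion_flow(x_obj, y_obj, Z_obj, T_est)

# Convert the residual lateral image motion back to world units at the object's depth.
# With an unbiased heading estimate this recovers 0.10 m/s; with the biased estimate
# the rightward-moving object appears nearly stationary, anticipating Figure 2.
print("recovered lateral speed (m/s):", residual[0] * Z_obj)
```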
Some studies have compared heading perception and perception of object motion to test whether they share common mechanisms and found that some similar factors influenced both tasks. Foulkes, Rushton, and Warren (2013a) and Foulkes et al. (2013b) tested heading judgments and object motion judgments in conditions with varying amounts of noise and number of dots, and found the profiles of accuracy and precision were similar for the two tasks. They suggest that there is a common third process influenced by optic flow quality and quantity that provides input to both heading perception and perception of object motion, or else that heading perception relies on flow parsing. Other research has observed that nonvisual cues about self-motion are important in perception of object motion (Dupin & Wexler, 2013; Dyde & Harris, 2008; Fajen, Parade, & Matthis, 2013; MacNeilage, Zhang, DeAngelis, & Angelaki, 2012; Niehorster & Li, 2017; Xie, Niehorster, Lappe, & Li, 2020), similar to how nonvisual cues affect perceived heading. 
However, some other studies have observed differences between perception of heading direction from optic flow and perception of object motion. Some factors that affect heading perception do not seem to have corresponding effects on perception of object motion. Superimposing a translating pattern of optic flow causes perceived heading to be biased (Duffy & Wurtz, 1993), but does not bias the perception of object motion (Warren et al., 2012). Static radial patterns have been found to influence perceived heading from optic flow (Niehorster, Cheng, & Li, 2010), but Rushton, Niehorster, Warren, and Li (2018a) found no effect of static radial patterns on object motion judgments. Rushton et al. (2018a) found that the reverse phi effect (Anstis, 1970) did not produce biases in object motion judgments. Rushton et al. (2018b) found different effects on the precision of heading judgments and object motion judgments: simulated observer rotation decreased the precision of heading estimates, but not the precision of object motion judgments. These results suggest that perception of object motion involves mechanisms that are not fully shared with mechanisms that underlie heading perception. 
In this study, we used a dual task paradigm to investigate whether object motion judgments show biases that are consistent with biases in heading judgments. In Experiment 1, we tested conditions that produce systematic center biases in heading judgments, and tested whether object motion judgments are consistent with the biased heading judgments. In Experiment 2, we tested conditions where visual and nonvisual self-motion cues provide conflicting information and tested whether nonvisual cues contribute in similar ways. In both experiments, we also tested whether trial-to-trial variability in heading perception covaries with biases in object motion judgments. 
Center bias
Heading judgments often show a bias toward the center, which has been observed in multiple studies (Ehrlich, Beck, Crowell, Freeman, & Banks, 1998; Hanada & Ejima, 2000; Johnston, White, & Cumming, 1973; Saunders, 2010, 2014; Warren & Saunders, 1995; Xing & Saunders, 2016). No previous study has tested whether perception of object motion shows biases consistent with center bias in heading perception. 
If perception of object motion were based on an inaccurate heading, systematic biases would be expected. Figure 2 illustrates how object motion would be perceived if it were determined by an unbiased or biased estimate of heading direction. If object motion were determined based on the heading direction specified by optic flow (unbiased, top panels), objects would be perceived to be stationary when the retinal motion is consistent with the global pattern. If object motion were determined based on a biased estimate of heading direction (bottom panels), objects that are stationary relative to the background may be perceived as moving, and objects perceived as stationary would have some motion relative to the background. If perceived heading were biased in a leftward direction, as in the example shown in Figure 2, the object would have to be moving rightward relative to the background to have retinal motion that is consistent with the biased heading direction. 
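This prediction can be stated compactly (our formalization; the expression below does not appear in the paper). If heading estimates are compressed toward the center by a proportional bias b, the perceived heading and the predicted object motion PSE are

\hat{H} \approx (1 - b)\,H, \qquad \mathrm{PSE}_{\mathrm{object\ FOE}} \approx \hat{H} = (1 - b)\,H,

so the object motion FOE judged as stationary should be shifted toward the center by roughly bH (about 1.5° for b = 0.10 and H = 15°). Experiment 1 tests whether the shift in object motion PSEs matches the center bias in heading estimates in this way.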
Figure 2.
 
Predicted object motion judgments when perceived heading is unbiased (top) or biased in a leftward direction (bottom). The white circle indicates the true heading and the grey circle is the perceived heading. The solid arrow indicates the retinal motion of the object and the dotted line shows the direction of retinal motion that would be consistent with the perceived heading. If perception of object motion were based on accurate perception of heading, the object should appear stationary when its retinal motion aligns with the radial direction from the FOE (top row). If perception of heading were biased and object motion were determined based on the biased perceived heading, the retinal motion of the object would have to be different to be perceived as stationary (bottom row). If the perceived heading were biased to the left, as in this example, an object would have to be moving rightward relative to the environment to be perceived as stationary.
In Experiment 1, subjects viewed simulated self-motion in directions ±15° from the center and judged both their heading direction and whether an independently moving object was moving leftward or rightward (Figure 3a). Judgments of heading were expected to show a bias toward the center. If perception of object motion relies on processes shared with heading perception, then object motion judgments should show biases consistent with the center biases in heading perception. 
Figure 3.
 
(a) Illustration of the task and conditions in Experiment 1. Subjects viewed 3.5 seconds of simulated self-motion in virtual reality while seated. An independently moving object appeared 1 second after the onset of self-motion and remained visible for 1 second. Subjects estimated their heading direction and judged whether the object was moving leftward or rightward in world coordinates. The heading of the simulated self-motion was either +15° or −15°. Filler trials with random heading directions were also included. (b) Illustration of the task and conditions in Experiment 2. Subjects walked toward a virtual target in virtual reality. An independently moving object appeared after the subject had moved 1 m and remained visible for 1 second. At 2.5 m, the subjects were cued to stop, and they judged whether the object was moving leftward or rightward. The visual direction of self-motion differed from the physical direction of self-motion by −5° or +5°.
Visual and nonvisual self-motion cues
Another way to investigate the consistency of heading perception and object motion perception would be to compare the effects of nonvisual cues. A number of previous studies have found evidence that nonvisual cues contribute to perception of object motion (Dokka, MacNeilage, DeAngelis, & Angelaki, 2015; Dokka, Park, Jansen, DeAngelis, & Angelaki, 2019; Dupin & Wexler, 2013; Dyde & Harris, 2008; Fajen et al., 2013; Fajen & Matthis, 2013; MacNeilage et al., 2012; Niehorster & Li, 2017; Xie et al., 2020). However, most of these did not directly compare perception of self-motion and object motion. The only previous study that used a dual task method was Dokka et al. (2019), but their focus was on how the interpretation of object motion modulates the effect of object motion on perceived heading. No previous studies have tested whether the relative influence of visual and nonvisual cues on perceived heading is consistent with the relative influence on perception of object motion. 
In Experiment 2, subjects performed object motion judgments while walking in virtual reality with conflicting visual and physical self-motion cues (Figure 3b). We used conditions in which the visual heading direction was offset from the physical direction by ±5°, similar to the method used in a number of previous studies (Bruggeman & Warren, 2010; Bruggeman, Zosh, & Warren, 2007; Saunders, 2014; Saunders & Durgin, 2011; Warren, Kay, Zosh, Duchon, & Sahuc, 2001). Subjects walked to a distant target and made judgments about the moving direction of an independently moving object that appeared briefly during walking. For control of walking, the consistency of performance with visual or nonvisual information provides a measure of the relative weighting of each cue. For judgments of object motion, the relative weighting of each cue can be inferred from the direction of self-motion for which observers perceive the independently moving object as stationary. 
Based on Saunders (2014), we expected that walking trajectories in our conditions would be strongly influenced by nonvisual cues (70%–80%). If an integrated perception of self-motion is used to identify an independently moving object and perceive its motion, then one would expect a similar influence of nonvisual information. On the other hand, if object motion judgments depend on separate mechanisms, they might not be influenced by nonvisual information in the same way as perceived heading for control of walking. 
Trial-to-trial variation
Our dual task paradigm allowed us to test whether trial-to-trial variations in object motion judgments are related to trial-to-trial variations in heading perception. There is variability across trials in heading estimates, some of which is likely due to errors in perceived self-motion. For example, if subjects walk in a direction to the left of the target direction, this could be due to a rightward bias in perceived heading. If this biased perception of self-motion were used to interpret the motion of the object, then subjects should have a greater chance of judging the object as moving leftward. We would therefore expect some trial-to-trial correlation between the biases in heading estimation and the biases in judgments of object motion if there are shared mechanisms. 
Experiment 1: Object motion and heading judgments
Experiment 1 tested whether center biases in heading perception produce corresponding biases in object motion judgments, and whether there is a relationship between heading biases and object motion judgments from trial to trial. Subjects viewed simulated self-motion in directions ±15° from the center and judged both their heading direction and whether an independently moving object was moving leftward or rightward. Corresponding biases for the two tasks would suggest that perception of object motion shares some common process with heading perception. 
Methods
Participants
Twenty-four naïve subjects from the University of Hong Kong were paid to participate in the experiment. Subjects were required to have normal or corrected-to-normal vision. Individuals with a known sensitivity to motion sickness were excluded. The procedures were approved by the Human Research Ethics Committee for Non-Clinical Faculties (HRECNCF) at the University of Hong Kong. 
The sample size was chosen to be larger than required to detect a center bias effect. Center bias effects on heading estimates tend to be large. In a previous study testing heading judgments with similar stimuli (Xing & Saunders, 2016), we observed center biases with an effect size of dz = 1.9. Such effects could be easily detected with a small sample size; for example, eight subjects would be sufficient for 99% power. We chose a larger sample size to ensure that our results were robust and that we could analyze individual differences. 
Apparatus and stimuli
The virtual environment was presented on an HTC Vive head-mounted display (HMD). The HMD had a horizontal field of view (FOV) of 110° (approximately 90° per eye) and a vertical FOV of 110°, a total resolution of 2,160 × 1,200 (1,080 × 1,200 per eye), and a refresh rate of 60 Hz. The environment was rendered by an NVIDIA GeForce GTX 970 graphics card with OpenGL. Noise-cancelling earphones (Etymotic Research MC5) were used to eliminate possible auditory position cues. 
The displays simulated movement of the observer along a textured ground plane in a direction that was either 15° or −15° relative to the center (or a random heading direction on filler trials; see Procedure). The forward speed of the observer was 1 m/s. The texture on the ground plane was a Voronoi pattern of gray tiles with varied brightness. The tiles were colored so that the texture had spatial frequency energy at multiple spatial scales, thereby providing strong visual motion signals in both near and far space. The moving object was a yellow dot with a diameter of 0.06 m. The initial position of the moving object was 4 m in front of the observer and 0.9 m above the ground, with a random horizontal position in the interval (−0.5 m, 0.5 m) around the path of the observer. 
The horizontal velocity of the object relative to the environment was varied. The motion of the object was defined in terms of the difference between the object motion focus of expansion (FOE) and the background FOE. The set of possible object FOEs depended on the heading direction condition. In the 15° heading condition, the FOE of object motion differed by −6.25°, −5°, −3.75°, −2.5°, −1.25°, 0°, or 1.25° (i.e., object FOE varied from 8.75° to 16.25°). In the −15° heading condition, the FOE of object motion differed by −1.25°, 0°, 1.25°, 2.5°, 3.75°, 5°, or 6.25° (i.e., object FOE varied from −16.25° to −8.75°). Informal pilot testing suggested that perception of object motion would be consistent with a heading biased toward the center, so the object motion perceived to be stationary was expected to fall within these ranges. 
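For readers who want to reproduce this parameterization, the sketch below shows one way to convert a desired object motion FOE into a lateral world velocity for the object, given the observer's heading and speed. The geometry (purely horizontal object motion, FOE directions measured in the horizontal plane) and the function name are our assumptions; the paper does not give its exact formula.

```python
import numpy as np

def lateral_speed_for_object_foe(heading_deg, object_foe_deg, speed=1.0):
    """World lateral speed (m/s, rightward positive) that places the FOE of the
    object's image motion at object_foe_deg, for an observer translating at
    `speed` in direction heading_deg. Assumes purely horizontal object motion."""
    h = np.radians(heading_deg)
    f = np.radians(object_foe_deg)
    # The object's image motion radiates from the direction of the observer's
    # velocity minus the object's velocity, so tan(f) = (s*sin(h) - v) / (s*cos(h)).
    return speed * (np.sin(h) - np.cos(h) * np.tan(f))

# Example: 15 deg heading with an object FOE offset of -2.5 deg (FOE at 12.5 deg)
print(lateral_speed_for_object_foe(15.0, 12.5))   # ~0.045 m/s rightward
```

As in the text, a more leftward object FOE corresponds to a more rightward object motion, and an object FOE at the background FOE corresponds to a stationary object.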
Procedure
Participants performed two tasks during simulated forward self-motion. The two tasks were to judge the heading direction of the simulated movement and to identify the motion of an independently moving object that briefly appeared during the simulated movement. After 1 m of simulated movement, the independently moving object appeared and remained visible for 1 second. Subjects were asked to report whether the object was moving to the left or right in world coordinates. The object motion response was not speeded and could be made at any time after the object appeared. The simulated movement ended after 3.75 m of travel. After the movement stopped, subjects indicated the heading direction they had experienced by moving a cursor along a horizontal line at the horizon. 
The trials in an experimental block consisted of 80% experimental trials and 20% filler trials. In experimental trials, the simulated direction of self-motion was either 15° or −15° from the center. In filler trials, the simulated self-motion was in a random direction between −25° and 25° relative to the center. The variations in heading direction on the filler trials were included to discourage subjects from making categorical responses. Each subject completed 350 trials, which consisted of 280 experimental trials and 70 filler trials. One-half of the experimental trials had a 15° heading and one-half had a −15° heading. Trials with different combinations of simulated heading direction and object movement direction were each repeated 10 times. The order was fully randomized within blocks. Before the experiment, subjects performed practice trials to become familiar with the task. In practice trials, feedback about the true heading direction was provided by a small circle presented after subjects made their judgments. No feedback was provided during the main testing blocks. 
Analysis
Object motion judgment
To measure bias in judgments of object motion, we estimated the object motion that would be perceived as stationary. To do so, we fit the probability of judging object motion to be rightward as a function of object motion FOE. The object motion FOE was varied across trials. Object motion FOEs that are more leftward correspond to objects that are moving more rightward relative to the background, and vice versa. Therefore, the probability of judging object motion to be rightward would systematically vary with object motion FOE. We fitted a cumulative Gaussian function to the object motion responses to estimate a point of subjective equality (PSE) for the 15° heading and −15° heading trials. The PSEs provide estimates of the object motion FOEs that would be perceived as stationary given a subject's responses. Our model included separate PSEs for the 15° and −15° trials, but the just noticeable difference (JND) was assumed to be the same for trials with positive and negative headings. 
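A minimal sketch of this fit is given below, assuming maximum-likelihood estimation of two PSEs and a shared spread parameter; the exact optimizer and the paper's precise definition of the JND are not specified in the text, so both are our assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_pses_common_jnd(foe, heading_sign, resp_right):
    """Fit cumulative-Gaussian psychometric functions to rightward responses,
    with separate PSEs for the +15 and -15 deg heading conditions and a shared
    spread (JND) parameter. foe: object motion FOE per trial (deg);
    heading_sign: +1 or -1 per trial; resp_right: 1 if judged rightward, else 0."""
    foe, heading_sign, resp_right = map(np.asarray, (foe, heading_sign, resp_right))

    def neg_log_lik(params):
        pse_pos, pse_neg, log_sigma = params
        sigma = np.exp(log_sigma)
        pse = np.where(heading_sign > 0, pse_pos, pse_neg)
        # More leftward object FOEs correspond to more rightward object motion,
        # so P(rightward) decreases as the object FOE increases.
        p = norm.cdf(-(foe - pse) / sigma)
        p = np.clip(p, 1e-6, 1 - 1e-6)
        return -np.sum(resp_right * np.log(p) + (1 - resp_right) * np.log(1 - p))

    fit = minimize(neg_log_lik, x0=[15.0, -15.0, 0.0], method="Nelder-Mead")
    pse_pos, pse_neg, log_sigma = fit.x
    return pse_pos, pse_neg, np.exp(log_sigma)
```

The fitted PSEs correspond to the object motion FOEs judged stationary and can be compared with the true ±15° headings to quantify a center bias for the object motion task, analogous to the heading judgments.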
Trial-to-trial relationships between heading judgments and object motion judgments
To test trial-to-trial relationships between heading judgments and object motion judgments, a multinomial probit regression was performed. Three predictors were included in the fitting model: the FOE of the object motion relative to the simulated self-motion, the heading condition, and the residual errors in heading estimates from individual trials. Without the residual heading errors, the analysis would be equivalent to fitting two PSEs and a common JND to the responses from the two conditions. Including the residual errors allows us to assess whether any additional variability is predicted by trial-to-trial variations. 
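The sketch below implements the same idea with a standard binary probit (our stand-in for the probit analysis described above; the predictor coding is an assumption). The coefficient on the residual heading error is the quantity tested against zero across subjects.

```python
import numpy as np
import statsmodels.api as sm

def probit_with_heading_residuals(foe, heading_sign, heading_resid, resp_right):
    """Probit regression of rightward/leftward object motion responses on the
    object FOE, the heading condition, and the residual heading error on each
    trial. A reliably nonzero coefficient on heading_resid means trial-to-trial
    heading biases predict the object motion judgments."""
    X = sm.add_constant(np.column_stack([foe, heading_sign, heading_resid]))
    return sm.Probit(np.asarray(resp_right), X).fit(disp=0)

# fit = probit_with_heading_residuals(foe, heading_sign, heading_resid, resp_right)
# print(fit.params)   # the last coefficient is the heading-residual term
```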
Results and discussion
Center bias in heading judgment
Figure 4 plots the histograms of heading judgments from two sample subjects and the mean judged heading across all subjects for the 15° and −15° conditions. The results show the expected center bias: the responses from trials with a 15° heading are generally smaller than 15°, and the responses from trials with a −15° heading are generally larger than −15°. The left sample subject showed a large bias toward the center and the right sample subject showed a smaller center bias. The mean across subjects also showed the expected center bias. 
Figure 4.
 
(a, b) Histograms of heading judgments from two sample subjects. The distribution of responses from trials with 15° heading are shown in blue, and the distribution from trials with −15° heading are shown in red. The left sample subject showed a large bias toward the center and the right sample subject showed a smaller center bias. (c) Results from all subjects. The horizontal lines plot the mean judged heading averaged across subjects for 15° heading (blue) and −15° heading conditions (red). Shaded regions indicate ±1 standard error. The small diamonds plot the mean heading judgments from individual subjects.
For analysis, we computed a measure of the proportional center bias for each subject using results from all test trials. The average center bias was 9.31%, which was significantly greater than zero, t(23) = −4.204, p < .001; dz = −0.858. 
We also analyzed filler trials to check that performance on the test trials and filler trials was consistent. We performed a regression analysis to estimate the proportional center bias on filler trials and found that the mean bias was comparable, 9.55%, and was not significantly different from the bias observed on test trials with ±15° heading, t(23) = −0.145, p = .886; dz = −0.030. These results suggest that subjects were not making categorical responses on test trials. 
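The text does not spell out the formula for the proportional center bias, so the sketch below shows one plausible computation under our assumptions: for the ±15° test trials, the bias is the amount by which the judged separation falls short of the true 30° separation, and for the filler trials it is one minus the slope of judged heading regressed on true heading.

```python
import numpy as np

def center_bias_test_trials(judged_pos15, judged_neg15):
    """Proportional center bias from the +/-15 deg test trials: the amount by
    which the judged separation falls short of the true 30 deg separation."""
    return 1.0 - (np.mean(judged_pos15) - np.mean(judged_neg15)) / 30.0

def center_bias_filler_trials(true_heading, judged_heading):
    """Proportional center bias from filler trials with random headings:
    one minus the slope of judged heading regressed on true heading."""
    slope, _intercept = np.polyfit(true_heading, judged_heading, 1)
    return 1.0 - slope
```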
Center bias in object motion judgment
To measure bias in judgments of object motion, we estimated the object motion that would be perceived as stationary. If object motion judgments were consistent with the correct simulated heading direction (no bias), then observers should perceive the object as stationary when the FOE from object motion is consistent with the background FOE caused by self-motion. If a biased perception of heading was used to interpret object motion, then observers should perceive the object as stationary when the FOE from object motion is consistent with the biased perceived heading. The PSEs of object motion judgments would then be shifted away from the background FOE. 
To estimate the object motion that is perceived to be stationary, we fit the probability of judging object motion to be rightward as a function of object motion FOE (see Methods). Figures 5a and 5b show the psychometric functions from the object motion judgments of two sample subjects. The left subject shows a large center bias, whereas the right subject shows a smaller center bias. For the left subject, the PSE for the 15° heading condition is shifted in the negative direction and the PSE for the −15° condition is shifted in the positive direction. These PSEs indicate that objects would be perceived as stationary when the object motion is consistent with a heading closer to the center. For the right subject, the PSEs were close to 15° and −15°, indicating that objects were perceived as stationary when the object motion was consistent with a heading that was close to the true simulated heading. 
Figure 5.
 
(a, b) Psychometric functions from the object motion judgments of two sample subjects. The graphs plot the mean percentage of trials on which the object was judged to be moving rightward as a function of the object motion FOE for the 15° heading condition (blue dots) and the −15° heading condition (red dots). The curves show the estimated psychometric functions and the asterisks indicate the PSE for each curve. The PSEs correspond to the object motion FOE that would be perceived as stationary on average. Sample subject A shows a large center bias while sample subject C shows a smaller center bias. (c) Object motion PSEs for all subjects. The horizontal lines plot the mean PSEs averaged across subjects for the 15° heading (blue) and −15° heading conditions (red). Shaded regions depict ±1 standard error. The small diamonds plot results from individual subjects.
Figure 5c shows object motion PSEs averaged across all subjects. The mean PSE on trials with 15° heading was biased in the negative direction and the mean PSE on trials with −15° heading was biased in the positive direction, consistent with a center bias. The mean proportional center bias was 10.68%, and the mean JND was 2.58. A one-sample t-test confirmed that the mean center bias was significantly larger than zero, t(23) = −5.453, p < .001; dz = −1.113. The averaged results indicated that object motion judgments were biased in a way that is consistent with a center bias. 
A possible concern is that the range of object FOEs tested in each condition might have caused a bias in responses, unrelated to the biases in perceived heading. We chose the object FOEs to sample around the expected PSEs: 8.75° to 16.25° in the +15° heading condition, and −16.25° to −8.75° in the −15° heading condition. Because the ranges for each condition were asymmetric around the true heading, a bias toward the center of the ranges would produce a shift in the PSEs, and the shift would be in the same direction as a center bias. However, within an experimental block, the overall distribution of object motion was symmetric because the positive and negative heading conditions were randomly intermixed. For this reason, we think it is unlikely that the observed biases were due to range effects. 
Comparison of biases
We compared the center biases from the heading judgments and object motion judgments (Figures 6a and 6b). The average magnitude of the biases did not differ significantly between the two tasks, t(23) = −0.480, p = .636; dz = 0.098. However, there was no significant correlation between individuals' center biases in the two tasks, r = 0.067, p = .756. 
Figure 6.
 
Comparison of center biases in heading judgment and center biases in object motion judgment. (a) Mean center bias for heading judgment task and object motion judgment task. Error bars depict ± standard error. (b) Individual results of center bias in object motion judgments as a function of their center bias in heading judgments. (c) Individual results of constant bias in object motion judgments as a function of their constant bias in heading judgments.
We also compared constant biases from the two tasks (Figure 6c). Subjects showed constant biases in their overall responses in addition to center biases; for example, all of a subject's heading judgments might be shifted to the left. We estimated the constant biases from the heading judgments and object motion judgments and found no significant correlation between the two tasks (r = 0.026; p = .904). 
Trial-to-trial variations
We further analyzed trial-to-trial variations in heading judgments and object motion judgments. If perceived object motion depends on the output of heading estimation, then the randomly varying biases in perceived heading on each individual trial would cause corresponding biases in judgments of object motion. Suppose that the heading estimate on a trial is more rightward than the mean estimate. If perceived object motion were based on this biased perceived heading, then the object motion judgment on that trial would be consistent with a more rightward heading. Similarly, on trials that show a more leftward bias, the object motion judgment would be consistent with a more leftward heading. Rather than being fixed, the PSE of object motion judgments would shift with the trial-to-trial variations in heading bias. 
We performed a multinomial probit regression analysis to test whether residual heading errors were a predictor for object motion judgments (see Methods). We found that the average coefficient representing the influence of residual heading errors was significantly different from zero, t(23) = −3.017, p = .006; dz = 0.623. This finding indicates that there was, on average, a relationship between variations in heading judgments and object motion judgments. 
Figures 7a–c illustrate the trial-to-trial relationship between heading estimates and object motion judgments for three sample subjects. The fit lines show the object motion PSEs as a function of residual heading errors. If there were no correlation between heading estimates and object motion judgments, the best fitting lines would be vertical (i.e., would not depend on trial-to-trial variations). For sample subjects D and E, there was a strong relationship between heading estimates and object motion judgments. The transition point between perceiving the object as moving rightward versus leftward was highly dependent on whether the heading estimate for a given trial was biased in the positive or negative direction. Sample subject F showed a weaker relationship that was not clearly detectable. Figure 7d plots the average change in PSE per change in heading estimate. 
Figure 7.
 
(a–c) Plots of the trial-to-trial relationship between heading estimates and object motion judgments for three sample subjects. Each point corresponds to a trial. The x values are the FOEs of object motion and the y values are the heading estimates. The dark points are trials where the subject judged the object to be moving rightward, and the light points are trials where the subject judged the object to be moving leftward. Dashed lines show the best-fitting transition boundary based on a multivariate probit analysis. If object motion judgments were independent of heading error, the transition lines would be vertical. For the top subjects, one can see that the trial-to-trial differences in heading estimates are predictive of the object motion response. (d) Mean change in PSE per change in heading judgment averaged across subjects. Error bars depict ±1 standard error.
Summary
In Experiment 1, we observed that heading judgments were biased toward the center and that object motion judgments also showed biases that were consistent with a heading biased toward the center. The magnitudes of the biases in the two tasks were similar. However, we did not observe a significant correlation between individuals' center biases or constant biases across the two tasks. 
We also observed a trial-to-trial correlation between variations in heading judgments and object motion judgments. Some of the residual variations in object motion judgments can be predicted by variations in heading judgments. Although a range effect could potentially account for an overall center bias in object motion judgments, this could not explain the observed trial-to-trial correlation. 
Experiment 2: Object motion judgments during walking
Experiment 2 tested 1) whether visual and nonvisual information contributes similarly to control of walking direction and perception of object motion, and 2) whether trial-to-trial variations in walking direction are related to biases in object motion judgments. Subjects walked toward a target in virtual reality with conflicting visual and physical self-motion cues and made judgments about the motion of an independent moving object. The independent object moved horizontally relative to the environment with varied lateral speed, and the object motion perceived to be stationary was estimated as in the previous experiment. We compared the average weighting of visual and nonvisual information for the two tasks and tested whether residual variation in the two tasks was correlated. Similar visual weightings or corresponding biases for the two tasks would suggest that perception of object motion shares some common process with heading perception. 
Methods
Participants
Twenty-nine naïve subjects from the University of Hong Kong were paid to participate in the experiment, and 26 of these subjects were included in the data analysis. Subjects were required to have normal or corrected-to-normal vision. Individuals with a known sensitivity to motion sickness were excluded. The procedures conformed to and were approved by the HRECNCF at the University of Hong Kong. 
Our target sample size was 24 subjects. After collecting data from 24 subjects, we found that the object motion judgments from three subjects could not be fit, suggesting that they did not understand the task. We collected data from additional subjects to replace these excluded subjects. At the point when we had 24 subjects with usable data, two more subjects had already completed their first sessions, so we finished data collection for these two subjects. Therefore, the final sample size for analysis was 26. The additional two subjects did not change the qualitative results. 
The target sample size was larger than required to detect an influence of nonvisual cues on walking performance. In similar conditions, Saunders (2014) found that the influence of visual cues on walking performance had an effect size of dz = 1.2, and the influence of nonvisual cues was more than three times as large. Therefore, a nonvisual influence on walking performance could be easily detected, even with a very small sample size. We chose a larger sample size to ensure that our results were robust and to be able to analyze individual differences. 
Apparatus and stimuli
Participants walked in a virtual environment presented on an nVisor SX111 HMD. The HMD had a total resolution of 2,560 × 1,024 (1,280 × 1,024 per eye) and a total FOV of 110° H × 64° V (76° H × 64° V per eye). As in the previous experiment, black cloth was used to cover the gaps between the HMD and the subject's head, and noise-canceling earphones were used to eliminate auditory cues. The environment was rendered by an NVIDIA Quadro FX 580 graphics card with OpenGL. An Intersense IS-1200 inertial tracking system was used to record head position and orientation as the trajectory of walking. The tracking data were updated at 60 Hz. 
Subjects walked along a textured ground plane toward an indefinitely distant target, which had a constant egocentric direction regardless of how the subject moved. The moving object initially appeared 4 m in front of the observer and 0.9 m above the ground, with a random horizontal position chosen within (−0.5 m, 0.5 m) relative to the direction of the target. 
The simulated direction of heading differed from the physical direction of movement by either 5° or −5°. The conflict was created by adding a ±5° visual rotation of the ground plane and target, equivalent to wearing displacement prisms. The sign of the conflict was randomly varied across trials to prevent adaptation. 
The object moved horizontally relative to the environment with varied lateral speed. The motion of the object was defined in terms of the difference between the object motion FOE and the physical heading direction. When the difference between the visual heading and the physical heading was 5°, the FOE of object motion differed from the physical heading direction by −1.25°, 0°, 1.25°, 2.5°, 3.75°, or 5° (i.e., object FOE varied from −1.25° to 5°). When the difference between the visual heading and the physical heading was −5°, the FOE of object motion differed from the physical heading direction by −5°, −3.75°, −2.5°, −1.25°, 0°, or 1.25° (i.e., object FOE varied from −5° to 1.25°). Based on informal pilot testing, we expected that the object would be perceived as stationary when the object motion was consistent with a heading direction between the visual heading and the physical heading, so the PSEs would be within these ranges. 
Procedure
Participants performed two tasks: walking toward a target and identifying the motion of an independently moving object that briefly appeared during walking. At the start of a trial, a target direction for walking was presented. Subjects were instructed to try to walk straight toward the target at a natural speed. As soon as they started moving, the target disappeared. After 1 m of walking, an independently moving object appeared and remained visible for 1 second. Subjects were asked to report whether the object was moving to the left or right in the environment. The object motion judgments were made verbally while subjects were still walking toward the target and were recorded by the experimenter. Subjects were instructed to base their judgments on the perceived motion of the object relative to the world, as opposed to the motion of the object relative to themselves. After 2.5 m of walking, a stop sign appeared, cueing the subjects to stop walking and turn around to begin the next trial. To ensure safety, an experimenter walked behind the subject holding the cables of the HMD throughout the walking. 
Two experimental sessions were completed on separate days, each consisting of two identical blocks of 120 trials, yielding a total of 240 trials per subject. In each block, one-half of the trials had a +5° heading conflict and one-half had a −5° heading conflict. The 12 combinations of heading conflict and object motion were fully randomized across trials. Before the first block in a session, subjects performed 24 practice trials to become familiar with the task. In practice trials, participants who were walking slowly were encouraged to walk at a natural speed. Each block lasted approximately 10 minutes, and subjects were allowed a 10-minute rest between blocks. 
Analysis
Estimation of walking goal direction
We estimated the goal direction of walking based on the initial visual heading error and the amount of heading adjustment over the course of the trial. Because subjects only walked approximately 2.5 m, they may not have had enough time to fully steer their perceived heading toward the target. However, the heading goal can be inferred from the initial visual heading error and the heading adjustment. If a subject perceived their initial heading to be to the right of target, they would be expected to steer leftward, and vice versa. If no adjustment was made, that would suggest that subjects perceived themselves to be headed toward the target. 
Figure 8 illustrates how the variables used for analysis were computed from the walking trajectories. For each walking trial, the initial heading direction (H0) was estimated as the direction from the initial subject position to the subject position at 1 m of walking on the horizontal plane. The initial target direction (T) was estimated as the direction from the initial subject position to the target position. The initial visual heading error (H0 − T) was the difference between the initial heading direction and the initial target direction. The final heading direction (H1) was estimated as the direction from the subject position at 1 m of walking to the subject position at 2.5 m of walking on the horizontal plane. The heading adjustment (ΔH) was the difference between the final heading direction (H1) and the initial heading direction (H0). 
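A minimal sketch of these computations from tracked head positions is given below. The array layout, the choice of the first samples at or beyond 1 m and 2.5 m, and the handling of the visual rotation are our assumptions; in the cue-conflict conditions the visually specified directions differ from the tracked physical directions by the ±5° offset, which would need to be applied to obtain visual heading errors.

```python
import numpy as np

def walking_measures(xz, target_xz, visual_offset_deg=0.0):
    """Compute the trajectory variables from Figure 8 for one trial.
    xz: (n, 2) array of head positions on the horizontal plane (x lateral,
    z forward), starting at the trial origin; target_xz: a point along the
    target direction; visual_offset_deg: rotation of the visual scene
    relative to physical space (an assumption, not the paper's exact coding)."""
    def direction(frm, to):
        d = to - frm
        return np.degrees(np.arctan2(d[0], d[1]))    # signed direction in degrees

    dist = np.linalg.norm(xz - xz[0], axis=1)
    i1 = int(np.argmax(dist >= 1.0))    # first sample at or beyond 1 m of walking
    i2 = int(np.argmax(dist >= 2.5))    # first sample at or beyond 2.5 m of walking

    H0 = direction(xz[0], xz[i1]) + visual_offset_deg   # initial (visual) heading
    H1 = direction(xz[i1], xz[i2]) + visual_offset_deg  # final (visual) heading
    T = direction(xz[0], target_xz)                     # initial target direction
    return H0 - T, H1 - H0    # initial visual heading error, heading adjustment
```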
Figure 8.
 
(a) Illustration of the variables computed from walking trajectories. We computed vectors representing walking direction in the horizontal plane from the first 1 m of walking (H0) and from 1 m to 2.5 m (H1). The initial visual heading error (H0 − T) was the difference between H0 and the target direction (T). The difference between H0 and H1 was used as a measure of walking adjustment on a trial (ΔH = H1 − H0). The figure shows the measures for three hypothetical walking trials. (b) Expected relationship between initial visual heading error (H0 − T) and walking adjustment (ΔH) if subjects steered to aim their perceived heading toward the target. The heading direction for which no adjustment is predicted (ΔH = 0) would be the average heading that is perceived to be toward the target.
We performed a linear regression analysis to estimate the walking direction that would result in no adjustment on average. The initial visual heading error (H0 − T) was used as a predictor of the heading adjustment (ΔH). We assumed that the walking direction that would result in no adjustment on average was the average walking direction that was perceived as aligned with the target. This corresponds to the x-intercept of the regression, that is, the value of (H0 − T) for which the predicted ΔH = 0. We used the x-intercept as our estimate of the goal direction of walking. The data were fit with reduced major axis (RMA) regression rather than standard regression because both variables were estimated with measurement error. The different heading conflict conditions were fit together with a common slope. 
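The sketch below shows the core of this estimate for a single condition, under our assumptions: an RMA slope computed from the ratio of standard deviations, and the x-intercept taken as the walking goal. The paper fits both conflict conditions jointly with a shared slope, which is omitted here for brevity.

```python
import numpy as np

def rma_walking_goal(heading_error, adjustment):
    """Reduced major axis (RMA) regression of heading adjustment (ΔH) on initial
    visual heading error (H0 - T); the x-intercept is the heading error at which
    no adjustment is predicted, taken as the average walking goal."""
    x, y = np.asarray(heading_error, float), np.asarray(adjustment, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    intercept = np.mean(y) - slope * np.mean(x)
    return -intercept / slope
```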
Figure 9 plots heading adjustment as a function of initial visual heading error for two sample subjects. One can see that there is a systematic relationship between initial visual heading error and the heading adjustment. The points where the regression lines intersect the x-axis correspond to the heading directions that would be expected to produce no steering adjustment. These average goal directions were compared with the predictions from using visual cues alone or physical cues alone to estimate the cue weighting. 
Figure 9.
 
(a, b) Examples of how the walking goal direction was estimated from walking trajectories for two sample subjects. The graphs plot the heading adjustment between initial heading direction and final heading direction (H1 – H0) as a function of the initial visual heading error (H0 – T). Blue points show trials with +5° conflict between visual and physical heading and red points show trials with –5° conflict. Overall, subjects tended to adjust their heading in a negative direction when initial heading error was positive and vice versa. No adjustment would suggest that a subject perceived themselves to be headed toward the target. For each subject and condition, we performed regression fits to estimate the visual heading error that would be expected to result in no adjustment (solid diamonds), and used this as an estimate of the walking goal. Steering to reduce visual error would result in a walking goal of 0°, while steering to align physical direction with the target would result in walking goals of ±5°. (c) The mean walking goals, averaged across subjects, for the conditions with positive (blue) and negative (red) heading conflicts. Shaded regions depict ±1 standard error. Small diamonds plot means from individual subjects.
Results and discussion
Cue weighting in heading judgment
We used the walking performance to estimate the relative contributions of visual and nonvisual cues to perceived heading. If subjects relied entirely on visual cues, they would steer to align their visual heading with the target direction, and if they relied entirely on nonvisual cues, they would steer to align their physical heading with the target direction. Steering to align an intermediate heading would suggest that their perceived heading was based on both visual and nonvisual cues. The relative weighting can be inferred from how close the steering goal is to the predictions from visual cues alone or physical cues alone. For example, if subjects steered so that the visual heading error was +3° and the physical heading error was −2°, it would imply a 40% contribution from visual cues and a 60% contribution from nonvisual cues. The average steering goal for each subject and condition was estimated using the initial headings and steering adjustments from a set of trials (see Methods). 
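The weighting implied by a steering goal can be written as a one-line computation (our formulation of the logic in the worked example above; the function name is illustrative):

```python
def visual_weight_from_goal(visual_error_deg, physical_error_deg):
    """Visual cue weight implied by a steering goal. If the perceived heading is
    w * visual + (1 - w) * physical and is aimed at the target, the residual goal
    errors satisfy w * visual_error + (1 - w) * physical_error = 0."""
    return -physical_error_deg / (visual_error_deg - physical_error_deg)

print(visual_weight_from_goal(3.0, -2.0))   # 0.4, i.e., 40% visual / 60% nonvisual
```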
Figure 9 plots measures of walking performance on individual trials for two sample subjects. For the subject in Figure 9a, the estimated goal had a visual heading error of 6.0° in the +5° conflict condition (blue) and a visual heading error of −1.6° in the −5° conflict condition (red). The difference between the goal heading errors, 7.6°, corresponds to a visual cue weighting of 23%. For the subject in Figure 9b, the difference between the goal heading errors was larger (9.6°), corresponding to a smaller weighting of visual cues (4%). 
Overall, perceived heading during walking was strongly influenced by nonvisual cues. Figure 9c shows the mean walking goals, expressed as visual heading errors, for the +5° and −5° conflict conditions. On average, nonvisual cues contributed 85.7% and visual cues contributed 14.3%, with a standard deviation across subjects of 14.6%. The average weighting of both visual and nonvisual cues was significantly larger than zero, visual: t(25) = 5.04, p < .001, dz = .99; nonvisual: t(25) = 29.8, p < .001, dz = 5.85. 
Cue weighting in object motion judgment
We analyzed the contribution of visual and physical cues to object motion perception by estimating the object motion that would be perceived as stationary, similar to our analyses in Experiment 1. Depending on the weighting of visual and physical cues in heading perception, perceived heading would vary from being consistent with the visual heading to being consistent with the physical heading. If perception of object motion and perception of heading share common processing, the object would be perceived as stationary when its motion is consistent with the perceived heading. 
We estimated the object motion perceived to be stationary in the same way as Experiment 1. We fit a cumulative Gaussian to the probability of responding rightward as a function of object motion FOE to estimate a PSE and JND. The positive and negative conflict conditions were fit together assuming different PSEs but a common JND. 
Object motion judgments were strongly influenced by nonvisual cues. On average, nonvisual cues contributed 62.7% and visual cues contributed 37.3% to object motion judgments. The mean JND was 2.53. Figure 10 shows the psychometric functions from the object motion judgments of two sample subjects and the mean object motion PSEs averaged across all subjects. If subjects relied entirely on visual information, the PSEs would be ±5°, whereas if they relied entirely on nonvisual information, the PSEs would be 0°. The results from the left subject are consistent with a small visual weight, and the results from the right subject are consistent with a larger visual weight. 
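Under this logic, the visual weighting for object motion can be read directly from the two PSEs (a sketch of our interpretation of the mapping described above; the sign convention assumes PSEs are expressed relative to the physical heading, as in the Methods):

```python
def visual_weight_from_pses(pse_plus_deg, pse_minus_deg, conflict_deg=5.0):
    """Visual cue weight implied by object-motion PSEs in the +conflict and
    -conflict conditions: PSEs at +/-conflict imply w = 1 (pure visual),
    PSEs at 0 imply w = 0 (pure nonvisual)."""
    return (pse_plus_deg - pse_minus_deg) / (2.0 * conflict_deg)
```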
Figure 10.
 
(a, b) Psychometric functions from the object motion judgments of two sample subjects in Experiment 2. The graphs plot the mean percentage of trials on which the object was judged to be moving rightward as a function of the object motion FOE for the conditions with 5° cue conflicts (blue) and −5° cue conflicts (red). The curves show the estimated psychometric functions and the asterisks indicate the PSE for each curve. The PSEs correspond to the object motion FOE that would be perceived as stationary on average. Sample subject F shows a large effect of the conflicting visual heading, whereas sample subject G shows a smaller effect. (c) Object motion PSEs for all subjects. The horizontal lines plot the mean PSEs averaged across subjects for the 5° conflict (blue) and −5° conflict (red) conditions. Shaded regions depict ±1 standard error. The small diamonds plot results from individual subjects.
The biases in object motion judgments are unlikely to be due to range effects. Although the sampling of object FOEs was asymmetric, the overall range of object FOEs in an experimental block was symmetric because positive and negative conflict conditions were randomly intermixed. Unlike in Experiment 1, subjects could not easily determine the condition on any given trial. From the perspective of a subject, the distribution of leftward versus rightward object motion would have appeared symmetric. 
Comparison of cue weighting and biases
Both walking performance and object motion judgments were strongly influenced by nonvisual cues, but the overall influence of nonvisual cues was significantly lower for object motion judgments, t(25) = −6.35, p < .001; dz = −1.25 (Figure 11a). We further analyzed the individual differences in cue weights (Figure 11b). There was no significant correlation between the cue weights from walking and object motion judgments (r = 0.320; p = .111), although there was a trend toward a positive relationship. 
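For completeness, a minimal sketch of this comparison, assuming the per-subject visual cue weights for the two tasks are stored in two arrays (hypothetical names; not the authors' code):

```python
import numpy as np
from scipy import stats

def compare_cue_weights(w_walk, w_object):
    # w_walk, w_object: per-subject visual cue weights for the two tasks
    t, p = stats.ttest_rel(w_object, w_walk)   # paired comparison of visual weights
    dz = t / np.sqrt(len(w_walk))              # Cohen's dz for a paired design
    r, p_r = stats.pearsonr(w_walk, w_object)  # across-subject correlation of weights
    return t, p, dz, r, p_r
```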
Figure 11.
 
Comparison of cue weights and constant biases in walking control and object motion judgments. (a) Mean visual cue weights for the walking control task and the object motion judgment task. Error bars depict ±1 standard error. For walking control, steering to align the visual heading from optic flow with the target direction would result in a visual cue weight of 1, and steering to aim the physical heading toward the target would result in a visual cue weight of 0. For object motion, judgments based solely on the visual motion of the target relative to the visual motion of the background would result in a visual cue weight of 1, and judgments consistent with the physical direction of motion would result in a visual cue weight of 0. (b) Individual results of visual weighting in walking control as a function of visual weighting in object motion judgments. (c) Individual results of constant bias in walking control as a function of constant bias in object motion judgments.
We also compared the constant biases from the two tasks (Figure 11c). In Experiment 2, there was a significant positive correlation between constant biases (r = 0.692; p < .001). Some individuals had overall biases in their walking direction, and the observed correlation indicates that their object motion judgments tended to be biased in a consistent direction. 
Trial-to-trial variations
We analyzed trial-to-trial variations in walking performance and object motion judgments in the same way as in the previous experiment. In Experiment 2, we used residual errors in walking direction as the additional predictor of object motion judgments, analogous to how we used residual errors in heading estimates in the previous experiment. On individual trials, when subjects walked more to the left than their average trajectory, this would suggest that the perceived heading was biased more to the right, and vice versa. If judgments of object motion were based on perceived heading, the PSEs of object motion judgments would be expected to shift as a function of trial-to-trial differences in walking biases. 
We performed a probit regression to fit object motion responses using three predictors: the FOE of object motion relative to the walking direction, the heading conflict condition, and the residual error in walking direction on the individual trial. The sign of the residual errors was reversed because errors in perceived heading would cause walking to be biased in the opposite direction. Residual walking errors were a significant predictor of object motion judgments: the average coefficient representing the influence of walking errors was significantly different from zero, t(25) = 6.11, p < .001; dz = 1.20. 
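A hedged sketch of this type of regression, using a binary probit model from statsmodels (the column names are hypothetical, and this is not the authors' analysis code):

```python
import pandas as pd
import statsmodels.api as sm

def fit_object_motion_model(df):
    """df columns (hypothetical):
       resp       - 1 if the object was judged to move rightward, 0 otherwise
       foe_rel    - object-motion FOE relative to the walking direction (deg)
       conflict   - heading-conflict condition (+5 or -5 deg)
       resid_walk - residual walking-direction error on that trial (deg)"""
    X = pd.DataFrame({
        "foe_rel": df["foe_rel"],
        "conflict": df["conflict"],
        "neg_resid_walk": -df["resid_walk"],  # sign reversed, as described in the text
    })
    X = sm.add_constant(X)
    return sm.Probit(df["resp"], X).fit(disp=False)
```

Under this parameterization, the implied shift in PSE per degree of (sign-reversed) walking error is the negative of the neg_resid_walk coefficient divided by the foe_rel coefficient.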
Figure 12 illustrates the trial-to-trial relationship between visual heading error and object motion judgments for three sample subjects (a), and the probability of responding rightward as a function of the difference between the object FOE and the visual heading, ignoring walking performance (b). If object motion judgments were independent of trial-to-trial variations in walking direction, the fit lines would be vertical. Fit lines that are not vertical indicate that the PSE covaried with walking errors. For the left sample subject, object motion judgments were strongly related to walking performance; the other sample subjects showed a weak relation (middle) or no relation (right). 
Figure 12.
 
(a) Plots of the trial-to-trial relationship between visual heading error and object motion judgments for three sample subjects. Each point corresponds to a trial. The x values are the differences between the object FOE and the visual heading, and the y values are the visual heading errors of the walking direction. The dark points are trials where the subject judged the object to be moving rightward, and the light points are trials where the subject judged the object to be moving leftward. Dashed lines show the best-fitting transition boundary based on the probit regression described in the text. If object motion judgments were independent of walking error, the transition lines would be vertical. For the two subjects on the left, one can see that the walking error is also predictive of the object motion response. (b) Probability of responding “rightward” as a function of the difference between the object FOE and the visual heading (ignoring walking performance).
Summary
In Experiment 2, we observed that both control of walking and object motion judgments relied on a combination of visual and nonvisual cues. The contribution of nonvisual cues to control of walking was significantly larger than their contribution to object motion judgments. Overall biases in the two tasks were significantly correlated across individuals, but no significant correlation was observed between the cue weights across individuals. 
We also observed trial-to-trial correlations between variations in walking direction and object motion judgments: random variations in object motion judgments could be partly predicted by the residual errors in walking direction, which we take as a proxy for trial-to-trial variations in perceived heading. 
General discussion
In two experiments, we measured perception of object motion while concurrently measuring perception of self-motion. Experiment 1 found that both heading perception and object motion perception were consistently biased toward the center, and that there were trial-to-trial correlations between biases in heading judgments and object motion judgments. Experiment 2 found that visual and nonvisual information about self-motion contributed to both tasks, and that walking errors on individual trials were correlated with object motion judgments on the same trials. These results are consistent with the idea that a common mechanism is used for perception of heading and perception of object motion. This common mechanism is prone to bias toward the center and is based on both visual and nonvisual self-motion information. 
Center bias
We observed the expected center biases in heading judgments. Center biases have been reported in a number of previous studies of perceived heading from optic flow (Ehrlich et al., 1998; Hanada & Ejima, 2000; Johnston et al., 1973; Saunders, 2010, 2014; Warren & Saunders, 1995; Xie et al., 2020; Xing & Saunders, 2016). The magnitude of center bias was smaller than observed by Xing and Saunders (2016) in a similar condition (9% vs. 20%). This difference might be due to the FOV: the FOV was larger in the current study (110° × 110°), and center bias is smaller with a larger FOV (Xing & Saunders, 2016). 
Object motion judgments were also biased in a way consistent with a center bias in perceived heading, which had not been tested previously. The amount of center bias in heading perception was similar to that in object motion judgments, which suggests that they share a common mechanism. On the other hand, we did not observe a correlation between individuals' center biases or overall biases across the two tasks, which seems to conflict with a common mechanism. However, the power to detect a correlation between biases in the two tasks may have been low because of the limited range of variability across individuals. For the overall biases in Experiment 2, which had more variability, we did detect a correlation. Although we did not observe a significant correlation across individual differences in center bias, the mean biases were generally consistent for the two tasks. 
Visual and nonvisual self-motion cues
Our finding that nonvisual self-motion cues contribute to perception of object motion is consistent with findings from several previous studies. Dupin and Wexler (2013) had subjects judge the rotation of a foreground object and observed partial influence from both rotation of a background object and self-motion. Dyde and Harris (2008) had subjects make judgments about object motion relative to self-motion and found differences between dark and lit conditions, and between active and passive motion conditions. MacNeilage et al. (2012) had subjects judge whether an object was moving upward or downward and observed that object motion discrimination thresholds improved when nonvisual cues were added. Xie et al. (2020) used a nulling method to test object motion and observed incomplete nulling in visual-only and nonvisual-only conditions, but complete nulling when both cues were available. Fajen et al. (2013) had subjects judge whether they would pass in front of or behind an independently moving object during active self-motion and observed significant differences in percent passable across conditions with different perceived self-motion. Niehorster and Li (2017) tested whether the self-motion component could be completely nulled from retinal object motion and observed incomplete nulling when only visual cues were available; this incomplete nulling might be due to the missing nonvisual self-motion cues. Although the methods differ across these studies, the results all suggest that nonvisual cues to self-motion play a role in perception of object motion. 
For control of walking, subjects relied primarily on nonvisual cues, but there was also a detectable influence of visual heading cues. Subjects tended to aim their physical heading toward the target, but made some steering adjustments to bring the visual heading direction closer to the target. The weighting of nonvisual cues for walking in our study was 86.6%. Saunders (2014) observed a similar weighting of nonvisual cues (84%) for control of walking in a ground-only environment like that used in Experiment 2. Warren et al. (2001) observed a visual weighting of 50% for control of walking in a ground-only condition, but they tested longer walking trajectories of 9 m. 
The weighting of nonvisual cues we observed for object motion judgments (62.7%) is smaller than the weighting for control of walking (86.6%). This difference could be due to the use of other information, besides perceived self-motion, for the object motion task. Local motion contrast is a potential cue that could be used to distinguish movement of objects from the background, and there is previous evidence that local motion contrast contributes to perception of object motion. Although global optic flow contributes to flow parsing, a number of studies have found that it does not entirely account for perceived motion of moving objects (Niehorster & Li, 2017; Warren & Rushton, 2007, 2008, 2009a, 2009b; Warren et al., 2012). Studies that had subjects perform flow parsing when optic flow in the vicinity of the moving object was removed found that local motion contrast also plays an important role in flow parsing (Niehorster & Li, 2017; Warren & Rushton, 2009a). 
It is also possible that our method underestimated the role of visual cues in perceived heading during walking. In our task, subjects tended to start with their physical heading aimed toward the visual target and had limited opportunity to adjust their trajectory (approximately 2.5 m of walking). Our analysis attempted to estimate the goal state of walking from the relationship between initial heading error and path curvature, but might have underestimated the difference between the initial state and the goal state. 
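As an illustration of this kind of estimate, a minimal sketch following the logic of the analysis (regress the heading adjustment on the initial visual heading error and take the zero-crossing as the walking goal); the variable names are hypothetical:

```python
import numpy as np

def estimate_walking_goal(initial_error, adjustment):
    # initial_error: initial visual heading error, H0 - T, per trial (deg)
    # adjustment:    heading adjustment, H1 - H0, per trial (deg)
    # Returns the initial error at which the fitted adjustment is zero,
    # interpreted as the heading the subject was steering toward.
    slope, intercept = np.polyfit(initial_error, adjustment, 1)
    return -intercept / slope  # zero-crossing of the regression line
```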
Although nonvisual cues had less influence on object motion judgments than on walking, our results suggest a larger contribution from nonvisual cues than from visual cues even for object motion judgments. Dupin and Wexler (2013) also found a larger contribution of nonvisual cues than visual cues when subjects judged the rotation of a foreground object in the presence of background rotation and self-motion. However, some other studies observed a larger visual contribution (Dokka et al., 2015; Dyde & Harris, 2008; Xie et al., 2020). Xie et al. (2020) had subjects judge object motion in visual-only and nonvisual-only conditions and found more compensation for self-motion in the visual-only condition, from which they inferred that visual information was more important. However, the contributions of visual and nonvisual cues might differ when both cues are available, as in our conditions. Another difference from our study is that the moving object lay on the ground in Xie et al. (2020), whereas in our study the object was 0.9 m above the ground; the local motion contrast in their conditions may have increased the visual contribution. Dokka et al. (2015) also observed a larger visual contribution, but this may have been due to the very slow self-motion speed, as suggested by Xie et al. (2020). Dyde and Harris (2008) used a very different paradigm, so their results are hard to compare. Compared with previous studies, we observed more influence of nonvisual cues on object motion judgments, but this could be explained by differences in stimuli and conditions. 
We observed a significant correlation across individuals between the overall biases in the walking task and in the object motion judgments, but no significant correlation for cue weights. Again, the failure to observe a correlation in cue weighting might be due to small variance: the cue weights we measured across individuals were clustered in a small range for both tasks, and any correlation might be obscured by this limited variability. 
Integrated heading estimate contributes to object motion perception
Our results provide evidence that an integrated estimate of heading contributes to both perceived heading and perception of object motion. Previous studies have observed similar effects of the quality and quantity of optic flow on perceived heading and object motion (Foulkes et al., 2013a; Foulkes et al., 2013b) and contributions of nonvisual cues to both tasks (Dupin & Wexler, 2013; Dyde & Harris, 2008; Fajen et al., 2013; MacNeilage et al., 2012; Niehorster & Li, 2017; Xie et al., 2020), consistent with common mechanisms. Our study further found trial-to-trial correlations between heading errors and judgments of object motion: across trials with identical self-motion cues, object motion judgments varied in a way that was consistent with the heading bias on that trial. We also found that object motion judgments were influenced by nonvisual information and showed a center bias, which suggests that object motion perception depends on the heading estimate obtained after integration of visual cues with nonvisual cues and with a Bayesian prior toward the center. These findings all suggest that an integrated heading estimate contributes to object motion perception. 
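To make the idea of an integrated estimate concrete, here is a purely illustrative sketch assuming Gaussian visual and nonvisual heading cues and a Gaussian prior centered straight ahead; this is not a model that was fit in the present study, and all parameters are hypothetical.

```python
import numpy as np

def integrated_heading(vis_heading, phys_heading, sigma_vis, sigma_phys, sigma_prior):
    # Precision-weighted (MAP) combination of a visual heading cue, a nonvisual
    # heading cue, and a prior centered at 0 deg (straight ahead). The prior pulls
    # the estimate toward the center, producing a center bias in perceived heading.
    precisions = np.array([1 / sigma_vis**2, 1 / sigma_phys**2, 1 / sigma_prior**2])
    values = np.array([vis_heading, phys_heading, 0.0])
    return float(np.sum(precisions * values) / np.sum(precisions))
```

On this view, an object would be judged stationary when its retinal motion is consistent with this integrated heading estimate rather than with the true heading.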
However, perception of the relative motion of moving objects is not entirely based on perceived heading. Previous studies have found differential effects on heading judgments and object motion judgments (Rushton et al., 2018a; Warren et al., 2012), and the precision of object motion judgments can exceed the precision of heading judgments (Rushton, Chen, & Li, 2018b). One possibility is that heading perception and object motion perception share common mechanisms but diverge at later stages. Rushton et al. (2018b) suggested that the integration of other visual cues besides motion cues for heading perception might occur after heading has provided input to perception of object motion. 
In some situations, the relative motion of objects can be extracted easily from optic flow, so judgments could be based on computations that do not use an integrated estimate of heading. In the conditions used in most previous studies, the depth of the object was within the range of depths of the background objects, so object motion could be detected from deviations from the global flow pattern. This property of the stimuli could explain the high precision of object motion judgments observed in some studies (Rushton et al., 2018b). In our study, determining object motion by comparison with the global pattern was more difficult because there was a large depth difference between the moving dot and the background. 
The trial-to-trial correlation could in principle be due to perception of object motion affecting perceived self-motion, but we think this is unlikely in our conditions. In both experiments, the object appeared only briefly in the middle of the simulated or active walking, and the walking continued after the object disappeared. In Experiment 1, the object appeared after 1 m of simulated walking, was present for 1 second, and the simulated walking ended 1.5 seconds after the object disappeared. In Experiment 2, the object appeared after 1 m of active walking, was present for 1 second, and the active walking ended when the subject reached 2.5 m. 
To conclude, our study found that heading perception and perception of object motion were consistent in multiple ways: both exhibited center biases, both were based on a combination of visual and nonvisual cues, and heading errors were predictive of object motion judgments on a trial-to-trial basis. These results suggest that there is substantial overlap in the processes underlying perception of self-motion and object motion. 
Acknowledgments
Supported by a grant from the Hong Kong Research Grants Council, GRF 17407914. 
Commercial relationships: none. 
Corresponding author: Jeffrey A. Saunders. 
Email: jsaun@hku.hk. 
Address: Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong SAR, China. 
References
Anstis, S. M. (1970). Phi movement as a subtraction process. Vision Research, 10, 1411–1430. [CrossRef]
Bruggeman, H., & Warren, W. H. (2010). The direction of walking–but not throwing or kicking–is adapted by optic flow. Psychological Science, 21(7), 1006–1013, http://doi.org/10.1177/0956797610372635. [CrossRef]
Bruggeman, H., Zosh, W., & Warren, W. H. (2007). Optic flow drives human visuo- locomotor adaptation. Current Biology: CB, 17(23), 2035–2040, http://doi.org/10.1016/j.cub.2007.10.059. [CrossRef]
Dokka, K., MacNeilage, P. R., DeAngelis, G. C., & Angelaki, D. E. (2015). Multisensory self-motion compensation during object trajectory judgments. Cerebral Cortex, 25(3), 619–630. [CrossRef]
Dokka, K., Park, H., Jansen, M., DeAngelis, G. C., & Angelaki, D. E. (2019). Causal inference accounts for heading perception in the presence of object motion. Proceedings of the National Academy of Sciences of the United States of America, 116(18), 9060–9065, doi:10.1073/pnas.1820373116. [CrossRef]
Duffy, C. J., & Wurtz, R. H. (1993). An illusory transformation of optic flow fields. Vision Research, 33(11), 1481–1490. [CrossRef]
Dupin, L., & Wexler, M. (2013). Motion perception by a moving observer in a three-dimensional environment. Journal of Vision, 13(2):15, 1–14, https://doi.org/10.1167/13.2.15. [CrossRef]
Dyde, R. T., & Harris, L. R. (2008). The influence of retinal and extra-retinal motion cues on perceived object motion during self-motion. Journal of Vision, 8(14):5, 1–10, https://doi.org/10.1167/8.14.5. [CrossRef]
Ehrlich, S. M., Beck, D. M., Crowell, J. A., Freeman, T. C. A., & Banks, M. S. (1998). Depth information and perceived self-motion during simulated gaze rotation. Vision Research, 38, 3129–3145. [CrossRef]
Fajen, B. R., & Matthis, J. S. (2013). Visual and non-visual contributions to the perception of object motion during self-motion. PloS One, 8(2), e55446, doi:10.1371/journal.pone.0055446. [CrossRef]
Fajen, B. R., Parade, M. S., & Matthis, J. S. (2013). Humans perceive object motion in world coordinates during obstacle avoidance. Journal of Vision, 13, 1–13, doi:10.1167/13.8.25. [CrossRef]
Foulkes, A. J., Rushton, S. K., & Warren, P. A. (2013a). Heading recovery from optic flow: Comparing performance of humans and computational models. Frontiers in Behavioral Neuroscience, 7(June), 53, doi:10.3389/fnbeh.2013.00053.
Foulkes, A. J., Rushton, S. K., & Warren, P. A. (2013b). Flow parsing and heading perception show similar dependence on quality and quantity of optic flow. Frontiers in Behavioral Neuroscience, 7(June), 49, doi:10.3389/fnbeh.2013.00049.
Hanada, M., & Ejima, Y. (2000). Effects of roll and pitch components in retinal flow on heading judgement. Vision Research, 40, 1827–1838. [CrossRef]
Johnston, I. R., White, G. R., & Cumming, R. W. (1973). The role of optical expansion patterns in locomotor control. American Journal of Psychology, 86(2), 311–324, https://doi.org/10.2307/1421439. [CrossRef]
MacNeilage, P. R., Zhang, Z., DeAngelis, G. C., & Angelaki, D. E. (2012) Vestibular facilitation of optic flow parsing. PLoS One 7(7), e40264, doi:10.1371/journal.pone.0040264. [CrossRef] [PubMed]
Niehorster, D. C., & Li, L. (2017). Accuracy and tuning of flow parsing for visual perception of object motion during self-motion. i-Perception, 8(3), 2041669517708206.
Niehorster, D. C., Cheng, J. C. K., & Li, L. (2010). Optimal combination of form and motion cues in human heading perception. Journal of Vision, 10(11):20, 1–15, http://www.journalofvision.org/content/10/11/20.
Rushton, S. K., & Warren, P. A. (2005). Moving observers, relative retinal motion and the detection of object movement. Current Biology, 15(14), R542–R543. [CrossRef]
Rushton, S. K., Bradshaw, M. F., & Warren, P. A. (2007). The pop out of scene-relative object movement against retinal motion due to self-movement. Cognition, 105(1), 237–245, doi:10.1016/j.cognition.2006.09.004. [CrossRef]
Rushton, S. K., Niehorster, D. C., Warren, P. A., & Li, L. (2018a) The primary role of flow processing in the identification of scene-relative object movement. Journal of Neuroscience, 38 (7), 1737–1743, https://doi.org/10.1523/JNEUROSCI.3530-16.2017. [CrossRef]
Rushton, S. K., Chen, R., & Li, L. (2018b). Ability to identify scene-relative object movement is not limited by, or yoked to, ability to perceive heading. Journal of Vision, 18(6), 1–16, https://doi.org/10.1167/18.6.11. [CrossRef]
Saunders, J. A. (2014). Reliability and relative weighting of visual and nonvisual information for perceiving direction of self- motion during walking. Journal of Vision, 14(3), 1–17, http://www.journalofvision.org/content/14/3/24. [CrossRef]
Saunders, J. A., & Durgin, F. H. (2011). Adaptation to conflicting visual and physical heading directions during walking. Journal of Vision, 11(3), 1–10, http://www.journalofvision.org/content/11/3/15. [CrossRef]
Saunders, J. A. (2010). View rotation is used to perceive path curvature from optic flow. Journal of Vision, 10(13), 1–15, http://www.journalofvision.org/content/10/13/25.
Vaina, L. M., Beardsley, S. A., & Rushton, S. K. (Eds.). (2004). Optic flow and beyond. Alphen aan den Rijn, the Netherlands: Kluwer.
Warren, W. H., Jr., Blackwell, A. W., Kurtz, K. J., Hatsopoulos, N. G., & Kalish, M. L. (1991). On the sufficiency of the velocity field for perception of heading. Biological Cybernetics, 65(5), 311–320. [CrossRef]
Warren, W. H., & Hannon, D. J. (1988). Direction of self-motion is perceived from optical flow. Nature, 336(6195), 162–163. [CrossRef]
Warren, W. H., Jr., Kay, B. A., Zosh, W. D., Duchon, A. P., & Sahuc, S. (2001). Optic flow is used to control human walking. Nature Neuroscience, 4(2), 213–216, PMID: 11175884. [CrossRef]
Warren, W. H., Jr., Morris, M. W., & Kalish, M. (1988). Perception of translational heading from optic flow. Journal of Experimental Psychology: Human Perception and Performance, 14(4), 646–660.
Warren, P. A., & Rushton, S. K. (2007). Perception of object trajectory: Parsing retinal motion into self and object movement components. Journal of Vision, 7(11):2, 1–11, http://journalofvision.org/7/11/2/. [CrossRef]
Warren, P. A., & Rushton, S. K. (2008). Evidence for flow-parsing in radial flow displays. Vision Research, 48(5), 655–663, doi:10.1016/j.visres.2007.10.023. [CrossRef]
Warren, P. A., & Rushton, S. K. (2009a). Optic flow processing for the assessment of object movement during ego movement. Current Biology : CB, 19(18), 1555–1560, doi:10.1016/j.cub.2009.07.057. [CrossRef]
Warren, P. A., & Rushton, S. K. (2009b). Perception of scene-relative object movement: Optic flow parsing and the contribution of monocular depth cues. Vision Research, 49(11), 1406–1419, doi:10.1016/j.visres.2009.01.016. [CrossRef]
Warren, P. A., Rushton, S. K., & Foulkes, A. J. (2012). Does optic flow parsing depend on prior estimation of heading? Journal of Vision, 12(11), 1–14, http://www.journalofvision.org/content/12/11/8, doi:10.1167/12.11.8. [CrossRef]
Warren, W. H., & Saunders, J. A. (1995). Perceiving heading in the presence of moving objects. Perception, 24(3), 315–331. [CrossRef]
Xie, M., Niehorster, D. C., Lappe, M., & Li, L. (2020). Roles of visual and non-visual information in the perception of scene-relative object motion during walking. Journal of Vision, 20(10), 1–11, https://doi.org/10.1167/jov.20.10.15. [CrossRef]
Xing, X., & Saunders, J. (2016). Center bias in perceived heading from optic flow. Journal of Vision, 16, 884. [Abstract] [CrossRef]