September 2021, Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Interpretation of Depth from Scaled Motion Parallax in Virtual Reality
Author Affiliations & Notes
  • Xue Teng
    Department of Electrical Engineering & Computer Science, York University
  • Laurie Wilcox
    Centre for Vision Research, York University
  • Robert Allison
    Department of Electrical Engineering & Computer Science, York University
    Centre for Vision Research, York University
  • Footnotes
    Acknowledgements  Thanks to NSERC Canada for support under a Collaborative Research and Development grant in partnership with Qualcomm Canada and to the Vision: Science to Applications (VISTA) program partly funded by the Canada First Research Excellence Fund.
Journal of Vision September 2021, Vol.21, 2035. doi:
Citation: Xue Teng, Laurie Wilcox, Robert Allison; Interpretation of Depth from Scaled Motion Parallax in Virtual Reality. Journal of Vision 2021;21(9):2035. doi:


      © ARVO (1962-2015); The Authors (2016-present)

Humans use visual, vestibular, kinesthetic and other cues to navigate effectively through the world. Conflict between these sources of information therefore has potentially significant implications for human perception of geometric layout. Previous work has found that introducing gain differences between physical and virtual head movement had little effect on distance perception. However, motion parallax is known to be a potent cue to relative depth. In the present study, we explore the impact of conflict between physical and portrayed self-motion on the perception of object shape. To do so, we varied the gain between virtual and physical head motion (ranging from a factor of 0.5 to 2) and measured the effect on depth perception. Observers viewed a ‘fold’ stimulus: a convex dihedral angle formed by two irregularly-textured, wall-oriented planes connected at a common vertical edge. Stimuli were rendered and presented using head-mounted displays (Oculus Rift S, or Quest in Rift S emulation mode). On each trial, observers adjusted the angle of the fold until the two joined planes appeared perpendicular. To assess the role of stereopsis, we tested both binocularly and monocularly. To introduce motion parallax, observers swayed laterally through a distance of 30 cm at 0.5 Hz, timed to a metronome beat; this physical motion was multiplied by the gain to produce the virtual viewpoint. Our results showed that gain had little effect on depth perception in the binocular test conditions. Using a model incorporating self and object motion, we computed predicted perceived depths based on the adjusted angles and then compared these with each observer’s input. The modelled outcomes were very consistent across visual manipulations, suggesting that observers have remarkably accurate perception of object motion under these conditions. Additional analyses predict corresponding variations in distance perception, and we will test these hypotheses in future experiments.
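The gain manipulation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the sinusoidal sway model are assumptions, with the 30 cm peak-to-peak amplitude, 0.5 Hz frequency, and 0.5–2 gain range taken from the abstract.

```python
import math

def lateral_sway(t_sec, amplitude_cm=15.0, freq_hz=0.5):
    """Idealized lateral head sway: 30 cm peak-to-peak (±15 cm) at 0.5 Hz,
    modeled here as a sinusoid timed to the metronome beat."""
    return amplitude_cm * math.sin(2 * math.pi * freq_hz * t_sec)

def virtual_viewpoint(physical_x_cm, gain):
    """Scale the physical lateral head position by a gain to obtain the
    rendered viewpoint. gain = 1.0 reproduces the physical motion;
    gain = 0.5 halves the rendered parallax and gain = 2.0 doubles it
    (the range used in the study)."""
    return gain * physical_x_cm

# At the quarter cycle (t = 0.5 s) the observer is at the +15 cm extreme
# of the sway; with a gain of 2 the virtual viewpoint is at +30 cm.
t = 0.5
physical_position = lateral_sway(t)
print(virtual_viewpoint(physical_position, gain=2.0))  # -> 30.0
```

Because the rendered parallax is scaled while the vestibular and kinesthetic signals reflect the unscaled physical motion, any gain other than 1.0 puts the visual cue in conflict with the other self-motion cues, which is the conflict the study exploits.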

