Vision Sciences Society Annual Meeting Abstract  |   August 2023
Increasing motion parallax gain compresses space and 3D object shape
Author Affiliations & Notes
  • Xue Teng
    Centre for Vision Research, York University
  • Robert Allison
    Centre for Vision Research, York University
  • Laurie Wilcox
    Centre for Vision Research, York University
  • Footnotes
    Acknowledgements: The authors wish to thank NSERC Canada for funding support.
Journal of Vision August 2023, Vol.23, 5015. https://doi.org/10.1167/jov.23.9.5015
© ARVO (1962-2015); The Authors (2016-present)
Abstract

When moving about the world, humans rely on visual, proprioceptive, and vestibular cues to perceive depth and distance. Normally, these sources of information are consistent. However, what happens if we receive conflicting information about how far we have moved? A previous study reported that at distances of 1.3 to 1.5 m, portrayed binocular 3D shape was not affected by motion gain; however, apparent distance and monocular depth settings were influenced. In our study, we extended the range of distances to 1.5 to 6 m. A VR headset was used to display the gain distortions binocularly and monocularly (to one eye). Observers swayed from side to side through 20 cm at 0.5 Hz to the beat of a metronome. The simulated virtual motion was varied by a gain of 0.5 to 2.0 times the physical motion. Observers first adjusted a vertical fold until its sides appeared to form a 90-degree angle. The fold then disappeared and they indicated its remembered distance by adjusting the position of a virtual pole. In the monocular condition, as gain increased, observers provided increasingly compressed fold depth settings at 1.5 and 3 m, but not at 6 m. Under binocular viewing, increasing gain compressed distance settings but not object shape settings. To ensure that the weak binocular effects were not due to a failure to perceive the gain, we separately assessed gain discrimination thresholds using the fold stimulus. We found that observers were sensitive to the manipulation over this range and tended to perceive a gain of 1.1 as having no motion distortion under both viewing conditions. It is clear from our data that monocular viewing of a kinesthetic/visual mismatch results in significant variations in the portrayed depth of the fold. These effects can be somewhat mitigated by increasing viewing distance, but even more so by viewing with both eyes.
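For readers unfamiliar with motion-gain manipulations: the manipulation amounts to rendering the scene from a viewpoint whose lateral displacement is the tracked head displacement scaled by the gain, so a gain of 1.0 matches the physical sway and a gain of 2.0 doubles it. The sketch below is a minimal illustration of that relationship only; the function and variable names are hypothetical and are not taken from the study's software.

```python
import numpy as np

def apply_motion_gain(head_pos, reference_pos, gain):
    """Scale lateral head displacement by a motion-parallax gain.

    head_pos, reference_pos: 3-vectors in metres (x = lateral, y = up, z = depth).
    gain: ratio of simulated virtual motion to physical motion (0.5-2.0 in the study).
    Returns the virtual camera position used to render the scene.
    """
    displacement = head_pos - reference_pos
    virtual_displacement = displacement.copy()
    # Only the side-to-side component is amplified or attenuated;
    # a gain of 1.0 reproduces the physical motion exactly.
    virtual_displacement[0] *= gain
    return reference_pos + virtual_displacement

# Example: the observer sways 10 cm to the right of the reference position
# while the gain is 2.0, so the rendered viewpoint moves 20 cm.
head = np.array([0.10, 0.0, 0.0])
ref = np.array([0.0, 0.0, 0.0])
print(apply_motion_gain(head, ref, gain=2.0))  # -> [0.2 0.  0. ]
```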
