Vision Sciences Society Annual Meeting Abstract | September 2015
Head tracking in virtual reality displays reduces the misperception of 3D motion
Author Affiliations
  • Jacqueline Fulvio
    Department of Psychology, University of Wisconsin - Madison
  • Michelle Wang
    Department of Psychology, Otterbein University
  • Bas Rokers
    Department of Psychology, University of Wisconsin - Madison; Department of Psychology, Utrecht University, the Netherlands
Journal of Vision September 2015, Vol. 15(12), 1180. https://doi.org/10.1167/15.12.1180
Abstract

Observers rely on multiple complementary sensory cues to guide behavior. Previously, we showed that when observers estimate 3D object motion, they exhibit a surprising tendency to confuse approaching and receding motion (Fulvio et al., JOV 2014). However, such confusions rarely seem to occur in real-world motion estimation. Extra-retinal cues such as head motion likely help disambiguate motion in depth. We tested this possibility using the Oculus Rift, a head-mounted virtual reality display that can provide appropriate sensory feedback in response to changes in head orientation. We predicted that if observers rely on head motion to disambiguate motion in depth, accuracy in the estimation of 3D object motion would improve under more natural viewing conditions. On each trial, a target with one of three contrast levels moved in a randomly chosen direction in the horizontal plane for 1 s before disappearing. Observers (n = 28) then adjusted the position of a 'paddle' around an invisible orbit so that it would intercept the target had the target continued along its trajectory. Three head-motion conditions were tested: (i) 'fixed', no display updating with head orientation; (ii) 'tracked', display updating with changes in head orientation; and (iii) 'lagged', display updating delayed by 50 ms. Feedback was not provided. Reduced target contrast increased the proportion of motion-in-depth confusions across head-motion conditions (t(207) = -12.905, p < 0.001, d = 0.89). Importantly, changes in head orientation were small (average maximum ~0.94 deg in 3D space), yet had a significant effect on performance (t(207) = -3.3491, p < 0.001, d = 0.23): motion-in-depth confusions were less likely in the tracked than in the fixed condition (t(166) = -1.94, p = 0.027, d = 0.42). We found no differences between the tracked and lagged conditions (p = 0.69) or between the fixed and lagged conditions (p = 0.13). These results reveal a critical role for extra-retinal cues in 3D motion perception and contribute to the understanding of cross-modal integration.

Meeting abstract presented at VSS 2015
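
The abstract does not describe how the three head-motion conditions were implemented; the following is a minimal Python sketch of one way to realize them in a per-frame render loop. The HeadPose class, the frame rate, and the buffering scheme are illustrative assumptions, not the authors' actual code; only the condition names and the 50 ms delay come from the abstract.

```python
from collections import deque
from dataclasses import dataclass

FRAME_RATE_HZ = 75                          # assumed headset refresh rate
LAG_FRAMES = round(0.050 * FRAME_RATE_HZ)   # ~50 ms expressed in frames

@dataclass
class HeadPose:
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0

def pose_for_condition(condition: str, current: HeadPose,
                       history: deque) -> HeadPose:
    """Select the head pose used to update the display this frame."""
    history.append(current)
    if condition == "fixed":
        return HeadPose()        # display never updates with head motion
    if condition == "tracked":
        return current           # display updates with the latest pose
    if condition == "lagged":
        return history[0]        # oldest buffered pose, ~50 ms old
    raise ValueError(f"unknown condition: {condition}")

# Per-trial usage: the deque's maxlen caps the buffer at ~50 ms of poses,
# so the oldest entry is automatically the appropriately delayed sample.
history = deque(maxlen=LAG_FRAMES + 1)
pose = pose_for_condition("lagged", HeadPose(yaw=0.5), history)
```

A fixed-length deque yields the delayed pose without explicit timestamp bookkeeping; a real implementation would likely interpolate between buffered samples to hit the 50 ms target exactly rather than rounding to a whole number of frames.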
