Vision Sciences Society Annual Meeting Abstract | August 2017
Vestibular and visual information are required for the accurate perception of object motion during self-motion
Author Affiliations
  • Mingyang Xie
    Institute of Cognitive Neuroscience, East China Normal University, Shanghai, PRC
  • Diederick Niehorster
    Institute of Psychology, University of Muenster, Muenster, Germany
  • Markus Lappe
    Institute of Psychology, University of Muenster, Muenster, Germany
  • Li Li
    Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong SAR
    Neural Science Program, NYU Shanghai, Shanghai, PRC
Journal of Vision, August 2017, Vol. 17, 427. https://doi.org/10.1167/17.10.427
Abstract

Although humans can accurately perceive scene-relative object motion during self-motion in the real world, recent studies have reported that such object motion perception is not accurate when based on visual information alone. Here we extend this work by systematically examining the perception of scene-relative object motion based on vestibular information only, visual information only, and combined vestibular and visual information. In the vestibular-only condition, observers wore a head-mounted display (Oculus DK2, 100° FOV) and walked at 1 m/s through an empty ground environment that provided no flow information about self-motion. In the visual-only condition, observers viewed a simulation of linear translation at the same speed over a random-dot ground environment that provided optic flow. In the combined vestibular and visual condition, observers walked through the random-dot ground environment. In all three display conditions, a fixation point was placed on the ground in front of the observer. After 1 s of self-motion at 1 m/s, the fixation point disappeared and a moving red dot probe (diameter: 1°) appeared in front of the observer at 15° below the horizon. The probe moved sideways in the virtual world at 6°/s, and observers judged whether the probe moved away from or toward them using a handheld controller. We found that with only vestibular information about self-motion, about half of the probe's retinal motion component due to self-motion (mean ± SE: 54% ± 4%) was removed for the recovery of scene-relative object motion. With only visual information, a higher percentage (81% ± 4%, t(11) = 4.830, p < 0.05) was removed. With combined vestibular and visual information, the percentage removed increased to 98% ± 3% (t(11) = 5.785, p < 0.05). We conclude that neither vestibular information nor visual information alone is sufficient for the accurate perception of scene-relative object motion during self-motion. Accurate perception of scene-relative object motion requires the integration of vestibular and visual information.
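
To make the dependent measure concrete, the sketch below illustrates one way the "percentage of the retinal motion component due to self-motion removed" (sometimes called a flow-parsing gain) can be derived from a trajectory-nulling version of the toward/away judgment described above. The abstract does not report the observer's eye height, the nulled trajectory angle, or the exact analysis, so the parameter values and function names here are hypothetical placeholders, not the authors' actual procedure.

    # Illustrative sketch only; eye height and the nulled angle are assumed,
    # not reported in the abstract.
    import math

    EYE_HEIGHT = 1.6   # m, assumed observer eye height (not given in abstract)
    WALK_SPEED = 1.0   # m/s, translation speed (from the abstract)
    DEPRESSION = 15.0  # deg below the horizon, probe position (from the abstract)

    def self_motion_component(eye_height, speed, depression_deg):
        """Retinal angular speed (deg/s) that forward translation at `speed`
        adds to a ground-plane point seen straight ahead at the given
        depression angle. From tan(beta) = h/d, differentiating with the
        distance d shrinking at `speed` gives d(beta)/dt = h*speed/(d^2 + h^2)."""
        beta = math.radians(depression_deg)
        d = eye_height / math.tan(beta)  # ground distance to the point
        omega = eye_height * speed / (d**2 + eye_height**2)
        return math.degrees(omega)

    def flow_parsing_gain(nulled_angle_deg, lateral_speed, self_speed):
        """Fraction of the self-motion component removed, inferred from the
        world-relative trajectory angle perceived as purely sideways. A gain
        of 1 means the self-motion component was fully discounted."""
        residual = lateral_speed * math.tan(math.radians(nulled_angle_deg))
        return 1.0 - residual / self_speed

    v_self = self_motion_component(EYE_HEIGHT, WALK_SPEED, DEPRESSION)
    print(f"self-motion component at 15 deg below horizon: {v_self:.2f} deg/s")
    # Hypothetical nulled trajectory angle of 5 deg:
    print(f"flow-parsing gain: {flow_parsing_gain(5.0, 6.0, v_self):.2f}")

Under these assumed values, translation at 1 m/s adds roughly 2.4°/s of retinal motion to the probe location, and a hypothetical 5° nulled trajectory angle would correspond to a gain of about 0.78, i.e., about 78% of the self-motion component removed, in the same ballpark as the visual-only result reported above.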

Meeting abstract presented at VSS 2017
