October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract
Scene-relative object motion biases depth percepts based on motion parallax
Author Affiliations
  • Ranran French
    University of Rochester
  • Gregory DeAngelis
    University of Rochester
Journal of Vision October 2020, Vol.20, 569. doi:https://doi.org/10.1167/jov.20.11.569
      Ranran French, Gregory DeAngelis; Scene-relative object motion biases depth percepts based on motion parallax. Journal of Vision 2020;20(11):569. https://doi.org/10.1167/jov.20.11.569.

      © ARVO (1962-2015); The Authors (2016-present)

An important function of the visual system is to recover the 3D structure of the world from the sequence of 2D images projected onto the retinae. During observer translation, relative image motion between stationary objects at different distances (motion parallax, MP) provides potent depth information. However, if an object is moving relative to the scene, computing depth from MP becomes more complicated, since the object's image motion contains an additional component due to its motion in the world. To compute depth from MP correctly, the brain should discount this component. Previous experimental and theoretical work on depth perception from MP has assumed that objects are stationary in the world; how the brain perceives the depth of moving objects from motion parallax has not been examined. Naïve human subjects viewed a virtual 3D scene consisting of a ground plane and stationary background objects, while lateral self-motion was simulated by optic flow. A target object, lying above the ground plane, could be either stationary or moving laterally at different velocities. Subjects were asked to judge the depth of the target object relative to the plane of fixation. Subjects showed systematic biases in perceived depth that depended on object velocity, with larger biases during monocular presentation of the target object. We consider two possible sources of this bias. First, if the brain computes depth by parsing retinal image motion into components attributable to self-motion and to object motion, then incomplete flow parsing would yield inaccurate depth estimates. Second, uncertainty about whether or not the object is moving in the world may impair the brain's ability to isolate the image motion caused by self-motion. Future work will evaluate whether these two possible explanations account for our observations. Our findings establish that perception of depth from MP does not compensate for object motion.
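The first proposed explanation, incomplete flow parsing, can be illustrated with a toy computation. This sketch is ours, not the authors' model: the small-angle geometry, parameter names, and the single "parsing gain" parameter are all illustrative assumptions, chosen only to show how partial discounting of object motion would bias a depth estimate in the direction of the object's world motion.

```python
def estimated_depth(d, f, T, v, g):
    """Toy depth-from-MP estimate under incomplete flow parsing.

    All quantities are hypothetical/illustrative:
      d: true depth of the target relative to fixation (m; + = far)
      f: fixation (viewing) distance (m)
      T: lateral observer translation speed (m/s)
      v: lateral object speed in the world (m/s)
      g: flow-parsing gain in [0, 1]; 1 = object motion fully discounted
    """
    w_self = T * d / f**2               # image motion from self-motion (small-angle approx.)
    w_obj = v / f                        # image motion from world-relative object motion
    residual = w_self + (1.0 - g) * w_obj  # motion remaining after (partial) flow parsing
    return residual * f**2 / T           # invert the self-motion relation to get depth

# With v = 0 (stationary object) or g = 1 (complete parsing), the estimate
# recovers the true depth d; with 0 <= g < 1 and v != 0, the estimate is
# biased by (1 - g) * f * v / T, i.e., it grows with object speed.
```

In this toy model the bias is proportional to object velocity, which is qualitatively consistent with the velocity-dependent biases the abstract reports; distinguishing this account from the uncertainty-based account is the stated goal of the authors' future work.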
