Abstract
An important function of the visual system is to represent the 3D structure of the world from the sequence of 2D images projected onto the retinae. During observer translation, relative image motion between stationary objects at different distances (motion parallax, MP) provides potent depth information. However, if an object is moving relative to the scene, the computation of depth from MP is complicated by an additional component of image motion related to the object's own motion. To compute depth from MP correctly, the brain should discount this component. Previous experimental and theoretical work on depth perception from MP has assumed that objects are stationary in the world; how the brain perceives the depth of moving objects from MP has not been examined.
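As background for the geometry involved (our own sketch, not part of the abstract): for an observer translating laterally at speed T while fixating a point at distance f, a stationary point at depth d beyond fixation produces retinal motion \(\dot{\theta}\), and the compensatory pursuit eye rotation is \(\dot{\alpha} = T/f\). Under these standard assumptions, relative depth follows from the motion/pursuit ratio:

\[
\frac{\dot{\theta}}{\dot{\alpha}} \;=\; \frac{d}{f + d},
\qquad\text{so}\qquad
\frac{d}{f} \;\approx\; \frac{\dot{\theta}}{\dot{\alpha}} \quad \text{for } d \ll f .
\]

This relation holds only if all of \(\dot{\theta}\) is attributable to self-motion, which is exactly the assumption a moving object violates.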
Naïve human subjects viewed a virtual 3D scene consisting of a ground plane and stationary background objects, while lateral self-motion was simulated by optic flow. A target object, located above the ground plane, could be either stationary or moving laterally at various velocities. Subjects judged the depth of the target object relative to the plane of fixation. They showed systematic biases in perceived depth that depended on object velocity, with larger biases during monocular presentation of the target object. We consider two possible sources of this bias. First, if the brain computes depth by parsing retinal image motion into components related to self-motion and object motion, then incomplete flow parsing should result in inaccurate depth estimates. Second, uncertainty regarding whether the object is moving in the world may affect the brain's ability to isolate the image motion caused by self-motion. Future work will evaluate whether these two explanations account for our observations. Our findings establish that perception of depth from MP does not compensate for object motion.
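One way to make the first (flow-parsing) account concrete, using the lateral-translation geometry sketched above; the parsing gain \(\lambda\) and the small-depth approximation are illustrative assumptions of ours, not quantities reported here. For an object at roughly the fixation distance moving laterally at world speed \(v_{\mathrm{obj}}\), its retinal motion contains a self-motion (parallax) term and an object-motion term:

\[
\dot{\theta}_{\mathrm{ret}} \;\approx\;
\underbrace{\frac{T\,d}{f^{2}}}_{\text{self-motion parallax}}
\;+\;
\underbrace{\frac{v_{\mathrm{obj}}}{f}}_{\text{object motion}} .
\]

If the brain parses out only a fraction \(\lambda \in [0,1]\) of the object-motion term before inverting the parallax relation, the residual is misattributed to depth:

\[
\hat{d} \;\approx\; d \;+\; (1-\lambda)\,\frac{v_{\mathrm{obj}}\, f}{T} .
\]

For any \(\lambda < 1\), this toy model predicts a depth bias that grows with object velocity, which is qualitatively consistent with the velocity-dependent biases we observed.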