Abstract
It is crucial for animals to judge accurately the depth of moving objects. During observer translation, the relative image motion between stationary objects at different distances, known as motion parallax (MP), provides important depth information. However, when an object also moves relative to the scene, the computation of depth from MP is complicated by the object's independent motion. We have previously shown that, when humans view a moving object during visually simulated self-motion, they show a systematic bias in perceived depth that depends on the directions of object motion and self-motion, as well as on object speed. Here, we examined the origins of this depth bias by directly asking subjects to report whether an object was moving relative to the scene while they simultaneously performed a depth discrimination task. Naïve human subjects viewed a virtual 3D scene consisting of a ground plane and stationary background objects, while lateral self-motion was simulated by optic flow. A target object, lying above the ground plane, could be either stationary or moving laterally at various speeds. Subjects judged the depth of the target object relative to the plane of fixation and also reported whether they thought the object was moving independently relative to the scene. For object speeds at which subjects reported the object to be moving ~50% of the time, they showed biases in perceived depth that depended on their report about object motion. This dependence was more prominent when the object was viewed monocularly, a condition in which depth cues are less reliable. Our results indicate that perceived depth based on MP depends systematically on subjects' causal inference regarding scene-relative object motion, consistent with the predictions of a Bayesian observer model.