Abstract
The perception of binocular motion in depth has been studied under a wide range of conditions. Under some conditions, but not others, thresholds for detecting motion in depth can be poorer than those for equivalent lateral motion. However, studies all agree that the visual system responds symmetrically to disparities in front of and behind the fixation point. Here we were interested in how useful these precise, computer-controlled studies are for predicting how we respond to motion in depth in the real world. We addressed this issue by measuring motion-in-depth direction thresholds for the real-world motion of a small, bright target moving along a linear track driven by a stepper motor. Observers viewed a stationary fixation point at 1.3 m, while motion-in-depth thresholds were measured for motions centred at positions ranging from 15 cm in front of to 15 cm behind fixation. We collected data for both monocular and binocular viewing. These data were compared with those from experiments in which motion in depth was generated and presented on a computer monitor, with motion parameters matched as closely as possible to those of the real-world motion. For real-world motion, we found a binocular advantage for motions centred in front of and around fixation. Performance was not symmetrical around fixation: binocular performance was worse than monocular performance for motions centred behind fixation. For computer-generated motion in depth, we found a symmetrical pattern of performance, with binocular performance deteriorating as motions were centred further from fixation. In summary, there are small but distinct differences in performance between real-world and computer-generated motion-in-depth perception. We will consider the implications of these data for our understanding of binocular depth perception and the use of binocular disparity in virtual displays.