Abstract
The extent to which motion parallax can provide depth and distance information sufficient for manual interaction has not been clearly established. A series of experiments is presented that assesses the contribution of motion parallax to judgements of reach distance and object depth under monocular, bi-ocular and stereo viewing conditions. A camera pair captured images of real objects in a sparse environment and relayed them to a modified Wheatstone stereoscope, where they were viewed as virtual objects in front of the observer that could be “grasped” without vision of the hand. The cameras and stereoscope formed a rigidly linked system resting on a linear track, allowing the whole device to be slaved to lateral movement of the participant's head over a distance equivalent to the observer's inter-ocular separation. Reach distance and grasp aperture were recorded with a magnetic tracking device (miniBIRD). The results confirm the advantage of stereo information in specifying object depth (gain approx. 1.2), but motion parallax did not enable equivalent performance when added to monocular or bi-ocular viewing (gain approx. 0.5 for both). Without the benefit of motion or stereo information, performance was poor (gain approx. 0.2). Participants were more variable in judging reach distance, and this diluted the effect of viewing condition that was demonstrated for object depth. Stereo and motion parallax information do not appear to enable accurate judgements of egocentric distance in the absence of additional cues such as height in the scene (vertical gaze angle) and vertical disparity. In a further experiment, in which participants were presented with conflicting motion parallax and stereo information, distance estimates supported a cue-averaging model in which each cue carried equal weight. Some participants reported apparent object motion under these conflicting-cue conditions. Research supported by the UK EPSRC.