Abstract
When grasping with one eye covered, the finger and thumb are opened significantly wider than for binocularly guided grasps, as if to build in a margin of error. This has been interpreted as evidence for a functional specialism for binocular vision in the control of grasping (Servos et al., 1992; Watt & Bradshaw, 2000). Such studies have, however, confounded the available depth cues with the precision with which object properties are estimated. Removing binocular cues from a normal scene degrades depth information, so impaired performance is to be expected even if grasping does not depend specifically on binocular cues. We first measured just-noticeable differences (JNDs) in object size for computer-generated objects viewed binocularly and monocularly. In the binocular condition the object was a sparse random-dot stereogram depicting a rectangular block. In the monocular condition the same shape was defined by texture gradients on its surfaces. Objects were presented along a horizontal surface, below eye level. By varying object distance (cf. Hillis et al., 2004) we determined stimulus conditions in which the precision of size estimates was matched across binocular and monocular conditions. We then measured movement kinematics for grasps to these “matched” stimuli. Appropriate haptic feedback was provided. Vision of the hand and stimulus was occluded at movement onset. For most observers, grasp apertures did not differ significantly when reaching for matched binocular and monocular stimuli, and their grasps were smaller under monocular viewing when monocular size estimates were more precise. For some observers the pattern of JNDs predicted trends in the grasp apertures, but grasps remained slightly larger in the monocular condition. These results suggest that grasping is not controlled by a specifically binocular system, but that the weighting given to binocular cues may be higher than predicted by statistically optimal cue integration (Knill, 2005).
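For reference, the statistically optimal (reliability-weighted) cue-integration benchmark against which the binocular weighting is compared is standardly written as follows; the notation here is ours, not the original's, with reliability taken as inversely proportional to the squared JND:

```latex
\hat{S} = w_b \hat{S}_b + w_m \hat{S}_m, \qquad
w_b = \frac{1/\sigma_b^2}{1/\sigma_b^2 + 1/\sigma_m^2}, \qquad
w_m = 1 - w_b,
```

where \(\hat{S}_b\) and \(\hat{S}_m\) are the size estimates from binocular and monocular cues and \(\sigma_b, \sigma_m\) their standard deviations (proportional to the measured JNDs). When JNDs are matched, \(w_b = w_m = 0.5\); an empirical binocular weight above this value would indicate over-weighting of binocular cues relative to the optimal prediction.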