Abstract
It is known that people produce different grip apertures when required to reach and grasp objects than when required to manually estimate them. Typically, larger spans between the index finger and thumb are found during grasping than during manual estimation (Foster, Fantoni, Caudek & Domini, 2011). An important difference between these tasks concerns the position of the hand with respect to the object. This hand-position signal might provide additional information used for the shaping of the grasp (e.g., egocentric distance for scaling visual information).
Participants were asked to grasp a virtual object without seeing their hand and without haptic feedback. This grasp was measured in three tasks: (a) a reach-to-grasp task; (b) a grasp-on-location task, in which the hand was positioned by a robotic arm at the locations participants had reached in the reach-to-grasp task; and (c) a grasp-off-location task, in which the hand was located away from the object and close to the body. The disparity-defined virtual object was composed of three vertical rods: one rod was positioned midway between and in front of two flanking rods. Four depth separations were used between the central rod and the flanking rods, and this arrangement of rods was presented at two distances with consistent vergence and accommodative cues.
We found that the final grip aperture was consistently larger in the grasp-on-location task than in the grasp-off-location task. The exact same visual information thus gave rise to different grasping behaviors depending on the hand's position with respect to the object. However, an even larger final grip aperture was observed in the reach-to-grasp task, even though the hand positions were coincident with those used in the grasp-on-location task. Actively generated movements toward the same objects thus produced yet another grasping behavior.
Meeting abstract presented at VSS 2012