Abstract
Grasping is a flexible human motor behavior coordinated on the basis of perceptual information about the structure of surfaces in reachable space. In two experiments, we investigated the perceptual information supporting accurate grasp performance. Participants reached to grasp target objects situated in illusory contexts under two perceptual conditions: a natural closed-loop condition with full visual feedback and a modified closed-loop condition that selectively prevented online vision of the hand. In natural closed-loop grasping in an illusory context, the anticipatory opening between the forefinger and thumb (grip aperture) reflected the illusory perceptual size in early stages of the movement and the veridical physical size in late stages. Dynamic analysis of grip aperture scaling revealed a clear mid-flight correction, suggesting that additional information for motor control became available during grasp execution. Based on this finding, we conducted a follow-up experiment in which online vision of the hand was prevented. In contrast to the natural closed-loop condition, where maximum grip aperture (MGA) was tuned to veridical physical size, MGA in the modified closed-loop condition was tuned to illusory perceptual size. We focus on the implications of these results for the perceptual control of action, arguing that they cannot be accounted for by explanations that posit specialized vision-for-action processes capable of extracting metrically accurate, Euclidean spatial information akin to a 3D depth map of the local environment. Instead, the results suggest that online control processes based on visual comparison of hand and target positions could support accurate grasp performance in illusory contexts.
Meeting abstract presented at VSS 2015