Abstract
When grasping, humans integrate information from different sensory modalities, and even high-level, semantic information can affect the kinematics of the grasping process. We asked whether semantic information alone can specify grasping parameters such as object size. We measured the precision (variability) and the slope of the maximum grip aperture (MGA, the maximal opening between index finger and thumb) across different object sizes in a visual, a semantic, and a bimodal (visual + semantic) condition. Eighteen subjects grasped bars of different sizes (2-7 cm) while seeing a bar (visual condition), while hearing a number (2-7) representing the size of the bar without seeing it (semantic condition), or while seeing the bar and hearing the size information (bimodal condition). In all conditions, MGA was linearly related to bar size with similar slopes, indicating that verbal information about object size can be used to scale the grip aperture efficiently when vision is not available. Because we used natural viewing conditions, cue integration approaches predict visual capture: the more reliable visual information should dominate the semantic information in the bimodal condition. This is what we found: MGA variability was about three times higher in the semantic condition than in the visual and bimodal conditions, which did not differ significantly from each other. Based on these results, future research can degrade the visual information and measure the degree to which semantic and visual information are integrated in grasping.
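The visual-capture prediction follows from the standard reliability-weighted (maximum-likelihood) cue-combination model. As a minimal sketch, with \hat{s}_V and \hat{s}_S denoting the visually and semantically specified size estimates and \sigma_V and \sigma_S their single-cue variabilities (illustrative symbols, not quantities reported in the abstract):

\[
\hat{s}_{VS} = w_V \hat{s}_V + w_S \hat{s}_S, \qquad
w_V = \frac{\sigma_S^2}{\sigma_V^2 + \sigma_S^2}, \quad
w_S = \frac{\sigma_V^2}{\sigma_V^2 + \sigma_S^2},
\]
\[
\sigma_{VS}^2 = \frac{\sigma_V^2 \, \sigma_S^2}{\sigma_V^2 + \sigma_S^2} \le \min(\sigma_V^2, \sigma_S^2).
\]

With the roughly threefold difference in variability observed here (\sigma_S \approx 3\sigma_V), the visual weight would be about 0.9, so the bimodal estimate is expected to be dominated by vision and its variability to lie close to the visual-only level, consistent with the reported results.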
Meeting abstract presented at VSS 2012