Abstract
The ability to visually localize an object and perceive its shape is essential for executing successful grasping movements. However, the haptic sense can also provide valuable information about an object's position and shape when the object is touched with the hand. Here we investigate whether grasping actions toward objects that are simultaneously seen and touched are more efficient than those under unimodal guidance. Moreover, we identify which haptic object properties (position and/or size) play the major role in multisensory grasping. Participants (n = 20, 6000 total trials) performed grasping movements toward five differently sized objects, ranging from 30 mm to 70 mm, located at three egocentric distances. In the visual condition (V), participants had full vision of the object and the workspace. In the haptic condition (H), vision was prevented and the action was guided by haptic information from the non-grasping hand, which held the object. In the visuo-haptic condition (VH), both visual and haptic information were available throughout the movement. In an additional visuo-haptic condition (VHp), vision was fully available, but participants held a post that supported the object instead of holding the object itself. In this case, haptics was informative about the object's position, but not about its size. Participants opened their hands wider in the H condition than in the V condition (maximum grip aperture: 92 mm vs. 86 mm). The multisensory advantage was clear in the VH condition: the maximum grip aperture was considerably smaller (81 mm) and movements were 125 ms faster than in the unisensory conditions. Critically, in the VHp condition, in which participants had full vision of the object but only positional haptic information, grasping movements were as efficient as in the VH condition. We conclude that haptic position, not haptic size, is merged with visual signals when grasping movements are directed toward multisensory objects.
Meeting abstract presented at VSS 2018