Abstract
Visuo-haptically guided actions (e.g., grasping a handheld object) are more accurate than actions guided by vision or haptics alone. This multisensory advantage originates from the additional haptic positional information provided by the hand holding the object. However, it is still unclear whether grasping relies on the average position of the fingers, which provides the overall object location, or on the individual finger positions, which provide information about the object sides. Here we contrasted these hypotheses by introducing visuo-haptic size incongruencies. We varied the size of the held lower part of an object (30, 40, 50 mm) while the grasped upper part remained constant in size (40 mm). We then compared Visuo-Haptic (VH) grasping with two additional conditions in which participants (n = 25) could either only see the object (Visual, V) or only hold its lower part (Haptic, H). We found a modulation of grip aperture in both H and VH, but not in V, with haptics in VH accounting for 25% of these changes. This suggests that the multisensory advantage is not based on the overall object location alone. In a second experiment (n = 27), we contrasted the separate and joint contributions of the individual fingers holding the object. We compared VH with two additional conditions in which only the index finger or only the thumb contacted the object, at its back or front side, respectively. Grip aperture was similar across conditions but was modulated only when both fingers contacted the object (VH condition). In contrast, when the index finger or the thumb contacted the object separately, grasping movements were shifted toward the contacting finger. Our results suggest that multisensory grasping relies on the individual finger positions and their joint relationship, which together provide information about the positions of the object sides and their mutual distance.