Abstract
Several studies suggest that size estimates from the visual and haptic modalities are combined in a statistically optimal fashion (Ernst & Banks, 2002). Recently, Gepshtein et al. (2005) showed that optimal integration occurs only if the two signals originate from the same perceived spatial location. Here we show that when a tool is used to explore an object haptically, optimal integration is largely restored despite the spatial separation between haptic and visual input. Our method was similar to Gepshtein et al.'s. We used a two-interval forced-choice task to measure discrimination thresholds for judging the distance between two planes. We first measured thresholds in visual-alone and haptic-alone conditions, to predict performance when both cues were available. We then measured thresholds when both cues were available, at spatial offsets between 0 and 100 mm, in (i) a no-tool condition and (ii) a with-tool condition, in which subjects grasped the stimulus with visually defined “sticks” attached to the finger and thumb. Tool length varied with spatial offset so that the ends of the tool were always aligned with the visual stimulus. The tool was extinguished before contact with the planes, so the information available was identical in the no-tool and with-tool conditions. In the no-tool condition we replicated Gepshtein et al.'s result: with zero offset, two-cue thresholds were as predicted by optimal cue integration, increasing to single-cue levels at the 100 mm offset. In the with-tool condition, thresholds were significantly lower than single-cue performance at all spatial offsets and were close, though not equal, to the prediction of optimal cue integration. We conclude that the degree of integration of visual and haptic information is determined not by the spatial separation between the hand and the visual object, but by a more sophisticated mapping process that can take into account the dynamics and geometry of tools.
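
For reference, the optimal-integration benchmark invoked above is the standard maximum-likelihood cue-combination model (Ernst & Banks, 2002); the notation below is introduced purely for exposition. Assuming independent Gaussian noise on the single-cue size estimates $\hat{S}_V$ and $\hat{S}_H$, with variances $\sigma_V^2$ and $\sigma_H^2$ obtained from the single-cue discrimination thresholds, the optimal combined estimate weights each cue by its relative reliability:

$$
\hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H, \qquad
w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2}, \quad
w_H = \frac{1/\sigma_H^2}{1/\sigma_V^2 + 1/\sigma_H^2},
$$

giving the predicted two-cue variance

$$
\sigma_{VH}^2 = \frac{\sigma_V^2 \, \sigma_H^2}{\sigma_V^2 + \sigma_H^2} \;\le\; \min\!\left(\sigma_V^2,\, \sigma_H^2\right).
$$

Two-cue thresholds at or near this bound indicate optimal integration; thresholds at single-cue levels indicate that the two signals were used separately.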