Abstract
Vision and touch code spatial information in different reference frames. For sensory integration, establishing whether visual and tactile stimuli share a common source is costly and might not occur automatically. We tested whether task-enforced encoding of both visual and tactile stimulus locations fosters multisensory integration (Experiment 1) and cross-sensory calibration (Experiment 2). On each trial, a visual, tactile, or visual-tactile stimulus was presented on a participant's occluded arm, and participants indicated the location of one stimulus. In multisensory trials, a cue indicated which modality to localize. This cue occurred either before or after stimulation (manipulated between participants); the post-stimulation cue forced participants to encode both the visual and the tactile location. In Experiment 1, unisensory and multisensory trials were interleaved, and visual-tactile pairs with different spatial discrepancies were tested. After localizing the cued stimulus, participants indicated whether they had perceived the stimuli in the same location (fusion) or in different locations (non-fusion). In Experiment 2, unisensory and multisensory trials were blocked, and visual-tactile stimulus pairs with a single, fixed spatial discrepancy were presented in multisensory trials; unisensory localization performance was tested before and after the multisensory phase. In Experiment 1, tactile location reports were shifted towards the location of the visual stimulus, indicating multisensory integration. Crucially, when the relevant modality was cued after, rather than before, the stimuli, tactile localization was also shifted in non-fusion trials, and the proportion of fused percepts increased. In Experiment 2, when the relevant modality was cued after the stimuli in multisensory trials, tactile localization in subsequent unisensory trials was significantly shifted, indicating cross-sensory calibration; this was not the case when the cue occurred before the stimuli. In sum, we found stronger effects of vision on touch when post-stimulation cueing forced participants to encode spatial information from both modalities. Hence, the integration of visual and tactile spatial information is not an automatic process.
Meeting abstract presented at VSS 2018