Vision Sciences Society Annual Meeting Abstract | September 2018
Vision and touch are not automatically integrated
Author Affiliations
  • Stephanie Badde
    Department of Psychology, New York University; Center for Neural Science, New York University
  • Karen Navarro
    Department of Psychology, City College of New York
  • Michael Landy
    Department of Psychology, New York University; Center for Neural Science, New York University
Journal of Vision September 2018, Vol. 18, 95. https://doi.org/10.1167/18.10.95

      Stephanie Badde, Karen Navarro, Michael Landy; Vision and touch are not automatically integrated. Journal of Vision 2018;18(10):95. https://doi.org/10.1167/18.10.95.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Vision and touch code spatial information in different reference frames. Establishing whether visual and tactile stimuli share a common source, a prerequisite for sensory integration, is therefore costly and might not occur automatically. We tested whether task-enforced encoding of both visual and tactile stimulus locations fosters multisensory integration (Experiment 1) and cross-sensory calibration (Experiment 2). On each trial, a visual, tactile, or visual-tactile stimulus was presented on a participant's occluded arm. Participants indicated the location of one stimulus. In multisensory trials, a cue indicated which modality to localize. This cue came either before or after stimulation (varied across participants); the post-stimulation cue forced participants to encode both the visual and the tactile location. Experiment 1: Unisensory and multisensory trials were interleaved, and visual-tactile pairs with different spatial discrepancies were tested. After localizing the cued stimulus, participants indicated whether they had perceived the stimuli in the same (fusion) or in different (non-fusion) locations. Experiment 2: Unisensory and multisensory trials were blocked, and visual-tactile stimulus pairs with one fixed spatial discrepancy were presented in multisensory trials. Unisensory localization performance was tested before and after the multisensory phase. In Experiment 1, tactile location reports were shifted towards the location of the visual stimulus, indicating multisensory integration. Crucially, when the relevant modality was cued after rather than before the stimuli, tactile localization was also shifted in non-fusion trials, and the proportion of fused percepts increased. In Experiment 2, when the relevant modality was cued after the stimuli in multisensory trials, tactile localization in subsequent unisensory trials was significantly shifted, indicating cross-sensory calibration. This was not the case when the cue occurred before the stimuli. In sum, we found stronger effects of vision on touch when post-stimulation cueing forced participants to encode spatial information from both modalities. Hence, integration of visual and tactile spatial information is not an automatic process.

Meeting abstract presented at VSS 2018
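Editor's note: The benchmark against which a visual bias on tactile localization is usually evaluated is the standard reliability-weighted averaging (forced-fusion) model of cue integration. The abstract does not specify the authors' model, so the following is background only, not their analysis. Under forced fusion, the combined location estimate is

\hat{x}_{VT} = w_V \hat{x}_V + w_T \hat{x}_T, \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_T^2}, \quad w_T = 1 - w_V,

where \hat{x}_V and \hat{x}_T are the unisensory location estimates and \sigma_V^2 and \sigma_T^2 their variances. A tactile report shifted towards the visual location, as observed in Experiment 1, is the signature of a nonzero visual weight w_V; the reported absence of such shifts under pre-stimulation cueing is what motivates the conclusion that integration is not automatic.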
