September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Spatiotemporal mechanisms of multisensory integration
Author Affiliations & Notes
  • Majed J Samad
    Facebook Reality Labs
  • Cesare V Parise
    Facebook Reality Labs
Journal of Vision September 2019, Vol. 19, 300.
      © ARVO (1962-2015); The Authors (2016-present)


Multisensory integration is fundamentally a problem of redundancy exploitation, in which the brain combines corresponding information from different senses to obtain faster and more reliable perceptual estimates. To achieve this, the brain must solve the correspondence problem: that is, it must continuously monitor the senses to infer which signals contain redundant (i.e., correlated) information that should be integrated. Over the last few years, several psychophysical studies have demonstrated the fundamental role of temporal correlation in the integration and temporal processing of visual and acoustic stimuli. However, it is still an open question whether the same principles operate across other pairs of modalities and whether the detection of spatial correlation is also necessary for integration. To answer this question, we used state-of-the-art virtual reality and hand-tracking technology to deliver visual and tactile stimuli on the palm of the hand. On each trial the stimuli, composed of a train of four white spheres and four vibrotactile bursts, were delivered in random spatial positions and with random timing, and different seeds were used to generate the visual and tactile streams. Participants observed the stimuli and reported whether or not the two modalities appeared to share a common cause. Reverse correlation analyses were then performed on the spatiotemporal cross-correlation of the visuotactile stimuli. Results clearly demonstrated that participants indeed relied on spatiotemporal correlation to perform the causality judgment task: only stimuli with high correlation at near-zero spatial and temporal offset were consistently judged as sharing a common cause. These results are consistent with the behavior of a population of biologically plausible multisensory correlation detectors, whose architecture closely resembles the Hassenstein-Reichardt detector originally proposed for motion vision.
Taken together, these results support the view that correlation detection is indeed the canonical computation for multisensory integration, and that it operates across modalities in both time and space.
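The detection principle described above can be illustrated with a toy model. The sketch below is a minimal Hassenstein-Reichardt-style delay-and-multiply correlator, not the authors' actual model or stimuli; all names, the delay parameter, and the simulated streams are illustrative assumptions. It compares the detector's response to a correlated visuotactile pair (one stream is a time-shifted copy of the other) against an independent pair generated from a different random sequence.

```python
import numpy as np

def reichardt_output(sig_a, sig_b, delay=3):
    """Minimal Hassenstein-Reichardt-style correlation detector (illustrative).

    Each subunit multiplies one input by a delayed copy of the other;
    subtracting the two mirror-symmetric subunits gives an opponent
    output whose magnitude tracks cross-correlation and whose sign
    reflects which signal leads.
    """
    def lag(s):
        # Delay a signal by `delay` samples, zero-padding the start.
        return np.concatenate([np.zeros(delay), s[:-delay]])

    subunit_ba = sig_a * lag(sig_b)  # responds when sig_b leads sig_a
    subunit_ab = sig_b * lag(sig_a)  # responds when sig_a leads sig_b
    return float(np.mean(subunit_ab - subunit_ba))

rng = np.random.default_rng(0)
n = 2000
visual = rng.random(n)
# Correlated pair: the tactile stream is the visual stream lagged by 3 samples.
tactile_corr = np.concatenate([np.zeros(3), visual[:-3]])
# Independent pair: a stream drawn from a different random sequence.
tactile_indep = rng.random(n)

out_corr = reichardt_output(visual, tactile_corr)    # strong response: visual leads tactile
out_indep = reichardt_output(visual, tactile_indep)  # near-zero response: no shared cause
```

Under these assumptions, the detector responds strongly only when the two streams are correlated at a lag matching its internal delay, mirroring the finding that observers report a common cause only for stimuli with high correlation at near-zero spatiotemporal offset.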

