August 2016 | Volume 16, Issue 12 | Open Access
Vision Sciences Society Annual Meeting Abstract | September 2016
Probing bimodal neural mechanisms in human ventral visual cortex
Author Affiliations
  • Job van den Hurk
    Brain and Cognition, Faculty of Biological Psychology, KU Leuven, Leuven, Belgium
  • Hans Op de Beeck
    Brain and Cognition, Faculty of Biological Psychology, KU Leuven, Leuven, Belgium
Journal of Vision September 2016, Vol.16, 508. doi:
      Job van den Hurk, Hans Op de Beeck; Probing bimodal neural mechanisms in human ventral visual cortex. Journal of Vision 2016;16(12):508.

      © ARVO (1962-2015); The Authors (2016-present)

Does the perception of natural sounds engage category-selective ventral temporal cortex (VTC) in the same way as visual stimuli do, or are other mechanisms involved? Here we investigate whether cross-decoding between the auditory and visual modalities is possible from neural responses in VTC. We hypothesize that natural sounds from a given category can predict the neural response to visual stimuli from the same category, and vice versa. In a 3T MRI scanner, subjects (n = 18) were presented with 4 categories: face, body, scene, and object. These categories were presented in 4 auditory and 4 visual runs, with each category repeated 4 times per run. For each pairwise combination of conditions, we trained an SVM classifier on multi-voxel patterns from the visual or auditory trials in VTC and assessed prediction accuracy on independent auditory or visual trials, testing all within- and cross-modality combinations. Across subjects, visual-to-visual decoding was nearly perfect for all condition pairs (Wilcoxon test, range of mean accuracies across pairs = 96.9-99.7%, p(FDR-corr) < 0.005). Auditory-to-auditory decoding was significant for all pairs but [body vs. object] (58.6-59.8%, p(FDR-corr) < 0.05). Auditory-to-visual decoding was significant for all pairs (62.5-76.0%, p(FDR-corr) < 0.01), and visual-to-auditory decoding was significant for all pairs but [body vs. face] and [scene vs. object] (53.9-58.0%, p(FDR-corr) < 0.05). Strikingly, the cross-modality generalization was asymmetric: auditory-to-visual decoding was significantly better than visual-to-auditory decoding (p < 0.001). Our results show not only that auditory stimuli elicit a distinctive response in VTC, but also that we could to a large extent cross-decode across modalities. This indicates that the neural responses in VTC to sound categories rely, at least in part, on the same neural mechanisms as responses to visual stimulation.
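The cross-decoding logic described in the abstract can be sketched with synthetic data: train a linear SVM on multi-voxel patterns from one modality, then test it on patterns from the other. This is an illustrative sketch, not the authors' analysis code; the simulated patterns, signal strengths, and trial counts are assumptions made here for demonstration only.

```python
# Hedged sketch of cross-modal MVPA decoding with a linear SVM.
# Synthetic "voxel" patterns: two categories share a common signal axis
# across modalities, mimicking the hypothesis of a shared VTC code.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 100

# Category "axis" shared across modalities (assumed for the simulation).
signal = rng.normal(size=n_voxels)

def simulate(scale):
    """Simulate multi-voxel patterns for two categories in one modality."""
    a = scale * signal + rng.normal(size=(n_trials, n_voxels))
    b = -scale * signal + rng.normal(size=(n_trials, n_voxels))
    X = np.vstack([a, b])
    y = np.array([0] * n_trials + [1] * n_trials)
    return X, y

X_vis_train, y_vis_train = simulate(1.0)  # strong visual category signal
X_vis_test, y_vis_test = simulate(1.0)
X_aud, y_aud = simulate(0.4)              # weaker, partially shared signal

clf = LinearSVC(random_state=0).fit(X_vis_train, y_vis_train)
within = clf.score(X_vis_test, y_vis_test)  # visual -> visual decoding
cross = clf.score(X_aud, y_aud)             # visual -> auditory decoding
print(f"visual->visual: {within:.2f}  visual->auditory: {cross:.2f}")
```

Repeating this for every pairwise combination of the four categories and both train/test modality orderings yields the full within/cross decoding matrix the abstract reports.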

Meeting abstract presented at VSS 2016
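The group-level statistics named in the abstract (Wilcoxon tests on per-subject accuracies, FDR-corrected across condition pairs) can be sketched as follows. The accuracies here are simulated, and Benjamini-Hochberg is one common FDR procedure; the abstract does not specify which correction was applied.

```python
# Hedged sketch: Wilcoxon signed-rank test of decoding accuracy against
# chance (50%) per condition pair, with Benjamini-Hochberg FDR correction.
# Simulated data; not the authors' code.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n_subjects, n_pairs = 18, 6  # 4 categories -> 6 pairwise comparisons

# Per-subject accuracies: above chance for all but the last pair.
acc = 0.5 + rng.normal(0.08, 0.05, size=(n_subjects, n_pairs))
acc[:, -1] = 0.5 + rng.normal(0.0, 0.05, size=n_subjects)

# One two-sided signed-rank test per condition pair, against chance.
pvals = np.array([wilcoxon(acc[:, j] - 0.5).pvalue for j in range(n_pairs)])

def fdr_bh(p, alpha=0.05):
    """Benjamini-Hochberg: boolean mask of rejected null hypotheses."""
    order = np.argsort(p)
    m = len(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = below.nonzero()[0].max()   # largest rank still under threshold
        reject[order[: k + 1]] = True  # reject all pairs up to that rank
    return reject

significant = fdr_bh(pvals)
print(significant)
```

With this construction, the pairs carrying a real effect survive correction while the null pair typically does not, mirroring the pattern of "significant for all pairs but ..." results in the abstract.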

