September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2019
Spatiotemporal neural representations in high-level visual cortex evoked from sounds
Author Affiliations & Notes
  • Matthew X Lowe
    Computer Science and Artificial Intelligence Lab, MIT
  • Yalda Mohsenzadeh
    Computer Science and Artificial Intelligence Lab, MIT
  • Benjamin Lahner
    Computer Science and Artificial Intelligence Lab, MIT
  • Santani Teng
    Computer Science and Artificial Intelligence Lab, MIT
    Smith-Kettlewell Eye Research Institute
  • Ian Charest
    School of Psychology, University of Birmingham
  • Aude Oliva
    Computer Science and Artificial Intelligence Lab, MIT
Journal of Vision September 2019, Vol.19, 174. doi:https://doi.org/10.1167/19.10.174

Matthew X Lowe, Yalda Mohsenzadeh, Benjamin Lahner, Santani Teng, Ian Charest, Aude Oliva; Spatiotemporal neural representations in high-level visual cortex evoked from sounds. Journal of Vision 2019;19(10):174. https://doi.org/10.1167/19.10.174.



© ARVO (1962-2015); The Authors (2016-present)
Abstract

It is well established that areas of high-level visual cortex are selectively driven by visual categories such as places, objects, and faces. These areas include the scene-selective parahippocampal place area (PPA), occipital place area (OPA), and retrosplenial cortex (RSC), the object-selective lateral occipital complex (LOC), and the face-selective fusiform face area (FFA). Here we sought to determine whether neural representations in these regions can be evoked without visual input, and if so, how these representations emerge across space and time in the human brain. Using an event-related design, we presented participants (n = 15) with 80 real-world sounds from various sources (animals, human voices, objects, and spaces) and instructed them to form a corresponding mental image with their eyes closed. To trace the emergence of neural representations at both the millisecond and millimeter scale, we acquired spatial data with functional magnetic resonance imaging (fMRI) and temporal data with magnetoencephalography (MEG) in independent sessions. Regions of interest (ROIs) were independently localized in auditory and visual cortex. Using similarity-based fusion (Cichy et al., 2014), we correlated the MEG and fMRI data to reveal correspondences between temporal and spatial neural dynamics. Our results reveal that neural representations evoked by auditory stimuli emerged rapidly (< 100 ms) in the face-selective FFA as well as in voice-selective auditory areas, whereas representations in scene- and object-selective cortex emerged later (> 130 ms). As expected, we found no evidence for neural representations in early visual cortex. By tracing the cascading emergence of these representations across the brain, we reveal the differential spatiotemporal dynamics of representations in high-level visual cortex evoked in the absence of visual input. Our findings thus support a multimodal neural framework for sensory representations and track these representations as they emerge across space and time in the human brain.
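For readers unfamiliar with similarity-based MEG-fMRI fusion, the sketch below illustrates the general logic of the approach described above (after Cichy et al., 2014): a representational dissimilarity matrix (RDM) computed from each MEG time point is correlated with the RDM of an fMRI region of interest, yielding a time course of correspondence between temporal and spatial neural dynamics. This is a minimal illustration, not the authors' analysis code; the array shapes, the placeholder random data, and the use of SciPy's Spearman correlation are assumptions.

import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import squareform

n_stimuli = 80  # real-world sounds used as stimuli

# Placeholder RDMs. In practice, MEG RDMs come from pairwise dissimilarities
# between sensor patterns at each time point, and the fMRI RDM from voxel
# patterns within a localized ROI (e.g., FFA, PPA, OPA, RSC, or LOC).
rng = np.random.default_rng(0)
meg_rdms = rng.random((1200, n_stimuli, n_stimuli))  # (time points, stimuli, stimuli)
fmri_rdm = rng.random((n_stimuli, n_stimuli))        # one ROI

def fusion_timecourse(meg_rdms, fmri_rdm):
    """Spearman-correlate each time-resolved MEG RDM with an fMRI ROI RDM."""
    fmri_vec = squareform(fmri_rdm, checks=False)    # vectorize off-diagonal entries
    r = np.empty(len(meg_rdms))
    for t, rdm in enumerate(meg_rdms):
        r[t], _ = spearmanr(squareform(rdm, checks=False), fmri_vec)
    return r                                         # one correlation per MEG time point

ffa_fusion = fusion_timecourse(meg_rdms, fmri_rdm)
# The time at which the fusion time course first rises above chance (assessed with
# permutation statistics in the original method) estimates when representations
# matching that ROI's representational geometry emerge.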

Acknowledgement: This work was supported by the Vannevar Bush Faculty Fellowship program, funded by ONR grant number N00014-16-1-3116. The experiments were conducted at the Athinoula A. Martinos Imaging Center at the McGovern Institute for Brain Research, Massachusetts Institute of Technology.