Matthew X Lowe, Yalda Mohsenzadeh, Benjamin Lahner, Santani Teng, Ian Charest, Aude Oliva; Spatiotemporal neural representations in high-level visual cortex evoked from sounds. Journal of Vision 2019;19(10):174. doi: https://doi.org/10.1167/19.10.174.
It is well established that areas of high-level visual cortex are selectively driven by visual categories such as places, objects, and faces. These areas include the scene-selective parahippocampal place area (PPA), occipital place area (OPA), and retrosplenial cortex (RSC), the object-selective lateral occipital complex (LOC), and the face-selective fusiform face area (FFA). Here we sought to determine whether neural representations in these regions can be evoked without visual input, and if so, how these representations emerge across space and time in the human brain. Using an event-related design, we presented participants (n = 15) with 80 real-world sounds from various sources (animals, human voices, objects, and spaces) and instructed them to form a corresponding mental image with their eyes closed. To trace the emergence of neural representations at both millisecond and millimeter resolution, we acquired spatial data from functional magnetic resonance imaging (fMRI) and temporal data from magnetoencephalography (MEG) in independent sessions. Regions of interest (ROIs) were independently localized in auditory and visual cortex. Using similarity-based fusion (Cichy et al., 2014), we correlated MEG and fMRI data to reveal the correspondence between temporal and spatial neural dynamics. Our results reveal that neural representations evoked by auditory stimuli emerge rapidly (<100 ms) in the face-selective FFA, as well as in voice-selective auditory areas. In contrast, representations in scene- and object-selective cortex emerged later (>130 ms). As expected, we found no evidence for neural representations in early visual cortex. By tracing the cascading emergence of neural representations across the human brain, we thus reveal the differential spatiotemporal dynamics of representations in high-level visual cortex evoked in the absence of visual input.
Our findings thus support a multimodal neural framework for sensory representations, and track these emerging neural representations across space and time in the human brain.
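The similarity-based fusion approach referenced above (Cichy et al., 2014) compares representational dissimilarity matrices (RDMs): one RDM per MEG time point and one per fMRI ROI, correlated across condition pairs to yield a fusion time course per region. The sketch below is not the authors' code; it is a minimal illustration of the technique on synthetic data, with all array shapes, noise levels, and the `fusion_timecourse` name chosen for the example.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condition-by-feature patterns -> condensed RDM (1 - Pearson r)."""
    return pdist(patterns, metric="correlation")

def fusion_timecourse(meg_patterns, fmri_patterns):
    """Correlate an fMRI ROI RDM with the MEG RDM at each time point.

    meg_patterns:  array (time, conditions, sensors)
    fmri_patterns: array (conditions, voxels)
    Returns one Spearman rho per MEG time point.
    """
    fmri_rdm = rdm(fmri_patterns)
    return np.array([spearmanr(rdm(m), fmri_rdm).correlation
                     for m in meg_patterns])

# Toy demo: both modalities share a latent condition geometry, so the
# fusion correlation peaks at the time point where MEG carries it.
rng = np.random.default_rng(0)
n_cond, n_sens, n_vox = 10, 32, 100
latent = rng.standard_normal((n_cond, 5))            # shared condition geometry
fmri = (latent @ rng.standard_normal((5, n_vox))
        + 0.1 * rng.standard_normal((n_cond, n_vox)))
meg = np.stack([
    rng.standard_normal((n_cond, n_sens)),           # t0: pure noise
    latent @ rng.standard_normal((5, n_sens))        # t1: shared structure
    + 0.1 * rng.standard_normal((n_cond, n_sens)),
])
rho = fusion_timecourse(meg, fmri)
```

In the study's design, this per-time-point correlation is what localizes when each ROI's spatial representation (e.g., FFA vs. PPA) first appears in the MEG signal.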