Abstract
Previous studies have shown that, unlike V1 and A1, temporal, parietal, and prefrontal cortices process sensory information from multiple sensory modalities (Downar et al., 2000). However, it is unknown whether neurons in these areas process sensory information regardless of modality (amodal) or whether these areas contain separate but spatially intermixed populations of neurons, each dedicated to a single sensory modality (multimodal). Here we used fMRI to study how temporal, parietal, and prefrontal areas represent scene categories when visual and auditory inputs provide conflicting evidence; for instance, participants saw an image of a beach while hearing office sounds. If a brain area processes visual and auditory information separately, then scene categories should remain decodable from at least one modality, because conflicting information from the other modality is not processed by the same neurons. In an area where neurons integrate information across sensory modalities, however, conflicting visual and auditory inputs should interfere with each other and thereby degrade the neural representation of scene categories. In our experiment, scene categories could be decoded from fMRI activity in temporal and parietal areas for the visual or the auditory stimulus, even in the conflict condition. By contrast, in prefrontal areas we could decode neither visual nor auditory scene categories in this conflict condition, although both types of scene categories were decodable in the image-only and sound-only conditions, when there was no conflicting information from the other modality. These results show that even though temporal, parietal, and prefrontal cortices all represent scene categories based on multimodal inputs, only prefrontal cortex contains an amodal representation of scene categories, presumably at a conceptual level.
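The abstract does not specify the analysis code, but a minimal sketch of the kind of cross-validated decoding analysis it describes could look as follows. This assumes a standard scikit-learn-style MVPA pipeline; the array shapes, ROI data, and labels are hypothetical placeholders, not the authors' actual pipeline.

```python
# Sketch of ROI-based scene-category decoding (hypothetical data, not the
# authors' pipeline). Voxel patterns from one ROI are classified into scene
# categories with leave-one-run-out cross-validation.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels, n_runs = 96, 500, 8
X = rng.standard_normal((n_trials, n_voxels))             # ROI voxel patterns (placeholder)
y = np.tile([0, 1, 2, 3], n_trials // 4)                  # scene-category labels (e.g. beach, office, ...)
runs = np.repeat(np.arange(n_runs), n_trials // n_runs)   # scanner-run labels for cross-validation

# Linear classifier on standardized voxel patterns, cross-validated across runs.
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())

print(f"decoding accuracy: {scores.mean():.3f} (chance = 0.25)")
```

For the conflict condition, such an analysis would be run twice on the same trials, once with labels from the visual stimulus and once with labels from the auditory stimulus, to test which modality's scene category can still be read out from each region.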
Meeting abstract presented at VSS 2017