September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Discriminating multimodal from amodal representations of scene categories using fMRI decoding
Author Affiliations
  • Yaelan Jung
    Department of Psychology, University of Toronto
  • Bart Larsen
    Department of Psychology, University of Pittsburgh
  • Dirk Bernhardt-Walther
    Department of Psychology, University of Toronto
Journal of Vision August 2017, Vol.17, 308. doi:https://doi.org/10.1167/17.10.308
Abstract

Previous studies have shown that, unlike V1 and A1, temporal, parietal, and prefrontal cortices process sensory information from multiple sensory modalities (Downar et al., 2000). However, it is unknown whether neurons in these areas process sensory information regardless of modality (amodal), or whether these areas contain separate but spatially intermixed populations of neurons dedicated to each sensory modality (multimodal). Here we used fMRI to study how temporal, parietal, and prefrontal areas represent scene categories when visual and auditory inputs provide conflicting evidence. For instance, participants viewed an image of a beach while hearing office sounds. If a brain area processes visual and auditory information separately, then scene categories should remain decodable from at least one modality, because conflicting information from the other modality is not processed by the same neurons. In an area where neurons integrate information across sensory modalities, however, conflicting visual and auditory inputs should interfere and thereby degrade the neural representation of scene categories. In our experiment, we were able to decode scene categories from fMRI activity in temporal and parietal areas from either the visual or the auditory stimuli. By contrast, in prefrontal areas we could decode neither visual nor auditory scene categories in this conflicting condition. Note that both types of scene categories were decodable in the image-only and sound-only conditions, when there was no conflicting information from the other modality. These results show that even though temporal, parietal, and prefrontal cortices all represent scene categories based on multimodal inputs, only prefrontal cortex contains an amodal representation of scene categories, presumably at a conceptual level.
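The decoding logic described above can be sketched in code. This is a minimal illustration only, not the authors' analysis pipeline: the classifier (a correlation-based nearest-centroid decoder in the style of Haxby et al.), the numbers of categories, runs, and voxels, and the simulated response patterns are all assumptions made for the example. It shows the core idea of leave-one-run-out cross-validated decoding of scene category from multivoxel activity patterns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the study): 4 scene categories,
# 6 scanner runs, 50 voxels in one region of interest.
n_categories, n_runs, n_voxels = 4, 6, 50

# Simulate one activity pattern per category per run: a fixed
# category-specific signal plus run-level noise (a stand-in for
# per-run beta estimates from a GLM).
signal = rng.normal(size=(n_categories, n_voxels))
patterns = signal[:, None, :] + 0.8 * rng.normal(
    size=(n_categories, n_runs, n_voxels)
)

# Leave-one-run-out cross-validation with a correlation-based
# nearest-centroid classifier: a test pattern is assigned to the
# category whose training-run average it correlates with most.
correct = 0
for test_run in range(n_runs):
    # Category centroids from all runs except the held-out one.
    centroids = np.delete(patterns, test_run, axis=1).mean(axis=1)
    for cat in range(n_categories):
        test_pattern = patterns[cat, test_run]
        r = [np.corrcoef(test_pattern, centroids[c])[0, 1]
             for c in range(n_categories)]
        correct += int(np.argmax(r) == cat)

accuracy = correct / (n_runs * n_categories)
chance = 1 / n_categories
print(f"decoding accuracy: {accuracy:.2f} (chance = {chance:.2f})")
```

In this framework, "decodable" means cross-validated accuracy reliably above chance; the conflicting-cue conditions in the abstract correspond to asking whether accuracy survives when the other modality carries a mismatched category label.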

Meeting abstract presented at VSS 2017
