September 2024, Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
An Efficient Multimodal fMRI Localizer for High-Level Visual, Auditory, and Cognitive Regions in Humans
Author Affiliations & Notes
  • Samuel Hutchinson
    Massachusetts Institute of Technology
  • Ammar Marvi
    Massachusetts Institute of Technology
  • Freddy Kamps
    Massachusetts Institute of Technology
  • Emily M. Chen
    Massachusetts Institute of Technology
  • Rebecca Saxe
    Massachusetts Institute of Technology
  • Ev Fedorenko
    Massachusetts Institute of Technology
  • Nancy Kanwisher
    Massachusetts Institute of Technology
  • Footnotes
    Acknowledgements: This work was supported by NIH grant 1R01HD103847-01A1 (awarded to RS) and NIH grant 5UM1MH130981.
Journal of Vision September 2024, Vol. 24, 668. https://doi.org/10.1167/jov.24.10.668
Abstract

Although localizers for functional identification of category-selective regions in individual participants are widely used in fMRI research, most have not been optimized for the reliability and number of functionally distinctive regions they can identify, or for the amount of scan time needed to identify those regions. Further, functional localizers for regions in high-level visual cortex do not enable localization of cortical regions specialized for other domains of cognition. Here we attempt to solve these problems by developing a single localizer that, in just 23 minutes of fMRI scan time, enables reliable localization of cortical regions selectively engaged in processing faces, places, bodies, words, and objects, as well as cortical regions selectively engaged in processing speech sounds, language, and theory of mind. To this end, we use a blocked design in which participants watch videos from five visual categories (scenes, faces, objects, words, and bodies) while simultaneously listening to, and performing tasks on, five kinds of auditory stimuli (false-belief sentences, false-photo sentences, arithmetic problems, nonword strings, and texturized speech). We counterbalance these conditions across five runs of ten blocks each, with each block consisting of one 21-second auditory stimulus and seven three-second videos from a single visual category. Each visual condition occurs equally often with each auditory condition, so that contrasts in each modality are unconfounded with conditions in the other. Data from ten participants show that this Efficient Multimodal Localizer robustly identifies, within individual participants, cortical regions selectively engaged in processing faces, places, bodies, words, and objects, as well as speech sounds, language, and theory of mind, as tested against established standard localizers for these functions. The stimuli and presentation code for this new localizer will be made publicly available online, enabling future studies to identify functional regions of interest with the same procedure across multiple labs.
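
The counterbalancing described above is fully determined by the numbers in the abstract: 5 visual × 5 auditory conditions give 25 pairings, and 5 runs × 10 blocks give 50 blocks, so each pairing can occur exactly twice; each block also times out evenly, since seven 3-second videos span one 21-second audio clip (7 × 3 = 21). The Python sketch below is an illustrative reconstruction of one such schedule, not the authors' released presentation code: the two-offset Latin-square construction and the function name are assumptions, while the condition labels come from the abstract.

```python
import random
from collections import Counter

# Illustrative reconstruction of the counterbalanced block schedule described
# in the abstract -- NOT the authors' released presentation code. 5 visual x 5
# auditory conditions give 25 pairings; 5 runs x 10 blocks give 50 blocks, so
# each pairing occurs exactly twice. Using two Latin-square offsets per run
# also places every condition twice within each run.

VISUAL = ["scenes", "faces", "objects", "words", "bodies"]
AUDIO = ["false belief", "false photo", "arithmetic",
         "nonwords", "texturized speech"]

def make_schedule(seed=0):
    rng = random.Random(seed)
    runs = []
    for r in range(5):                       # 5 runs
        blocks = []
        for offset in (r, (r + 2) % 5):      # offsets r and r+2: across the 5
            for i in range(5):               # runs, each offset 0..4 is used
                blocks.append((VISUAL[i],    # exactly twice overall
                               AUDIO[(i + offset) % 5]))
        rng.shuffle(blocks)                  # randomize block order within run
        runs.append(blocks)
    return runs

# Sanity check: all 25 (visual, audio) pairings occur exactly twice across
# the 50 blocks, as required for the contrasts to be unconfounded.
counts = Counter(pair for run in make_schedule() for pair in run)
assert len(counts) == 25 and all(c == 2 for c in counts.values())
```

Fifty 21-second blocks account for 17.5 of the 23 minutes of scan time; the abstract does not specify how the remaining time is spent (presumably fixation or rest periods between blocks and runs).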
