Open Access
Vision Sciences Society Annual Meeting Abstract | December 2022
Forming 3-dimensional multimodal object representations relies on integrative coding
Author Affiliations
  • Aedan Y. Li
    Department of Psychology, University of Toronto
  • Natalia Ladyka-Wojcik
    Department of Psychology, University of Toronto
  • Chris B. Martin
    Florida State University
  • Heba Qazilbash
    Department of Psychology, University of Toronto
  • Ali Golestani
    Department of Psychology, University of Toronto
  • Dirk B. Walther
    Department of Psychology, University of Toronto
    Rotman Research Institute, Baycrest Health Sciences
  • Morgan D. Barense
    Department of Psychology, University of Toronto
    Rotman Research Institute, Baycrest Health Sciences
Journal of Vision December 2022, Vol.22, 3286. doi:https://doi.org/10.1167/jov.22.14.3286
Abstract

How do we combine complex multimodal information to form a coherent representation of “what” an object is? Existing literature has predominantly used visual stimuli to study the neural architecture of well-established object representations. Here, we studied how new multimodal object representations are formed in the first place, using a set of well-characterized 3D-printed shapes embedded with audio speakers. Applying multi-echo fMRI across a four-day learning paradigm, we examined the behavioral and neural changes that occurred before and after shape-sound features were paired to form objects. To quantify learning, we developed a within-subject measure of representational geometry based on collected similarity ratings. Before shape and sound features were paired together, representational geometry was driven by modality-specific information, providing direct evidence of feature-based representations. After shape-sound features were paired to form objects, representational geometry was now additionally driven by information about the pairing, providing causal evidence for an integrated object representation distinct from its features. Complementing these behavioral results, we observed a robust learning-related change in pattern similarity for shape-sound pairings in the anterior temporal lobes. Intriguingly, we also observed greater pre-learning activity for visual over auditory features in the ventral visual stream extending into perirhinal cortex, with the visual bias in perirhinal cortex attenuated after the shape-sound relationships were learned. Collectively, these results provide causal evidence that forming new multimodal object representations relies on integrative coding in the anterior temporal lobes and perirhinal cortex.
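The abstract's core analysis is a comparison of representational geometries derived from pairwise similarity ratings against candidate models of that geometry (modality-specific structure versus the learned shape-sound pairings). The sketch below is not the authors' code; it is a minimal illustration, under assumed stimulus counts and an assumed Spearman-correlation comparison, of how one might test whether a ratings-derived dissimilarity matrix is explained by a modality model, a pairing model, or both.

# Minimal sketch (illustrative assumptions, not the authors' pipeline):
# quantify representational geometry from pairwise similarity ratings and
# correlate it with a modality-based and a pairing-based model RDM.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import squareform

# Hypothetical stimulus set: 3 shapes and 3 sounds.
modality = np.array([0, 0, 0, 1, 1, 1])   # 0 = shape feature, 1 = sound feature
pairing  = np.array([0, 1, 2, 0, 1, 2])   # which shape was paired with which sound

# Model RDMs: 0 if two items share modality / pairing, 1 otherwise.
modality_rdm = (modality[:, None] != modality[None, :]).astype(float)
pairing_rdm  = (pairing[:, None]  != pairing[None, :]).astype(float)

def fit_models(similarity_ratings):
    """Convert a symmetric n x n matrix of similarity ratings into a
    dissimilarity matrix and correlate its off-diagonal entries with
    each model RDM. Returns (r_modality, r_pairing)."""
    rdm = 1.0 - similarity_ratings / similarity_ratings.max()
    np.fill_diagonal(rdm, 0.0)
    empirical = squareform(rdm, checks=False)  # condensed off-diagonal vector
    r_modality, _ = spearmanr(empirical, squareform(modality_rdm, checks=False))
    r_pairing, _  = spearmanr(empirical, squareform(pairing_rdm, checks=False))
    return r_modality, r_pairing

Under the account described in the abstract, the modality model should explain the ratings both before and after learning, whereas the pairing model should explain additional structure only after the shape-sound relationships have been learned; the same model-comparison logic extends to neural pattern similarity.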
