Vision Sciences Society Annual Meeting Abstract  |  September 2018
Volume 18, Issue 10  |  Open Access
Multi-modal representation of visual and auditory motion directions in hMT+/V5
Author Affiliations
  • Mohamed Rezk
    Institut de recherche en sciences psychologiques (IPSY), Université catholique de Louvain (UCL), Belgium.
  • Stephanie Cattoir
    Center for Mind/Brain Sciences (CiMeC), University of Trento, Italy.
  • Ceren Battal
    Institut de recherche en sciences psychologiques (IPSY), Université catholique de Louvain (UCL), Belgium; Center for Mind/Brain Sciences (CiMeC), University of Trento, Italy.
  • Olivier Collignon
    Institut de recherche en sciences psychologiques (IPSY), Université catholique de Louvain (UCL), Belgium; Center for Mind/Brain Sciences (CiMeC), University of Trento, Italy.
Journal of Vision September 2018, Vol.18, 1065. doi:https://doi.org/10.1167/18.10.1065
Citation: Mohamed Rezk, Stephanie Cattoir, Ceren Battal, Olivier Collignon; Multi-modal representation of visual and auditory motion directions in hMT+/V5. Journal of Vision 2018;18(10):1065. https://doi.org/10.1167/18.10.1065.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

The human middle temporal area hMT+/V5 is a region of extrastriate occipital cortex that has long been known to code for the direction of visual motion trajectories. Although this region has traditionally been considered purely visual, recent studies have suggested that the hMT+/V5 complex may also selectively code for auditory motion. However, the nature of this cross-modal response in hMT+/V5 remains unresolved. In this study, we used functional magnetic resonance imaging (fMRI) to comprehensively investigate the representational format of visual and auditory motion directions in hMT+/V5. Using multivariate pattern analysis, we demonstrate that visual and auditory motion directions can be reliably decoded within individually localized hMT+/V5. Moreover, we could predict motion directions in one modality by training the classifier on patterns from the other modality. Such successful cross-modal decoding indicates the presence of shared motion information across modalities. Previous studies have used successful cross-modal decoding as a proxy for abstracted representation in a brain region. However, relying on a series of complementary multivariate analyses, we show unambiguously that the brain responses underlying auditory and visual motion directions in hMT+/V5 are highly dissimilar. For instance, auditory motion direction patterns were strongly anti-correlated with the visual motion patterns, and the two modalities could be reliably discriminated on the basis of their activity patterns. Moreover, representational similarity analyses demonstrated that modality-invariant models fitted our data poorly, whereas models assuming separate pattern geometries for audition and vision correlated strongly with the observed data. Our results demonstrate that hMT+/V5 is a multi-modal region containing motion information from different modalities. However, while shared information exists across modalities, hMT+/V5 maintains highly distinct response geometries for each modality. These results also serve as a timely reminder that significant cross-modal decoding is not a proxy for abstracted representation in the brain.

Meeting abstract presented at VSS 2018
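
The within- and cross-modal decoding logic summarized in the abstract can be illustrated with a minimal sketch on simulated voxel patterns. This example is not part of the original abstract and does not reproduce the authors' analysis pipeline: the voxel and trial counts, the noise model, the toy "shared signal plus modality-specific component" generative assumption, and the choice of a linear support vector classifier are all hypothetical.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical dimensions: 100 voxels, 40 trials per motion direction,
# 4 directions (e.g. left, right, up, down). Illustrative only.
n_voxels, n_trials_per_dir, n_dirs = 100, 40, 4
labels = np.repeat(np.arange(n_dirs), n_trials_per_dir)

def simulate_patterns(templates, noise_sd=1.5):
    """Noisy single-trial voxel patterns drawn around per-direction templates."""
    return np.vstack([templates[d] + noise_sd * rng.standard_normal(n_voxels)
                      for d in labels])

# Toy generative assumption: each direction has a visual template; the auditory
# template shares part of that direction signal but adds a modality-specific
# component, mimicking "shared information, different pattern geometry".
visual_templates = {d: rng.standard_normal(n_voxels) for d in range(n_dirs)}
auditory_templates = {d: 0.5 * visual_templates[d] + rng.standard_normal(n_voxels)
                      for d in range(n_dirs)}

X_vis = simulate_patterns(visual_templates)
X_aud = simulate_patterns(auditory_templates)

clf = LinearSVC(max_iter=10_000)

# Within-modality decoding of motion direction (chance level = 0.25).
print("visual decoding:  ", cross_val_score(clf, X_vis, labels, cv=5).mean())
print("auditory decoding:", cross_val_score(clf, X_aud, labels, cv=5).mean())

# Cross-modal decoding: train on visual patterns, test on auditory patterns.
# Above-chance accuracy here indicates shared direction information, but it
# does not by itself establish a modality-invariant representation.
clf.fit(X_vis, labels)
print("cross-modal (train visual, test auditory):", clf.score(X_aud, labels))
```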
