September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Combining human MEG and fMRI data reveals the spatio-temporal dynamics of animacy and real-world object size
Author Affiliations
  • Seyed-Mahdi Khaligh-Razavi
    Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
  • Radoslaw Cichy
    Department of Education and Psychology, Free University Berlin, Berlin, Germany
  • Dimitrios Pantazis
    McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
  • Aude Oliva
    Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
Journal of Vision August 2017, Vol.17, 574. doi:https://doi.org/10.1167/17.10.574
Citation: Seyed-Mahdi Khaligh-Razavi, Radoslaw Cichy, Dimitrios Pantazis, Aude Oliva; Combining human MEG and fMRI data reveals the spatio-temporal dynamics of animacy and real-world object size. Journal of Vision 2017;17(10):574. https://doi.org/10.1167/17.10.574.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Understanding the computational principles the human brain uses to perform recognition tasks requires a comprehensive view of the brain regions involved in processing sensory information and of their temporal dynamics. Here, by combining high spatial resolution (fMRI) and high temporal resolution (MEG) brain data (N=15) with theoretical models using representational similarity analysis (RSA), we reveal the spatio-temporal dynamics of processing object properties, such as animacy and real-world size, in the human brain. We show that the two properties engage overlapping but different networks of brain areas. The peak for representing animacy information is earlier (~173 ms) than the peak for representing real-world object size (~196 ms) [two-sided sign-rank test, p < 0.0001]. Regions associated with the peak of animacy representation are bilateral PHC, VO, LO, left fusiform and MT, while regions associated with the peak of real-world size representation are right VO, left MT, and bilateral PHC. Our analyses also suggest that animacy information is spatio-temporally more sustained than real-world size information. The novel Content Dependent Fusion (CDF) approach proposed here for combining MEG and fMRI data further enabled us to visualize the representational connectivity fingerprints of the human brain regions involved in identifying animacy and real-world size information during the first few hundred milliseconds of vision. Mapping the dynamics of neural information processing in space and time can reveal the nature of specific informational pathways, allowing for a broad view of where and when neural information is computed and transmitted to create mental representations in the human brain.

Meeting abstract presented at VSS 2017
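The MEG-fMRI fusion idea underlying the abstract can be illustrated with a minimal sketch: correlate the time-resolved MEG representational dissimilarity matrices (RDMs) with a region's fMRI RDM to obtain a similarity time course. This is a generic RSA-fusion illustration on synthetic data, not the authors' Content Dependent Fusion method; all names, shapes, and the random RDMs are illustrative assumptions.

```python
# Minimal RSA-based MEG-fMRI fusion sketch on synthetic data.
# Assumptions (not from the paper): 20 conditions, 50 MEG time points,
# random symmetric RDMs standing in for measured dissimilarities.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_cond, n_time = 20, 50  # hypothetical conditions and MEG time points

def random_rdm(n, rng):
    """Return a random symmetric dissimilarity matrix with a zero diagonal."""
    m = rng.random((n, n))
    m = (m + m.T) / 2
    np.fill_diagonal(m, 0.0)
    return m

# One MEG RDM per time point; one fMRI RDM for a (hypothetical) ROI.
meg_rdms = np.stack([random_rdm(n_cond, rng) for _ in range(n_time)])
fmri_rdm = random_rdm(n_cond, rng)

# Fusion: Spearman-correlate the upper triangles at each time point,
# yielding a time course of MEG-fMRI representational similarity.
iu = np.triu_indices(n_cond, k=1)
fusion = np.array([spearmanr(meg_rdms[t][iu], fmri_rdm[iu]).correlation
                   for t in range(n_time)])
peak_time = int(np.argmax(fusion))  # index of strongest MEG-fMRI match
```

With real data, peaks in such a fusion time course per region are what allow statements like "animacy peaks at ~173 ms in bilateral PHC"; here the data are random, so `peak_time` is meaningless beyond demonstrating the computation.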
