Abstract
Understanding the computational principles the human brain uses to perform recognition tasks requires a comprehensive view of the brain regions involved in processing sensory information and of their temporal dynamics. Here, by combining high-spatial-resolution fMRI and high-temporal-resolution MEG data (N=15) with theoretical models using representational similarity analysis (RSA), we reveal the spatio-temporal dynamics of processing object properties, such as animacy and real-world size, in the human brain. We show that the two properties engage overlapping but distinct networks of brain areas. The peak for representing animacy information occurs earlier (~173 ms) than the peak for representing real-world object size (~196 ms) [two-sided signed-rank test, p < 0.0001]. Regions associated with the peak of animacy representation are bilateral PHC, VO, and LO, and left fusiform and MT, while regions associated with the peak of size representation are right VO, left MT, and bilateral PHC. Our analyses also suggest that animacy information is spatiotemporally more sustained than real-world size information. The novel Content Dependent Fusion (CDF) approach proposed here for combining MEG and fMRI data further enabled us to visualize the representational connectivity fingerprints of the human brain regions involved in identifying animacy and real-world size information during the first few hundred milliseconds of vision. Mapping the dynamics of neural information processing in space and time can reveal the nature of specific informational pathways, providing a broad view of where and when neural information is computed and transmitted to create mental representations in the human brain.
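To make the RSA logic concrete, the sketch below illustrates the generic time-resolved RSA step underlying this kind of analysis: correlating MEG representational dissimilarity matrices (RDMs) at each time point with a target RDM, here a binary animacy model RDM. This is a minimal sketch under assumed data shapes, not the authors' CDF implementation; the function name rsa_timecourse, the array dimensions, and the random stand-in data are all hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import squareform

def rsa_timecourse(meg_rdms, target_rdm):
    """Spearman correlation between each time-resolved MEG RDM and a
    target RDM (a theoretical model RDM or an fMRI ROI RDM).

    RDMs are symmetric with zero diagonals, so only the lower-triangle
    entries are compared.
    """
    target_vec = squareform(target_rdm, checks=False)  # condensed vector
    return np.array([
        spearmanr(squareform(rdm, checks=False), target_vec).correlation
        for rdm in meg_rdms
    ])

# --- Toy example with random data standing in for real recordings ---
rng = np.random.default_rng(0)
n_times, n_cond = 120, 92                 # hypothetical time points / conditions
meg_rdms = rng.random((n_times, n_cond, n_cond))
meg_rdms = (meg_rdms + meg_rdms.transpose(0, 2, 1)) / 2  # symmetrize
for t in range(n_times):
    np.fill_diagonal(meg_rdms[t], 0.0)

# Binary animacy model RDM: 0 for same-category pairs (animate/animate or
# inanimate/inanimate), 1 across categories. Labels here are hypothetical.
animate = rng.integers(0, 2, n_cond).astype(bool)
animacy_model = (animate[:, None] != animate[None, :]).astype(float)

animacy_tc = rsa_timecourse(meg_rdms, animacy_model)
peak_idx = int(animacy_tc.argmax())       # time index of peak animacy information
```

Replacing the model RDM with an fMRI ROI RDM in the same correlation yields a fusion time course per region, which is the standard way MEG temporal and fMRI spatial information are combined in RSA-based fusion analyses.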
Meeting abstract presented at VSS 2017