Abstract
Categorical information represented in the ventral temporal cortex (VT) can be decoded from functional MRI (fMRI) using multivariate pattern analysis (MVPA). Our understanding of the representational space that encodes this information is incomplete. Category-selective regions have been found for faces, bodies, and places, but these regions cannot explain how fine-scale category information beyond those high-level categories is represented in VT. Here, we present a method to derive a model of representational space in VT that is common across subjects and captures the fine-scale decodable information content in VT. The representational space model we derived for VT enabled us to successfully perform both within-subject and between-subject classification of categorical information and complex movie scenes. To characterize the dimensions of this space, we mapped their cortical topographies, their response profiles for different animate and inanimate categories, and their functional connectivity with the rest of the brain. We further derived the vectors in this space for contrasts of interest, such as faces versus objects and human faces versus animal faces, and mapped their topographies and functional connectivity with the rest of the brain. Category-selective regions, the FFA and PPA, emerge as parts of the cortical topographies of the respective contrast vectors mapped in this space. Moreover, because this model is common to all of our subjects, it allows us to compute such functionally defined ROIs from localizer data in a subset of subjects and then identify the location of the FFA and PPA in other subjects with no localizer data. These results suggest that our representational space model captures coarse- and fine-scale category-related information content in VT that is common across brains and preserves the associated cortical topography.