Abstract
Over the last two decades, neurophysiological and neuroimaging studies have identified a network of brain regions in occipito-temporal, parietal, and frontal cortex that are involved in visual processing of actions. However, the neural computations and representational properties of each area remain unclear. In this study, we investigated the representational content of human brain areas in the action observation network using fMRI and representational similarity analysis. Observers were shown video clips of 8 different actions performed by 3 different agents (actors) during fMRI scanning. We then derived two indices from the representational similarity matrices for each region of interest (ROI): an agent decoding index and an action decoding index, which reflect the presence of significant agent and action information, respectively. We found significant agent decoding in early visual areas and category-sensitive cortical regions including FFA and EBA, as well as in the action observation network. However, the agent decoding index varied across ROIs: it was strongest in the right posterior superior temporal sulcus (pSTS), where it was significantly greater than in right-hemisphere parietal and frontal ROIs. In contrast, although we found significant action decoding in all visual areas as well as in the action observation network, the strength of action decoding was similar across ROIs. Nevertheless, hierarchical clustering revealed that the representational structure of action types varied across ROIs, indicating that action-related information changes across levels of the cortical hierarchy. These results suggest that during visual action processing, pSTS pools information from early visual areas to compute the identity of the agent, and passes that information to regions in parietal and frontal cortex that code higher-level aspects of actions, consistent with computational models of visual action recognition.
Meeting abstract presented at VSS 2015
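The abstract does not specify how the agent and action decoding indices were computed. A minimal sketch of one common formulation is shown below, assuming each index contrasts mean between-condition with mean within-condition dissimilarity in a condition-by-condition representational dissimilarity matrix (RDM); the function and the example RDM are illustrative, not the authors' exact procedure.

```python
import numpy as np

def decoding_index(rdm, labels):
    """Contrast between- vs. within-label dissimilarity in an RDM.

    Hypothetical formulation; the abstract does not give the exact definition.
    rdm    : (n, n) symmetric dissimilarity matrix (e.g., 1 - correlation
             between response patterns for each pair of conditions)
    labels : length-n array of labels (agent IDs or action IDs)
    """
    rdm = np.asarray(rdm, float)
    labels = np.asarray(labels)
    n = len(labels)
    same = np.equal.outer(labels, labels)       # pairs sharing the label
    off_diag = ~np.eye(n, dtype=bool)           # ignore the diagonal
    within = rdm[same & off_diag].mean()        # same label, different clip
    between = rdm[~same].mean()                 # different label
    return between - within                     # > 0 suggests label information

# Example: 24 conditions = 8 actions x 3 agents (random RDM for illustration)
actions = np.repeat(np.arange(8), 3)
agents = np.tile(np.arange(3), 8)
rng = np.random.default_rng(0)
rdm = rng.random((24, 24))
rdm = (rdm + rdm.T) / 2
np.fill_diagonal(rdm, 0)
print("action decoding index:", decoding_index(rdm, actions))
print("agent decoding index:", decoding_index(rdm, agents))
```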