Abstract
In the primate brain, visual perception of actions is supported by a distributed network of regions in occipital, temporal, parietal, and premotor areas. The representational properties of each of the regions involved in visual action processing remain to be specified. Here, we investigated the representational content of these regions using fMRI with representational similarity analysis (RSA), along with computer vision-based modeling of the stimuli. Participants viewed 2-second video clips of three agents performing eight different actions during fMRI scanning. We computed the representational dissimilarity matrix (RDM) for each brain region of interest and compared it with two sets of computational model representations, one constructed from visual attributes of the stimuli and one from semantic attributes. We found that different nodes of the action processing network have different representational properties. Posterior STS, known to be a key visual area for processing body movements and actions, appears to represent high-level visual features such as movement kinematics. As expected from prior research and theory on mirror neurons, as well as from computational models of biological motion perception and action recognition, representations become more abstract higher in the hierarchy; for example, our results suggest that inferior parietal cortex represents action category, intention, and the target of the action. Taken together with prior theory, empirical work, and computational modeling, we conclude that during visual processing of actions, pSTS pools information from earlier visual areas to compute and represent movement kinematics, which are then passed on to nodes of the action processing network in parietal and frontal regions that code higher-order, semantic aspects of actions.
Meeting abstract presented at VSS 2017
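The RSA pipeline summarized above can be sketched in a few lines. The following is a minimal illustrative example, not the authors' analysis code: condition counts (8 actions × 3 agents = 24 conditions), voxel count, the use of correlation distance for the RDMs, and Spearman rank correlation for the brain-model comparison are all assumptions for illustration, with random data standing in for ROI response patterns and model features.

```python
# Minimal RSA sketch (assumed details: 24 conditions = 8 actions x 3 agents,
# 100 voxels, correlation distance, Spearman brain-model comparison;
# random data stands in for real ROI patterns and model features).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_voxels = 24, 100
patterns = rng.standard_normal((n_conditions, n_voxels))   # stand-in ROI patterns

def rdm(x):
    """Representational dissimilarity matrix: 1 - Pearson r between condition rows."""
    return 1.0 - np.corrcoef(x)

brain_rdm = rdm(patterns)
model_rdm = rdm(rng.standard_normal((n_conditions, 50)))   # stand-in model RDM

# RDMs are symmetric with a zero diagonal, so compare only the upper triangles,
# using rank (Spearman) correlation as is common in RSA.
iu = np.triu_indices(n_conditions, k=1)
rho, p = spearmanr(brain_rdm[iu], model_rdm[iu])
print(round(float(rho), 3))
```

With real data, `patterns` would hold per-condition response estimates (e.g., GLM betas) from one region of interest, and a separate `model_rdm` would be built for each visual and semantic model; the resulting correlations are then compared across regions.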