Abstract
Humans can recognize others' actions in the social environment. This recognition ability is tolerant to drastic variations in the visual input caused by the movements of people in the environment. What neural underpinnings support this position-tolerant action recognition? In the present study, we aimed to identify brain regions that contain position-tolerant representations of actions and to explore the representational content of these regions. We recorded fMRI data from twenty-two subjects while they observed video clips of ten different human actions in point-light display format. Each stimulus was presented in either the upper or the lower visual field. We used multivoxel pattern analysis and a searchlight technique to identify brain regions that contain position-tolerant action representations. In a generalization test, linear support vector machine classifiers were trained on fMRI patterns evoked by stimuli presented in one position and tested on stimuli presented in the other position. Results showed above-chance classification in the left and right lateral occipitotemporal cortex, the right inferior intraparietal sulcus, and the right superior intraparietal sulcus. To investigate the representational content of these regions, we constructed two models: one based on the movements of the body parts and another based on similarity ratings obtained in an independent behavioral experiment. In a multiple regression analysis, we used these models to predict the cross-position decoding accuracies for each ROI. The objective body-part model was a better predictor of the accuracies in the parietal regions, whereas the model based on subjective similarity ratings was a better predictor of the accuracies in the occipitotemporal regions. These results suggest the existence of two distinct networks that contain abstract representations of human actions.
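
To make the cross-position generalization test concrete, the following is a minimal sketch in Python using scikit-learn. The variable names and the synthetic data are illustrative assumptions, not the study's actual pipeline; in the real analysis, the patterns would come from the voxels of each searchlight sphere or ROI.

```python
# Sketch of the cross-position generalization test (synthetic data; the
# arrays stand in for multivoxel fMRI patterns from one searchlight/ROI).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels, n_actions = 100, 200, 10
y_upper = rng.integers(0, n_actions, n_trials)   # action labels, upper field
y_lower = rng.integers(0, n_actions, n_trials)   # action labels, lower field
X_upper = rng.normal(size=(n_trials, n_voxels))  # patterns, upper-field trials
X_lower = rng.normal(size=(n_trials, n_voxels))  # patterns, lower-field trials

def cross_position_accuracy(X_train, y_train, X_test, y_test):
    """Train a linear SVM on one position and test on the other."""
    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(X_train, y_train)
    return clf.score(X_test, y_test)

# Average over both train/test directions; with ten actions, chance is 0.10,
# so reliably above-chance accuracy indicates a position-tolerant representation.
acc = 0.5 * (cross_position_accuracy(X_upper, y_upper, X_lower, y_lower)
             + cross_position_accuracy(X_lower, y_lower, X_upper, y_upper))
print(f"cross-position decoding accuracy: {acc:.3f}")
```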
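Similarly, a sketch of the multiple-regression step, under the assumption that the dependent variable is the set of pairwise cross-position decoding accuracies for the ten actions; the predictor contents below are synthetic placeholders for the two models.

```python
# Sketch of the multiple regression for one ROI: pairwise decoding accuracies
# regressed on a body-part-movement model and a behavioral-similarity model.
# All numbers are synthetic; names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_pairs = 10 * 9 // 2                       # 45 pairs of the 10 actions
body_part_model = rng.random(n_pairs)       # objective movement dissimilarity
behavioral_model = rng.random(n_pairs)      # subjective rated dissimilarity
pairwise_accuracy = rng.random(n_pairs)     # decoding accuracy per action pair

# z-score the predictors so the fitted coefficients are comparable in size.
X = np.column_stack([body_part_model, behavioral_model])
X = (X - X.mean(axis=0)) / X.std(axis=0)

reg = LinearRegression().fit(X, pairwise_accuracy)
print(dict(zip(["body_part", "behavioral"], np.round(reg.coef_, 3))))
```

Comparing the standardized coefficients across ROIs is one way to ask which model better predicts the decoding accuracies in the parietal versus the occipitotemporal regions.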