Ramprakash Srinivasan, Julie Golomb, Aleix Martinez; A Neural Basis of Facial Action Recognition in Humans. Journal of Vision 2016;16(12):382. doi: 10.1167/16.12.382.
© ARVO (1962-2015); The Authors (2016-present)
By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain must visually interpret these action units to understand other people's actions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional Magnetic Resonance Imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multi-voxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, allowing the perceived action units to be estimated in participants whose data were not used to train the multi-voxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization, as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain, suggest a mechanism for the analysis of large numbers of facial actions, and point to how this capacity may be lost in psychopathologies.
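The cross-participant decoding described above can be illustrated with a minimal sketch. This is not the authors' analysis pipeline: the data are synthetic, and the voxel counts, trial counts, and classifier choice (a scikit-learn logistic regression in a leave-one-subject-out scheme) are assumptions made purely for illustration.

```python
# Illustrative sketch of cross-participant multi-voxel pattern analysis (MVPA):
# train a linear decoder on voxel patterns from all-but-one participant, then
# test whether it predicts which action unit the held-out participant saw.
# All data are synthetic; dimensions and effect sizes are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_subjects, n_trials, n_voxels = 5, 40, 100

# A synthetic voxel pattern distinguishing two hypothetical action units.
signal = rng.normal(size=n_voxels)

def simulate_subject():
    labels = rng.integers(0, 2, size=n_trials)           # which AU was shown
    noise = rng.normal(scale=2.0, size=(n_trials, n_voxels))
    patterns = noise + np.outer(labels * 2 - 1, signal)  # shared AU coding
    return patterns, labels

subjects = [simulate_subject() for _ in range(n_subjects)]

# Leave-one-subject-out decoding: a coding consistent across people should
# let the decoder generalize to a participant it was never trained on.
accuracies = []
for held_out in range(n_subjects):
    X_train = np.vstack([subjects[s][0] for s in range(n_subjects) if s != held_out])
    y_train = np.concatenate([subjects[s][1] for s in range(n_subjects) if s != held_out])
    X_test, y_test = subjects[held_out]
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracies.append(clf.score(X_test, y_test))

print(f"mean leave-one-subject-out accuracy: {np.mean(accuracies):.2f}")
```

Above-chance accuracy on held-out participants is the signature of a coding that is shared across brains rather than idiosyncratic to each individual.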
Meeting abstract presented at VSS 2016