Marie L. Smith, Frederic Gosselin, Garrison W. Cottrell, Philippe G. Schyns; Transmitting and decoding facial expressions of emotion. Journal of Vision 2004;4(8):909. doi: https://doi.org/10.1167/4.8.909.
Accurate and efficient interpretation of facial expressions of emotion is essential for humans to interact socially with others. Facial expressions communicate information from which we can quickly infer the state of mind of our peers and adjust our behavior accordingly. Considering the face as a transmitter of emotion signals and the brain as a decoder, we expect minimal overlap in the specific information used for each expression. Here we characterize the information underlying the recognition of the six basic facial expressions (fear, anger, sadness, happiness, surprise, and disgust) and evaluate how well each expression is interpreted. Using the Bubbles method with human observers, and a model observer for benchmarking, we characterize the specific information subsets corresponding to diagnostic (decoded, human) and available (transmitted, model) information for each expression and neutral. We found generally low correlations (m = .28, s = .14) among the available informative regions across expressions, with further de-correlation in the diagnostic regions of human observers (m = .12, s = .09). In particular, for human observers we found the informative regions for anger and fear to be orthogonal to those of all other expressions. Furthermore, for each expression, we determine the optimality of information use by human observers from a pixel-wise comparison of the human and model informative regions. The de-correlated information subsets of human observers can be considered as optimized inputs with which the specific responses of brain structures to facial features transmitting emotion signals can be isolated.
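The Bubbles logic referenced above can be illustrated with a minimal sketch: a stimulus is revealed through randomly placed Gaussian apertures ("bubbles") on each trial, and the diagnostic regions are estimated by contrasting the aperture masks of correctly and incorrectly categorized trials. All parameter values and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bubbles_mask(shape, n_bubbles=10, sigma=3.0):
    """Random revealing mask: a sum of Gaussian apertures, clipped to [0, 1].

    shape     -- (height, width) of the stimulus in pixels
    n_bubbles -- number of apertures per trial (assumed value)
    sigma     -- aperture standard deviation in pixels (assumed value)
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def classification_image(masks, correct):
    """Estimate diagnostic regions from per-trial masks and accuracy.

    Pixels revealed more often on correct than on incorrect trials
    get positive values: those locations carry diagnostic information.
    """
    masks = np.asarray(masks, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    return masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)

# Toy usage: simulate 200 trials on a 64x64 stimulus with random accuracy.
masks = [bubbles_mask((64, 64)) for _ in range(200)]
correct = rng.random(200) < 0.75  # hypothetical 75% accuracy
diag = classification_image(masks, correct)  # 64x64 diagnostic map
```

In the actual experiments, one such diagnostic map is accumulated per expression (for both human and model observers), and the pixel-wise correlations between maps quantify the overlap reported in the abstract.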