Vision Sciences Society Annual Meeting Abstract | August 2004
Transmitting and decoding facial expressions of emotion
Author Affiliations
  • Marie L. Smith
    University of Glasgow, Scotland
  • Frederic Gosselin
    University of Montreal, Canada
  • Garrison W. Cottrell
    University of California, USA
  • Philippe G. Schyns
    University of Glasgow, Scotland
Journal of Vision, August 2004, Vol. 4, 909. https://doi.org/10.1167/4.8.909
Abstract

Accurate and efficient interpretation of facial expressions of emotion is essential for humans to interact socially with others. Facial expressions communicate information from which we can quickly infer the state of mind of our peers and adjust our behavior accordingly. Considering the face as a transmitter of emotion signals and the brain as a decoder, we expect minimal overlap in the specific information used for each expression. Here we characterize the information underlying the recognition of the six basic facial expressions (fear, anger, sadness, happiness, surprise, and disgust) and evaluate how well each expression is interpreted. Using the Bubbles method with human observers, and a model observer for benchmarking, we characterize the specific information subsets corresponding to diagnostic (decoded, human) and available (transmitted, model) information for each expression and for neutral. We found generally low correlations (m = .28, s = .14) among the available informative regions across expressions, with further de-correlation in the diagnostic regions of human observers (m = .12, s = .09). In particular, for human observers the informative regions for anger and fear were orthogonal to those of all other expressions. Furthermore, for each expression we determine how optimally human observers use the available information from a pixel-wise comparison of the human and model informative regions. The de-correlated information subsets of human observers can be considered optimized inputs with which the specific responses of brain structures to facial features transmitting emotion signals can be isolated.
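The abstract does not report implementation details, but the two analysis steps it names, revealing face information through Bubbles apertures and correlating the resulting informative-region maps pixel-wise across expressions, can be sketched as follows. This is a minimal illustration only: the image size, number of bubbles, aperture width, and the synthetic maps are assumptions, not the authors' stimuli or data.

```python
# Illustrative sketch, not the authors' code. Assumed parameters: 128x128 images,
# 20 Gaussian apertures of sigma = 10 px, synthetic informative-region maps.
import numpy as np


def bubbles_mask(shape, n_bubbles=20, sigma=10.0, rng=None):
    """Sum of randomly centered 2-D Gaussian apertures, clipped to [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for cy, cx in zip(rng.uniform(0, h, n_bubbles), rng.uniform(0, w, n_bubbles)):
        mask += np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)


def bubbles_stimulus(face, background=0.5, **kwargs):
    """Reveal the face through the apertures; show a uniform background elsewhere."""
    mask = bubbles_mask(face.shape, **kwargs)
    return mask * face + (1.0 - mask) * background


def pairwise_map_correlations(maps):
    """Pearson correlations between flattened informative-region maps.

    `maps`: dict mapping expression name -> 2-D array (e.g. a classification image).
    Returns a dict keyed by expression pairs.
    """
    names = list(maps)
    flat = {n: maps[n].ravel() for n in names}
    return {
        (a, b): float(np.corrcoef(flat[a], flat[b])[0, 1])
        for i, a in enumerate(names)
        for b in names[i + 1:]
    }


# Usage with synthetic data; real maps would come from Bubbles trials.
rng = np.random.default_rng(0)
face = rng.uniform(size=(128, 128))  # placeholder for a face image
stim = bubbles_stimulus(face, n_bubbles=20, sigma=10.0)
maps = {e: rng.uniform(size=(128, 128)) for e in
        ("fear", "anger", "sadness", "happiness", "surprise", "disgust", "neutral")}
print(pairwise_map_correlations(maps)[("fear", "anger")])
```

Averaging the off-diagonal entries of such a correlation table, separately for the human (diagnostic) and model (available) maps, would yield summary statistics of the kind reported above (m = .12 and m = .28, respectively).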

Smith, M. L., Gosselin, F., Cottrell, G. W., & Schyns, P. G. (2004). Transmitting and decoding facial expressions of emotion [Abstract]. Journal of Vision, 4(8):909, 909a, http://journalofvision.org/4/8/909/, doi:10.1167/4.8.909.