Vision Sciences Society Annual Meeting Abstract | December 2022
Journal of Vision, Volume 22, Issue 14 | Open Access
Facial expressions of threatening emotions show greater communicative robustness
Author Affiliations & Notes
  • Tobias Thejll-Madsen
    School of Psychology & Neuroscience, University of Glasgow
  • Robin A.A. Ince
    School of Psychology & Neuroscience, University of Glasgow
  • Oliver G.B. Garrod
    School of Psychology & Neuroscience, University of Glasgow
  • Philippe G. Schyns
    School of Psychology & Neuroscience, University of Glasgow
  • Rachael E. Jack
    School of Psychology & Neuroscience, University of Glasgow
  • Footnotes
    Acknowledgements: TTM: UK Research & Innovation [EP/S02266X/1]; REJ: European Research Council [759796], Economic & Social Research Council [ES/K001973/1]; PGS: Multidisciplinary University Research Initiative/Engineering & Physical Sciences Research Council [172046-01]; RAAI/PGS: Wellcome Trust [214120/Z/18/Z; 107802]
Journal of Vision December 2022, Vol.22, 4031. doi:https://doi.org/10.1167/jov.22.14.4031
© ARVO (1962-2015); The Authors (2016-present)

Abstract

Facial expressions are effective at communicating emotions partly due to their high signal variability. In line with communication theory, such signal variance could reflect evolutionary design that optimizes communication efficiency and success, for example via in-built degeneracy (different signals communicate the same message) and redundancy (similar signals communicate the same message; e.g., Hebets et al., 2016). However, such knowledge remains limited because current facial expression models are largely restricted to a few static, Western-centric signals. We address this knowledge gap by modelling dynamic facial expressions of the six classic basic emotions (happy, surprise, fear, anger, disgust, sad) in sixty individual participants from each of two cultures (Western European, East Asian) and characterizing their signal variance. Using the data-driven method of reverse correlation (e.g., see Jack et al., 2012), we agnostically generated facial expressions (i.e., combinations of dynamic Action Units, AUs) and asked participants to categorize 2,400 such stimuli according to one of the six emotions, or to select ‘other’. We then measured the statistical relationship between the AUs presented on each trial and each participant’s emotion responses using Mutual Information (e.g., Ince et al., 2017), thus producing 720 dynamic facial expression models (6 emotions × 60 participants × 2 cultures). Finally, we used information-theoretic analyses and Bayesian inference of population prevalence (Ince et al., 2021) to characterize signal variance within each emotion category and culture. Results showed that, in both cultures, high-threat emotions (e.g., anger, disgust) are associated with a broader set of AUs than low-threat emotions (e.g., happy, sad; see also Liu et al., 2021), suggesting that costly-to-miss signals have higher levels of in-built redundancy and degeneracy that increase communication efficiency and success in noisy real-world environments. Our results contribute to unravelling the complex system of facial expression communication, with implications for current theoretical accounts and the design of socially interactive digital agents.
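To illustrate the kind of per-participant analysis described above, the sketch below estimates the mutual information between each Action Unit's on/off pattern across trials and a participant's categorical emotion responses. It is a minimal, hypothetical example: it uses a simple discrete plug-in estimator and simulated data, whereas the study used the Gaussian-copula Mutual Information framework of Ince et al. (2017), and the variable names (au_present, responses) are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(X; Y) in bits for two discrete 1-D arrays."""
    x_vals, x_idx = np.unique(x, return_inverse=True)
    y_vals, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(x_vals), len(y_vals)))
    for i, j in zip(x_idx, y_idx):
        joint[i, j] += 1
    joint /= joint.sum()                   # joint distribution p(x, y)
    px = joint.sum(axis=1, keepdims=True)  # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)  # marginal p(y)
    nz = joint > 0                         # skip zero cells to avoid log(0)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Simulated stand-in data: 2,400 trials, a hypothetical set of 42 candidate AUs,
# 7 response options (the six emotions plus 'other');
# au_present[t, a] = 1 if AU a appeared in the stimulus on trial t.
rng = np.random.default_rng(0)
n_trials, n_aus, n_responses = 2400, 42, 7
au_present = rng.integers(0, 2, size=(n_trials, n_aus))
responses = rng.integers(0, n_responses, size=n_trials)

# MI (in bits) between each AU's presence/absence and the emotion categorizations;
# AUs with high MI are those that systematically drive this participant's responses.
mi_per_au = np.array([mutual_information(au_present[:, a], responses)
                      for a in range(n_aus)])
print(mi_per_au.round(3))
```

In the study's pipeline, such per-AU statistical relationships underpin each participant's dynamic facial expression model for a given emotion, and Bayesian inference of population prevalence (Ince et al., 2021) is then used to characterize how AU patterns vary across participants within each emotion category and culture.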
