September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2019
Setting the Record Straight: Dynamic but not Static Facial Expressions are Better Recognized
Author Affiliations & Notes
  • Anne-Raphaelle Richoz
    Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Valentina Ticcinelli
    Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Pauline Schaller
    Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Junpeng Lao
    Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Roberto Caldara
    Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
Journal of Vision September 2019, Vol. 19, 92c. doi: https://doi.org/10.1167/19.10.92c
      © ARVO (1962-2015); The Authors (2016-present)
Abstract

Humans communicate internal social states through complex facial movement signals that have been shaped by biological and evolutionary constraints. The temporal dynamics of facial expressions of emotion are finely optimized to rapidly transmit orthogonal signals to the decoder (Jack et al., 2014). While real-life social interactions are flooded with dynamic signals, current knowledge on the recognition of facial expressions arises essentially from studies using static face images. This experimental bias might stem from a large and consistent body of evidence reporting that young adults do not benefit from the richer dynamic over static information, whereas children, the elderly, and clinical populations do (Richoz et al., 2015; 2018). These counterintuitive observations in young adults suggest the existence of a near-optimal facial expression decoding system, insensitive to dynamic cues. Surprisingly, no study has yet tested the idea that such evidence might be rooted in a ceiling effect. To this end, we used the QUEST threshold-seeking algorithm to determine the perceptual thresholds of 70 healthy young adults recognizing static and dynamic versions of the six basic facial expressions of emotion, while parametrically and randomly varying their phase signals (0–100%), normalized for amplitude and spectra. We observed the expected recognition profiles, with happiness requiring the minimum signal to be accurately categorized and fear the maximum. Overall, dynamic facial expressions of emotion were all better decoded when presented with low phase signals (peaking at 20%). With the exception of fear, this advantage gradually decreased with increasing phase signals, disappearing at 40% and reaching a ceiling effect at 70%. Our data show that facial movements play a critical role in our ability to reliably identify the emotional states of others from the suboptimal visual signals typical of everyday-life interactions.
Dynamic signals are more effective and sensitive than static inputs for decoding all facial expressions of emotion, for all human observers.
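The QUEST procedure named in the abstract (Watson & Pelli, 1983) is an adaptive Bayesian staircase: it maintains a posterior distribution over the observer's threshold, places each trial at the current posterior mode, and updates the posterior after every response via a psychometric function. The sketch below illustrates the general idea only; the psychometric parameters, grid, prior, and trial count are illustrative assumptions, not the values used in the study.

```python
import math
import random

def weibull_p(intensity, threshold, slope=3.5, guess=1/6, lapse=0.01):
    # Probability of a correct response at a given signal intensity (0-1),
    # modeled with a Weibull psychometric function. guess = 1/6 reflects
    # chance level in a six-alternative expression-categorization task;
    # all parameter values here are illustrative assumptions.
    p = 1 - math.exp(-10 ** (slope * (intensity - threshold)))
    return guess + (1 - guess - lapse) * p

def quest_threshold(simulated_true=0.20, n_trials=60, seed=1):
    """Estimate a simulated observer's threshold with a QUEST-style staircase."""
    random.seed(seed)
    # Discrete grid of candidate thresholds (signal proportion, 0-1)
    grid = [i / 200 for i in range(1, 200)]
    # Gaussian prior over the threshold, centered on an initial guess
    prior_mean, prior_sd = 0.5, 0.25
    post = [math.exp(-0.5 * ((t - prior_mean) / prior_sd) ** 2) for t in grid]
    for _ in range(n_trials):
        # QUEST's placement rule: test at the current posterior mode
        test = grid[post.index(max(post))]
        # Simulated observer responds according to the same psychometric model
        correct = random.random() < weibull_p(test, simulated_true)
        # Bayesian update of the posterior, then renormalize
        post = [w * (weibull_p(test, t) if correct else 1 - weibull_p(test, t))
                for w, t in zip(post, grid)]
        total = sum(post)
        post = [w / total for w in post]
    return grid[post.index(max(post))]
```

With a simulated observer whose true threshold is 0.20, the posterior mode homes in on the neighborhood of that value within a few dozen trials; in the actual experiment, one such staircase would be run per expression and presentation mode (static vs. dynamic) to obtain comparable perceptual thresholds.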
