August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2023
Quantifying dynamic facial expression recognition thresholds in prosopagnosia
Author Affiliations
  • Fanny Poncet
    University of Fribourg
  • Lisa Stacchi
    University of Fribourg
  • Anne-Raphaëlle Richoz
    University of Fribourg
  • Roberto Caldara
    University of Fribourg
Journal of Vision August 2023, Vol.23, 5234. doi:https://doi.org/10.1167/jov.23.9.5234
Abstract

Interest in the study and comparison of dynamic versus static facial expression recognition (FER) has grown significantly in recent years. Brain-lesion and neuroimaging studies have shown that static and dynamic FER rely on distinct neural pathways. Importantly, while elderly observers and prosopagnosic patients have difficulty recognizing static expressions, their performance improves significantly with dynamic stimuli. However, whether this dynamic advantage is fully comparable in healthy and damaged brains remains to be clarified. To this aim, we developed a new tool that parametrically manipulates the quantity of phase signal in dynamic facial expressions while normalizing luminance and contrast across video frames. The prosopagnosic patient PS and 15 age-matched healthy controls performed FER with dynamic facial expressions sampling the 0% to 100% signal space of the six basic expressions (anger, disgust, fear, happiness, sadness, and surprise). We then implemented a threshold-seeking algorithm to determine precisely how much signal participants needed to achieve a given level of performance. Interestingly, we did not observe strong differences in FER performance between PS and the controls at the highest signal levels. Differences emerged markedly, however, at lower signal levels. In the mid-range, PS showed an overall decrease in recognition performance, especially for certain expressions (sadness, fear). More generally, the emotion recognition trajectories showed that all the age-matched controls outperformed PS, with FER thresholds lower (i.e., better) than those of PS. Altogether, these observations provide critical insights into the healthy and impaired FER systems in ageing and prosopagnosia. In addition, this tool offers a sensitive new metric for evaluating FER in both healthy and impaired populations.
