Abstract
The comparison of dynamic versus static facial expression recognition (FER) has gained significant interest in recent years. Brain lesion and neuroimaging studies have shown that static and dynamic FER rely on distinct neural pathways. Importantly, while elderly observers and prosopagnosic patients have difficulty recognizing static expressions, their performance improves significantly with dynamic stimuli. However, whether this dynamic advantage is fully comparable in healthy and damaged brains remains to be clarified. To this aim, we developed a new tool that parametrically manipulates the quantity of phase signal in dynamic facial expressions while normalizing luminance and contrast across video frames. The prosopagnosic patient PS and 15 age-matched healthy controls performed FER with dynamic facial expressions sampling the 0% to 100% signal space of the six basic expressions (anger, disgust, fear, happiness, sadness, and surprise). We then implemented a threshold-seeking algorithm to determine precisely how much signal participants needed to achieve a given performance level. Interestingly, we did not observe strong differences in FER performance between PS and the controls at the highest signal levels. However, marked differences emerged at lower signal levels. In the mid-range, PS showed an overall decrease in recognition performance, especially for some expressions (sadness, fear). More generally, the emotion recognition trajectories showed that all age-matched controls outperformed PS, reaching FER thresholds that were lower (i.e., better) than those of PS. Altogether, these observations provide critical insights into healthy and impaired FER in elderly observers and in prosopagnosia. In addition, this tool offers a sensitive new metric for the evaluation of FER in healthy and clinical populations.
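
The abstract does not detail how the stimulus tool is implemented. As a minimal sketch of one plausible approach, the Python fragment below blends each video frame's Fourier phase with a fixed random-phase field (leaving the amplitude spectrum intact) and then equates mean luminance and RMS contrast across frames; the function names, the reuse of a single noise field per clip, and the normalization targets are illustrative assumptions, not the authors' actual procedure.

```python
import numpy as np

def phase_signal_frame(frame, signal, noise_phase):
    """Blend a frame's Fourier phase with a fixed random-phase field.

    signal = 1.0 keeps the original phase (full expression signal),
    signal = 0.0 replaces it entirely with noise; the amplitude
    spectrum is left untouched in both cases.
    """
    spectrum = np.fft.fft2(frame)
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    # Weighted circular interpolation between original and noise phase.
    blended = np.angle(signal * np.exp(1j * phase)
                       + (1.0 - signal) * np.exp(1j * noise_phase))
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * blended)))

def normalize_frame(frame, target_mean=0.5, target_std=0.15):
    """Equate mean luminance and RMS contrast across frames
    (target values are illustrative assumptions)."""
    z = (frame - frame.mean()) / (frame.std() + 1e-8)
    return np.clip(z * target_std + target_mean, 0.0, 1.0)

# Example: degrade every frame of a clip with the same noise field,
# so the phase noise does not flicker from frame to frame.
rng = np.random.default_rng(0)
clip = rng.random((30, 128, 128))            # placeholder 30-frame clip
noise = rng.uniform(-np.pi, np.pi, clip.shape[1:])
degraded = np.stack([normalize_frame(phase_signal_frame(f, 0.4, noise))
                     for f in clip])         # e.g., 40% phase signal
```

Under these assumptions, sweeping the `signal` weight from 0.0 to 1.0 would generate the 0% to 100% signal space referred to in the abstract, with a threshold-seeking procedure then selecting which signal levels to present on each trial.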