Abstract
We live in a world of rich, dynamic multisensory signals. Normal-hearing individuals rapidly integrate multimodal information to effectively decode biologically relevant social signals, particularly from faces. However, it remains unclear how representations of facial expressions of emotion develop in the absence of the auditory sensory channel, and whether they are as effective as those of hearing individuals. To this end, we performed four psychophysical studies on observers with early-onset severe-to-profound deafness and normal-hearing controls. We first examined their ability to recognize the six basic facial expressions (anger, disgust, fear, happiness, sadness, and surprise) using (1) static and (2) dynamic stimuli. We then applied an adaptive maximum-likelihood procedure to quantify (3) the expression intensity (using neutral-to-expression morphs) and (4) the signal level (using noise-to-face images) required for observers to achieve expression recognition with 75% accuracy. Deaf observers showed normal categorization profiles and confusions across expressions (e.g., confusing surprise with fear), although they required greater intensity and signal from faces than controls. Notably, however, deaf observers showed a significantly larger advantage when decoding dynamic compared to static facial expressions, reaching performance comparable to that of normal-hearing controls. Our data show that static visual representations of facial expressions of emotion are better (de)coded by hearing than by deaf individuals. However, this effect disappears during the more ecologically valid decoding of dynamic facial expressions, revealing a critical sensitivity to motion information in the deaf population. Altogether, these findings offer novel insights into the processing of facial expressions of emotion in deafness and call into question conclusions drawn in this population from static images alone.
Meeting abstract presented at VSS 2016
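The adaptive maximum-likelihood procedure mentioned above belongs to the family of QUEST-style staircases that converge on a target threshold. As a minimal, purely illustrative sketch (not the study's actual code), the Python snippet below assumes a Weibull psychometric function with a guess rate of 1/6 for the six-alternative expression task and hypothetical slope and lapse parameters; it places each trial at the current maximum-likelihood threshold estimate and then reads off the stimulus level predicted to yield 75% correct.

```python
import numpy as np

class AdaptiveMLThreshold:
    """Illustrative adaptive maximum-likelihood (QUEST-style) staircase that
    converges on the stimulus level yielding ~75% correct in a 6-AFC task.
    Slope and lapse values are assumptions, not the study's parameters."""

    def __init__(self, levels, slope=3.5, guess=1/6, lapse=0.02):
        self.levels = levels                   # candidate threshold values
        self.slope, self.guess, self.lapse = slope, guess, lapse
        self.log_like = np.zeros_like(levels)  # flat prior (log space)

    def p_correct(self, x, threshold):
        """Weibull psychometric function: probability of a correct response."""
        return self.guess + (1 - self.guess - self.lapse) * \
            (1 - np.exp(-(x / threshold) ** self.slope))

    def next_level(self):
        """Place the next trial at the current maximum-likelihood threshold."""
        return self.levels[np.argmax(self.log_like)]

    def update(self, x, correct):
        """Likelihood update after one trial at stimulus level x."""
        p = self.p_correct(x, self.levels)
        self.log_like += np.log(p if correct else 1 - p)

    def level_at_75(self):
        """Stimulus level at which the fitted function predicts 75% correct."""
        t = self.next_level()
        crit = (0.75 - self.guess) / (1 - self.guess - self.lapse)
        return t * (-np.log(1 - crit)) ** (1 / self.slope)


# Simulated observer with a (hypothetical) true threshold of 0.4
rng = np.random.default_rng(0)
stair = AdaptiveMLThreshold(np.linspace(0.01, 1.0, 200))
for _ in range(60):
    x = stair.next_level()
    stair.update(x, rng.random() < stair.p_correct(x, 0.4))
print(f"Estimated 75%-correct level: {stair.level_at_75():.3f}")
```

In practice the same scaffold could be applied to either manipulation described in the abstract, with the stimulus level interpreted as morph intensity (neutral-to-expression) or as signal level (noise-to-face); the specific implementation used in the study is not reported here.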