Vision Sciences Society Annual Meeting Abstract | September 2016
Mapping the recognition of facial expression of emotions in deafness
Author Affiliations
  • Junpeng Lao
    Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Anne-Raphaëlle Richoz
    Department of Psychology, University of Fribourg, Fribourg, Switzerland
  • Chloé Stoll
    Laboratoire de Psychologie et Neurocognition (CNRS), Université Pierre Mendès-France, Grenoble, France
  • Olivier Pascalis
    Laboratoire de Psychologie et Neurocognition (CNRS), Université Pierre Mendès-France, Grenoble, France
  • Matthew Dye
    Rochester Institute of Technology/National Technical Institute for the Deaf, Rochester, New York, USA
  • Roberto Caldara
    Department of Psychology, University of Fribourg, Fribourg, Switzerland
Journal of Vision September 2016, Vol.16, 1391. https://doi.org/10.1167/16.12.1391

      Junpeng Lao, Anne-Raphaëlle Richoz, Chloé Stoll, Olivier Pascalis, Matthew Dye, Roberto Caldara; Mapping the recognition of facial expression of emotions in deafness. Journal of Vision 2016;16(12):1391. https://doi.org/10.1167/16.12.1391.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

We live in a world of rich, dynamic multisensory signals. Normal-hearing individuals rapidly integrate multimodal information to effectively decode biologically relevant social signals, in particular from faces. However, it remains unclear how the representations of facial expression of emotions develop in the absence of the auditory sensory channel, and whether they are as effective as those of hearing individuals. To this end, we performed four psychophysical studies on observers with early-onset severe-to-profound deafness and on normal-hearing controls. We first examined their ability to recognize the six basic facial expressions (anger, disgust, fear, happiness, sadness, and surprise) using (1) static and (2) dynamic stimuli. We then applied an adaptive maximum-likelihood procedure to quantify (3) the intensity (using neutral-to-expression morphs) and (4) the signal level (using noise-to-face images) required for observers to achieve expression recognition with 75% accuracy. Deaf observers showed typical categorization profiles and confusions across expressions (e.g., confusing surprise with fear), despite requiring more intensity and signal from faces than the controls. Notably, however, deaf observers showed a significantly larger advantage when decoding dynamic compared to static facial expressions, reaching performance comparable to that of normal-hearing controls. Our data show that static visual representations of facial expression of emotions are better (de)coded by hearing than by deaf individuals. However, this effect disappears during the more ecologically valid decoding of dynamic facial expressions, revealing a critical sensitivity to motion information in the deaf population. Altogether, these findings offer novel insights into the processing of facial expression of emotions in deafness and question the conclusions obtained in this population with the use of static images only.
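As a rough illustration of the threshold procedure described above, the Python sketch below simulates an adaptive maximum-likelihood (QUEST-style) staircase that converges on the stimulus intensity yielding 75% correct responses. It is a minimal sketch only: the psychometric-function form (Weibull), its slope and lapse rate, the trial count, the stimulus scale, and the simulated observer are illustrative assumptions, not parameters or code from the study.

import numpy as np

rng = np.random.default_rng(0)

# Candidate thresholds on a 0-1 stimulus scale
# (e.g., morph level from neutral to full expression, or signal level in noise).
candidates = np.linspace(0.01, 1.0, 200)

def p_correct(intensity, threshold, slope=3.0, guess=1/6, lapse=0.02):
    # Weibull psychometric function: probability of a correct response at a
    # given intensity, for a candidate threshold. guess = 1/6 reflects a
    # six-alternative expression-categorization task.
    p = 1.0 - np.exp(-(intensity / threshold) ** slope)
    return guess + (1.0 - guess - lapse) * p

def intensity_for_target(threshold, target=0.75, slope=3.0, guess=1/6, lapse=0.02):
    # Intensity at which the fitted Weibull predicts `target` proportion correct.
    q = (target - guess) / (1.0 - guess - lapse)
    return threshold * (-np.log(1.0 - q)) ** (1.0 / slope)

def simulated_observer(intensity, true_threshold=0.4):
    # Stand-in for a real participant; used only to demonstrate the procedure.
    return rng.random() < p_correct(intensity, true_threshold)

log_like = np.zeros_like(candidates)          # running log-likelihood per candidate
estimate = candidates[len(candidates) // 2]   # start from a mid-range guess

for trial in range(60):
    # Present the next stimulus where the current estimate predicts 75% accuracy.
    x = float(np.clip(intensity_for_target(estimate), candidates[0], candidates[-1]))
    correct = simulated_observer(x)
    # Update the likelihood of every candidate threshold given this response.
    p = p_correct(x, candidates)
    log_like += np.log(p) if correct else np.log(1.0 - p)
    estimate = candidates[np.argmax(log_like)]  # maximum-likelihood estimate

print(f"Estimated 75%-accuracy threshold: {estimate:.3f}")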

Meeting abstract presented at VSS 2016
