Vision Sciences Society Annual Meeting Abstract | September 2011
Orientation Information in Encoding Facial Expressions
Author Affiliations
  • Deyue Yu
    School of Optometry, University of California, Berkeley, USA
  • Andrea Chai
    School of Optometry, University of California, Berkeley, USA
  • Susana Chung
    School of Optometry, University of California, Berkeley, USA
Journal of Vision September 2011, Vol. 11, 604. https://doi.org/10.1167/11.11.604
Abstract

Previous research has shown that we use different regions of a face to categorize different facial expressions, e.g., the mouth region for identifying happy faces, and the eyebrows and eyes for identifying angry faces. These findings imply that spatial information along or close to the horizontal orientation might be more useful for facial expression recognition than information along other orientations. In this study, we examined how performance for recognizing facial expressions depends on the spatial information along different orientations. Fifteen normally sighted young observers recognized four facial expressions - angry, fear, happy and sad - with 140 different images for each expression. An orientation filter (bandwidth = 23°) was applied to restrict the orientation information in the face images, with the center of the filter ranging from 0° (horizontal) to 150° in steps of 30°. Accuracy for identifying facial expressions filtered with each of these six filters, as well as in the unfiltered condition, was measured at an exposure duration of 53 ms. We computed recognition accuracy as d′ to separate discriminability from response bias. For all four facial expressions, recognition performance was virtually identical for filter orientations of −30° (i.e., 150°), 0° (horizontal) and 30°. Beyond ±30°, performance declined systematically as the filter orientation approached 90° (vertical). Averaged across observers and the relevant filter conditions, d′ was 2.85 for the best performance (filter orientations between ±30°) and 0.61 for the worst (90° orientation), cf. 3.43 for the unfiltered condition. Normalized to performance in the unfiltered condition, performance around the horizontal orientation was highest for identifying happy faces and lowest for sad faces. At the 90° filter orientation, performance was highest for identifying fearful faces and lowest for happy faces. We conclude that spatial information around the horizontal orientation, which captures the primary changes in facial features across expressions, is the most important for recognizing facial expressions, at least for people with normal vision.
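
To make the filtering step concrete, here is a minimal sketch (in Python with NumPy; not the authors' code) of a Fourier-domain orientation filter like the one described above: it keeps only the spatial-frequency components whose orientation falls within a fixed band around a chosen center. The sharp-edged (ideal) band, the handling of the 0°/180° wrap-around, and all names are illustrative assumptions; the filter actually used in the study (e.g., its edge profile) may differ.

import numpy as np

def orientation_filter(image, center_deg, bandwidth_deg=23.0):
    """Pass only image structure oriented within +/- bandwidth/2 of center_deg."""
    spectrum = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0])[:, None]  # vertical frequencies
    fx = np.fft.fftfreq(image.shape[1])[None, :]  # horizontal frequencies
    theta = np.degrees(np.arctan2(fy, fx))        # orientation of each component
    # Image structure at angle a carries its Fourier energy at a + 90 deg.
    freq_center = center_deg + 90.0
    # Angular distance to the band center, wrapped to [0, 90]
    # (orientation is periodic with period 180 deg).
    dist = np.abs((theta - freq_center + 90.0) % 180.0 - 90.0)
    mask = (dist <= bandwidth_deg / 2.0).astype(float)
    mask[0, 0] = 1.0                              # keep the DC term (mean luminance)
    return np.real(np.fft.ifft2(spectrum * mask))

With center_deg = 0 this passes horizontally oriented facial structure (mouth, eyes, eyebrows); with center_deg = 90 it passes only vertically oriented structure.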
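
The abstract reports accuracy as d′ rather than percent correct. A common way to compute it (a sketch only; the abstract does not specify the authors' exact procedure, so treating each expression as "signal vs. the rest" and applying a log-linear correction for extreme rates are assumptions) is d′ = z(hit rate) − z(false-alarm rate), where z is the inverse of the standard normal CDF:

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' from raw trial counts, with a log-linear (add-0.5) correction."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one expression (purely illustrative, not the
# study's data): 110 hits on 140 target trials, 30 false alarms on 420
# non-target trials.
print(d_prime(110, 30, 30, 390))

Unlike percent correct, this measure is unaffected by an observer's overall tendency to favor one response category, which is why it separates discriminability from response bias.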

Supported by NIH grants R01-EY016093 and R01-EY012810, and the URAP Summer Apprenticeship Program (UC Berkeley).