October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract | October 2020
Facial Expression Information in Humans and DCNNs
Author Affiliations & Notes
  • Y. Ivette Colón
    The University of Texas at Dallas
  • Connor Parde
    The University of Texas at Dallas
  • Carlos Castillo
    University of Maryland
  • Jacqueline Cavazos
    The University of Texas at Dallas
  • Alice O'Toole
    The University of Texas at Dallas
  • Footnotes
Acknowledgements: National Eye Institute Grant 1R01EY029692-01 to A. O’T.
Journal of Vision October 2020, Vol. 20, 600. doi: https://doi.org/10.1167/jov.20.11.600

      Y. Ivette Colón, Connor Parde, Carlos Castillo, Jacqueline Cavazos, Alice O'Toole; Facial Expression Information in Humans and DCNNs. Journal of Vision 2020;20(11):600. https://doi.org/10.1167/jov.20.11.600.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Facial expression perception in the wild requires an ability to recognize emotional signals from different views. Some facial expressions (e.g., happiness) are recognized more accurately than others (e.g., fear) from faces viewed frontally. However, results on facial expression perception from non-frontal viewpoints are limited and non-convergent (Matsumoto & Hwang, 2011; Hess et al., 2007). We investigated expression classification over viewpoint change in an experiment that incorporated human and machine perception. The goal was to test the effects of viewpoint on expression perception and to examine the role of the visual stimulus, via machine perception, in supporting classification. We tested expression classification for human subjects (N = 160) and a deep convolutional neural network (DCNN) trained for face identification (Ranjan et al., 2018). DCNNs model ventral visual stream processing and are known to retain expression and viewpoint information about face images (Colón et al., 2018; Hill et al., 2019). The test employed the Karolinska Directed Emotional Faces database (KDEF), a controlled dataset of expressions containing 4,900 images of 70 actors posing 7 facial expressions (happy, sad, angry, surprised, fearful, disgusted, neutral) photographed from 5 viewpoints (90- and 45-degree left and right profiles, and frontal) (Lundqvist et al., 1998). Both humans and the DCNN replicated findings of better recognition of some expressions than others from frontal faces (e.g., happy > fear; humans, p < .001), and both showed equivalent classification across viewpoints. For humans, however, there was a strong advantage for detecting angry faces from the frontal viewpoint (viewpoint-expression interaction, p < .01). There was no such interaction in the DCNN, indicating that the human advantage for detecting angry faces from the front cannot be accounted for completely by visual features. This suggests that the high accuracy humans show for detecting angry faces from the front may be due to facial expression processing independent of the ventral visual stream (e.g., dorsal or subcortical pathways).
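The machine-perception side of this design can be illustrated with a simple analysis pipeline. The Python sketch below is not the authors' code; it assumes that expression classification is performed on face-identification embeddings (e.g., from a network like Ranjan et al., 2018) with a linear classifier, cross-validated across actors, and then scored separately for each expression-by-viewpoint cell. The embedding, label, and viewpoint arrays are hypothetical placeholders standing in for features extracted from KDEF images.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_predict

rng = np.random.default_rng(0)

# Placeholder data standing in for DCNN embeddings of KDEF images
# (70 actors x 7 expressions x 5 viewpoints; embedding size is assumed).
n_images, n_dims = 2450, 512
embeddings = rng.normal(size=(n_images, n_dims))
expressions = rng.integers(0, 7, size=n_images)   # 7 expression labels
viewpoints = rng.integers(0, 5, size=n_images)    # 5 viewpoints
actors = rng.integers(0, 70, size=n_images)       # 70 actor identities

# Linear expression classifier on the identity embeddings, cross-validated
# so that no actor appears in both the training and test folds.
clf = LogisticRegression(max_iter=1000)
predictions = cross_val_predict(
    clf, embeddings, expressions, groups=actors, cv=GroupKFold(n_splits=5)
)

# Classification accuracy for each expression-by-viewpoint cell: the quantity
# compared against human accuracy when testing for a viewpoint-expression interaction.
accuracy = np.zeros((7, 5))
for e in range(7):
    for v in range(5):
        cell = (expressions == e) & (viewpoints == v)
        accuracy[e, v] = (predictions[cell] == expressions[cell]).mean()

print(np.round(accuracy, 2))

With real embeddings in place of the random placeholders, a flat accuracy profile across viewpoints for every expression in this table (no interaction) would correspond to the DCNN result reported above.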
