Vision Sciences Society Annual Meeting Abstract  |   September 2019
Facial Expression Information in Deep Convolutional Neural Networks Trained for Face Identification
Author Affiliations & Notes
  • Y. Ivette Colon
    Behavioral and Brain Sciences, The University of Texas at Dallas
  • Matthew Q Hill
    Behavioral and Brain Sciences, The University of Texas at Dallas
  • Connor J Parde
    Behavioral and Brain Sciences, The University of Texas at Dallas
  • Carlos D Castillo
    University of Maryland Institute for Advanced Computer Studies
  • Rajeev Ranjan
    University of Maryland Institute for Advanced Computer Studies
  • Alice J O’Toole
    Behavioral and Brain Sciences, The University of Texas at Dallas
Journal of Vision September 2019, Vol.19, 93b. doi:https://doi.org/10.1167/19.10.93b
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Deep convolutional neural networks (DCNNs) are state-of-the-art learning algorithms inspired by the primate visual system (e.g., Fukushima, 1988; Krizhevsky et al., 2012). Face identification DCNNs produce a top-layer representation that supports face identification across changes in pose, illumination, and expression. Counter-intuitively, this representation also retains information that is not relevant for identification (e.g., viewpoint, illumination) (Parde et al., 2016). We asked whether DCNN identity codes also retain information about facial expression. Facial expressions are a type of identity-irrelevant information, though there are opposing neuropsychological theories about the independence of facial identity and facial expression processing (Fox and Barton, 2007; Calder and Young, 2005). Using the Karolinska database (KDEF), a controlled dataset of expressions (Lundqvist et al., 1998), we examined whether the top-layer features of a high-performing DCNN trained for face recognition (Ranjan et al., 2017) could be used to classify expression. The KDEF dataset contains 4,900 images of 70 actors posing 7 facial expressions (happy, sad, angry, surprised, fearful, disgusted, and neutral), photographed from 5 viewpoints (90- and 45-degree left and right profiles, and frontal). All images were processed by the DCNN to generate 512-element face representations comprising the top-layer DCNN features. Linear discriminant analysis revealed that the tested expressions were predicted accurately from the features, with happy expressions predicted most accurately (72% correct), followed by surprise (67.5%), disgust (67.5%), anger (65%), neutral (55.5%), sad (51%), and fearful (39%); (chance = 14.29%). We also examined the interaction between viewpoint and expression using a cosine similarity measure between the representations of different images in the dataset. Heatmaps and histograms of inter-image similarity indicated that there is a higher cost to identification with viewpoint change than with expression change. These results reveal a potential link between facial identity and facial expression processing and encoding in artificial neurons.
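The analysis pipeline described above — linear discriminant analysis over 512-element top-layer DCNN features to classify expression, plus cosine similarity between face representations — can be sketched as follows. This is a minimal illustration with synthetic stand-in features, not the authors' code or the KDEF data; class counts and noise levels are arbitrary assumptions.

```python
# Hypothetical sketch of the abstract's analysis. The feature vectors are
# random stand-ins for real DCNN top-layer outputs; only the shapes (512-d
# representations, 7 expression classes) follow the abstract.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_expressions = 7     # happy, sad, angry, surprised, fearful, disgusted, neutral
n_per_class = 100     # arbitrary; KDEF itself has 4,900 images
n_features = 512      # size of the DCNN top-layer face representation

# Synthetic "identity features" with a small per-expression mean shift,
# mimicking expression information residing in an identity code.
class_means = rng.normal(0.0, 1.0, (n_expressions, n_features))
X = np.vstack([rng.normal(class_means[c], 4.0, (n_per_class, n_features))
               for c in range(n_expressions)])
y = np.repeat(np.arange(n_expressions), n_per_class)

# Linear discriminant analysis: predict expression from the features.
lda = LinearDiscriminantAnalysis().fit(X, y)
accuracy = lda.score(X, y)          # chance = 1/7 ≈ 14.29%

# Cosine similarity between two face representations, as used in the
# abstract to compare images across viewpoint and expression changes.
def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cosine_similarity(X[0], X[1])
```

With real DCNN features the same two measurements — classification accuracy per expression and the distribution of pairwise cosine similarities — yield the accuracies and heatmaps reported above.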

Acknowledgement: This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2014-14071600012. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. We thank NVIDIA for donating the K40 GPU used in this work.