Vision Sciences Society Annual Meeting Abstract  |   September 2018
Social Networks: Analyzing Social Information in Deep Convolutional Neural Networks Trained for Face Identification
Author Affiliations
  • Connor Parde
    School of Behavioral and Brain Sciences, The University of Texas at Dallas
  • Ying Hu
    School of Behavioral and Brain Sciences, The University of Texas at Dallas
  • Carlos Castillo
    Institute for Advanced Computer Studies, University of Maryland
  • Swami Sankaranarayanan
    Institute for Advanced Computer Studies, University of Maryland
  • Alice O'Toole
    School of Behavioral and Brain Sciences, The University of Texas at Dallas
Journal of Vision September 2018, Vol.18, 1342. doi:10.1167/18.10.1342
Abstract

The state of the art in face identification has improved markedly with the development of deep convolutional neural networks (DCNNs) trained on large datasets of face images. We asked whether DCNNs trained for face identification also retain information useful for modeling the social and personality inferences people make spontaneously from faces. Participants (n=80) rated 280 frontal faces on a diverse set of 18 social traits drawn from the Big Five Factors of Personality (Gosling, Rentfrow & Swann, 2003): openness, conscientiousness, extroversion, agreeableness, and neuroticism.

We predicted the human-assigned social trait ratings for each image from the top-level features produced by a DCNN trained for face recognition, using a cross-validation method to train linear classifiers. This DCNN (Sankaranarayanan et al., 2016) was trained on 494,414 images of 10,575 identities and consisted of seven layers (19.8 million parameters). At the top level, the network produces 512 features for each face image. These top-level features predicted human-assigned social trait profiles (i.e., vectors of trait ratings) well (average cosine similarity between vectors = 0.53, p < 0.001). To determine which traits drove trait-profile estimation accuracy, we tested predictions for individual traits by measuring the error between human-assigned trait ratings and the DCNN-predicted traits. All of the traits were predicted reliably (Bonferroni-corrected alpha level = .00225). Next, we tested whether trait information could be predicted from DCNN features of profile-view images (90 degrees) of each identity. The results indicated a robust representation of traits across changes in viewpoint (p < .001).

We conclude that social trait information is well represented at the top level of DCNNs trained for face recognition. This suggests that the information needed for face identification is not domain-specific and can be leveraged to solve a range of face-perception tasks.
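The cross-validated linear-prediction analysis described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the ridge regularization, fold count, and all variable names are assumptions, and random arrays stand in for the real 512-dimensional DCNN features and 18-trait human ratings (so the resulting similarity will hover near zero rather than the reported 0.53).

```python
# Sketch of cross-validated linear prediction of trait profiles from DCNN features.
# Synthetic data stands in for real features/ratings; ridge penalty is an assumption.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_faces, n_features, n_traits = 280, 512, 18
X = rng.normal(size=(n_faces, n_features))  # top-level DCNN features per face
Y = rng.normal(size=(n_faces, n_traits))    # human-assigned ratings on 18 traits

def cosine(a, b):
    """Cosine similarity between two trait-profile vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sims = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Fit one multi-output linear model on the training folds
    model = Ridge(alpha=1.0).fit(X[train], Y[train])
    Y_hat = model.predict(X[test])
    # Compare each predicted trait profile to the human-assigned profile
    sims.extend(cosine(Y_hat[i], Y[test][i]) for i in range(len(test)))

print(f"mean profile cosine similarity: {np.mean(sims):.2f}")
```

With the real features and ratings, the mean of `sims` corresponds to the profile-level accuracy reported above; per-trait accuracy would instead compare columns of `Y_hat` and `Y` within each fold.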

Meeting abstract presented at VSS 2018
