Journal of Vision, September 2018, Volume 18, Issue 10 (Open Access)
Vision Sciences Society Annual Meeting Abstract
Face recognition in humans and machines
Author Affiliations
  • Naphtali Abudarham
    School of Psychological Sciences, Tel Aviv University
  • Lior Shkiller
    School of Psychological Sciences, Tel Aviv University
  • Galit Yovel
School of Psychological Sciences, Tel Aviv University; Sagol School of Neuroscience, Tel Aviv University
Journal of Vision September 2018, Vol. 18, 156.

Deep neural networks (DNNs) have recently reached human-level face recognition in unconstrained settings. However, most studies comparing human and machine recognition have focused on overall performance, and little is known about the nature of the face representations that humans and machines generate. In the current study, we compared human and machine similarity ratings of faces in which we systematically changed different features. In previous studies, we found that features for which humans show high perceptual sensitivity (PS) to differences are more important for face recognition than features for which humans show low perceptual sensitivity. We also found that these high-PS features vary less across changes in appearance, making them more useful for recognition under changing conditions. In this study, faces were manipulated to differ in either high-PS or low-PS features, and we measured the distances between original and changed faces using a DNN algorithm (learned features), a traditional local binary patterns (LBP) algorithm (engineered features), and human similarity ratings. We found that both DNNs and humans rated faces that differed in high-PS features as more dissimilar than faces that differed in low-PS features, whereas the engineered-feature algorithm produced similar ratings for both types of changes. Taken together, our findings suggest that DNNs trained on unconstrained images produce an internal face representation similar to that of humans, relying for recognition on a subset of facial features that are invariant across different appearances of the same individual, whereas the engineered-feature algorithm distributes its weights more evenly across all the information in the face.
We conclude that training with unconstrained faces, in humans and DNNs, biases the representation of faces to a similar subset of facial features that support face recognition across different appearances.
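The contrast reported above can be sketched in toy form. The snippet below is purely illustrative and is not the authors' code, data, or models: it invents a six-dimensional "face" vector in which the first three dimensions stand in for high-PS features and the last three for low-PS features, then compares a "learned" representation that weights the high-PS dimensions heavily against an "engineered" representation that weights all dimensions uniformly. All feature values and weights are made up for the illustration.

```python
import math

def euclidean(a, b):
    """Plain Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def weighted_distance(a, b, w):
    """Euclidean distance after scaling each dimension by a weight."""
    return euclidean([x * wi for x, wi in zip(a, w)],
                     [x * wi for x, wi in zip(b, w)])

# Toy 6-dimensional face vector; dims 0-2 play the role of high-PS
# features, dims 3-5 of low-PS features (hypothetical values).
original = [0.5] * 6

# Changes of equal physical magnitude applied to high-PS vs. low-PS dims.
changed_high_ps = [0.9, 0.9, 0.9, 0.5, 0.5, 0.5]
changed_low_ps  = [0.5, 0.5, 0.5, 0.9, 0.9, 0.9]

# "Learned" (DNN-like) weights emphasise the high-PS dims; "engineered"
# (LBP-like) weights are uniform. Both weight vectors are invented.
learned_w    = [3.0, 3.0, 3.0, 0.3, 0.3, 0.3]
engineered_w = [1.0] * 6

d_learned_high = weighted_distance(original, changed_high_ps, learned_w)
d_learned_low  = weighted_distance(original, changed_low_ps, learned_w)
d_eng_high     = weighted_distance(original, changed_high_ps, engineered_w)
d_eng_low      = weighted_distance(original, changed_low_ps, engineered_w)

# The learned representation rates the high-PS change as far more
# dissimilar; the engineered one rates both changes identically.
print(d_learned_high > d_learned_low)       # True
print(abs(d_eng_high - d_eng_low) < 1e-9)   # True
```

Under this toy weighting, the same physical change yields a much larger distance when it hits the heavily weighted high-PS dimensions, mirroring the qualitative pattern the abstract reports for DNNs and humans versus the engineered-feature algorithm.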

Meeting abstract presented at VSS 2018
