September 2019, Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
A Human-like View-invariant Representation of Faces in Deep Neural Networks Trained with Faces but not with Objects
Author Affiliations & Notes
  • Naphtali Abudarham
    School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
  • Galit Yovel
    School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
    Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
Journal of Vision September 2019, Vol.19, 93a. doi:https://doi.org/10.1167/19.10.93a
      Naphtali Abudarham, Galit Yovel; A Human-like View-invariant Representation of Faces in Deep Neural Networks Trained with Faces but not with Objects. Journal of Vision 2019;19(10):93a. https://doi.org/10.1167/19.10.93a.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Face-recognition Deep Convolutional Neural Networks (DCNNs) show excellent generalization under variations in face appearance, such as changes in pose, illumination, or expression. To what extent does this view-invariant representation depend on training with faces, or may it also emerge following training with non-face objects? To examine the emergence of a view-invariant representation across the network's layers, we measured the representational similarity of different head views across different identities in DCNNs trained with faces or with objects. We found a similar, view-selective representation in the lower layers of both the face and object networks. A view-invariant representation emerged in the higher layers of the face-trained network, but not in the object-trained network, which was view-selective across all of its layers. To examine whether these representations depend on the facial information humans use to extract view-invariant information from faces, we examined the sensitivity of the face and object networks to a subset of facial features that remain invariant across head views. This subset of facial features has also been shown to be critical for human face recognition. Lower layers of the face network and all layers of the object network were not sensitive to this subset of critical, view-invariant features, whereas higher layers of the face network were. We conclude that a face-trained DCNN, but not an object-trained DCNN, displays a hierarchical process of extracting view-invariant facial features, similar to humans. These findings imply that invariant face recognition depends on experience with faces, during which the system learns to extract these invariant features, and they demonstrate the advantage of separate neural systems for faces and objects. These results may generate predictions for neurophysiological studies aimed at discovering the type of facial information used throughout the hierarchy of the face- and object-processing systems.
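For readers who want a concrete picture of the layer-wise analysis, the sketch below shows one way a view-invariance measure could be computed from per-layer activations. This is not the authors' code: the layer names (conv1, conv5, fc7), the synthetic activations, and the particular score (mean same-identity, different-view cosine similarity minus mean different-identity, same-view similarity) are all illustrative assumptions.

import numpy as np

# A minimal, self-contained sketch (not the authors' code): synthetic
# "activations" stand in for real DCNN layer outputs, and the layer names
# (conv1, conv5, fc7) are hypothetical.

rng = np.random.default_rng(0)
n_identities, n_views, dim = 20, 5, 512


def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def view_invariance_score(feats):
    """feats: (identities, views, dim) activations for one layer.
    Returns mean same-identity / different-view similarity minus mean
    different-identity / same-view similarity; higher = more view-invariant."""
    same, diff = [], []
    for i in range(n_identities):
        for v1 in range(n_views):
            for v2 in range(v1 + 1, n_views):
                same.append(cosine(feats[i, v1], feats[i, v2]))
        for j in range(i + 1, n_identities):
            v = rng.integers(n_views)  # compare two identities at a matched view
            diff.append(cosine(feats[i, v], feats[j, v]))
    return np.mean(same) - np.mean(diff)


# Build synthetic layers in which identity information grows and view
# information shrinks with depth, mimicking the hierarchy described above.
view_code = rng.normal(size=(n_views, dim))
id_code = rng.normal(size=(n_identities, dim))
for name, id_weight in [("conv1", 0.1), ("conv5", 0.5), ("fc7", 2.0)]:
    feats = np.stack([
        np.stack([id_weight * id_code[i] + view_code[v]
                  + 0.1 * rng.normal(size=dim)
                  for v in range(n_views)])
        for i in range(n_identities)
    ])
    print(f"{name}: view-invariance score = {view_invariance_score(feats):.3f}")

Running this toy example shows the score rising from the early to the late synthetic layers, the qualitative pattern the abstract reports for the face-trained network but not for the object-trained one.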
