Vision Sciences Society Annual Meeting Abstract | September 2021
Journal of Vision, Volume 21, Issue 9 | Open Access
What type of experience is needed to generate a human-like view-invariant representation of face identity? Evidence from Deep Convolutional Neural Networks
Author Affiliations
  • Mandy Rosemblaum
    Tel-Aviv University
  • Idan Grosbard
    Tel-Aviv University
  • Naphtali Abudarham
    Tel-Aviv University
  • Galit Yovel
    Tel-Aviv University
Journal of Vision, September 2021, Vol. 21, 2595. https://doi.org/10.1167/jov.21.9.2595
Abstract

Face recognition depends on the generation of a view-invariant representation of face identity. We have recently discovered a subset of view-invariant facial features that humans use for face identification. But what type of experience is needed to generate this face representation? This question is hard to answer in humans, as we have no access to the type of experience humans have with faces during development. In previous studies, we found that face-trained deep convolutional neural networks (DCNNs) are sensitive to the same subset of facial features that humans use for face identification. This sensitivity emerges at the higher layers of the network, where a view-invariant representation of face identity is generated. These models therefore allow us to ask what type of experience is required to achieve this human-like, view-invariant face representation. To that end, we systematically trained a DCNN with different numbers of identities and different numbers of images per identity. We found that the number of training images required to generate sensitivity to the human-like critical features corresponds to the number required to generate a view-invariant face representation. Furthermore, we found a tradeoff between the number of identities and the number of images per identity required to generate a view-invariant representation: training with 10 identities requires 300 images per identity, whereas training with 1000 identities requires only 10 images per identity. These findings suggest that sensitivity to the human-like, view-invariant facial features that define the identity of a face can be achieved with a relatively small training set. They may also shed light on the initial stages of development of the human face recognition system, suggesting that infants who are exposed to a relatively small number of identities during their first year of life can already extract identity-relevant facial information.
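As a rough illustration of the training-set manipulation described above, the sketch below (Python/PyTorch) subsamples a face dataset to a chosen number of identities with a chosen number of images each, defines a toy identity classifier, and probes view-invariance by comparing penultimate-layer embeddings of the same identities across viewpoints. This is not the authors' code: the dataset structure, the network architecture, and the similarity probe are all illustrative assumptions standing in for the face-trained DCNN used in the study.

    import random
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def subsample(images_by_identity, n_identities, images_per_identity, seed=0):
        # images_by_identity: dict mapping identity label -> list of image tensors
        # e.g., subsample(faces, n_identities=1000, images_per_identity=10)
        rng = random.Random(seed)
        chosen = rng.sample(sorted(images_by_identity), n_identities)
        return {idt: rng.sample(images_by_identity[idt], images_per_identity)
                for idt in chosen}

    class TinyFaceNet(nn.Module):
        # Toy stand-in for a face-trained DCNN; the penultimate layer plays the
        # role of the higher, identity-selective layers discussed in the abstract.
        def __init__(self, n_classes, embed_dim=128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.embed = nn.Linear(64, embed_dim)
            self.classify = nn.Linear(embed_dim, n_classes)

        def forward(self, x, return_embedding=False):
            z = self.embed(self.features(x).flatten(1))
            return z if return_embedding else self.classify(F.relu(z))

    def view_invariance(model, frontal_batch, profile_batch):
        # Mean cosine similarity between embeddings of the same identities seen
        # from two viewpoints; higher values indicate a more view-invariant code.
        model.eval()
        with torch.no_grad():
            z_f = model(frontal_batch, return_embedding=True)
            z_p = model(profile_batch, return_embedding=True)
        return F.cosine_similarity(z_f, z_p, dim=1).mean().item()

In this framing, the tradeoff reported in the abstract corresponds to sweeping n_identities and images_per_identity (e.g., 10 x 300 versus 1000 x 10) and measuring when the view-invariance score of the trained network plateaus.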
