September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Modeling face-identity “likeness” with a convolutional neural network trained for face identification
Author Affiliations & Notes
  • Connor J. Parde
    Vanderbilt University
    The University of Texas at Dallas
  • Alice J. O'Toole
    The University of Texas at Dallas
  • Footnotes
    Acknowledgements  Funding provided by National Eye Institute Grant R01EY029692-04 to AOT
Journal of Vision September 2024, Vol.24, 1292. doi:https://doi.org/10.1167/jov.24.10.1292
Abstract

Certain face images are considered a better “likeness” of an identity than others. However, it remains unclear whether perceived likeness is driven by: a) the proximity of a face image to an averaged prototype comprising all instances of an identity (i.e., a “central prototype”), or b) how closely a face image approximates the most common appearances of the identity (i.e., the “density” of known instances). Convolutional neural networks (CNNs) trained for face recognition can be used to instantiate face spaces that model both within- and between-identity variability. These networks provide a testbed for investigating human perceptions of face likeness. We compared human-assigned likeness ratings with likeness ratings based on identity prototypes and local image density from a face-identification CNN. Participants (n=50) viewed 20 face images simultaneously (5 viewpoints × 4 illumination conditions) of each of 72 identities and adjusted a slider bar to indicate whether each image was a “good likeness” of the identity being shown. Responses were collapsed across participants to generate a single likeness rating per image. Next, we used face-image descriptors from a CNN to generate likeness ratings based on either the proximity of a descriptor to the “prototype” created by averaging all corresponding same-identity descriptors (prototype-proximity likeness) or the density of same-identity descriptors located around a given descriptor in the output space generated by the CNN (density likeness). For all measures of perceived likeness (human-assigned ratings, prototype-proximity, and density), likeness differed across viewpoint (p < 0.0001) and illumination (p < 0.001). However, only the CNN density-based likeness ratings mirrored the pattern of human likeness ratings. These results demonstrate that density within an identity-specific face space is a better model of human-assigned perceived-likeness ratings than distance to an identity prototype. In addition, these results show that viewpoint and illumination influence perceived-likeness ratings for face images.
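The two CNN-based likeness measures contrasted in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes descriptors are rows of a NumPy array (one row per same-identity image from an unspecified face-identification CNN), uses Euclidean distance for prototype proximity, and approximates local density by counting same-identity neighbors within a hypothetical `radius` (the actual density estimator used in the study is not specified in the abstract).

```python
import numpy as np

def prototype_proximity_likeness(descriptors):
    """Likeness as (negative) distance to the identity 'prototype',
    i.e., the average of all same-identity descriptors.
    Images closer to the prototype receive higher likeness."""
    prototype = descriptors.mean(axis=0)
    dists = np.linalg.norm(descriptors - prototype, axis=1)
    return -dists

def density_likeness(descriptors, radius=1.0):
    """Likeness as local density: the number of other same-identity
    descriptors within `radius` of each descriptor in the CNN's
    output space. `radius` is an illustrative free parameter."""
    diffs = descriptors[:, None, :] - descriptors[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    # Subtract 1 to exclude each descriptor's zero distance to itself.
    return (dists < radius).sum(axis=1) - 1
```

Under the first measure, an atypical image (far from the identity average) always scores low; under the second, an image scores low only if few other instances of that identity look similar to it, which is the distinction the study tests against human ratings.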
