September 2017
Volume 17, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2017
Comparing human and deep convolutional neural network face-matching performance on disguised face images
Author Affiliations
  • Eilidh Noyes
    School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA
  • Connor Parde
    School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA
  • Y. Colon
    School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA
  • Matthew Hill
    School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA
  • Carlos Castillo
    Department of Electrical Engineering, University of Maryland, USA
  • Jun-Cheng Chen
    Department of Electrical Engineering, University of Maryland, USA
  • Rob Jenkins
    Department of Psychology, University of York, England, UK
  • Alice O'Toole
    School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA
Journal of Vision August 2017, Vol.17, 1003. doi:10.1167/17.10.1003
© ARVO (1962-2015); The Authors (2016-present)
Abstract

People perform poorly on face-matching tasks that involve unfamiliar identities. Disguise further impairs performance, even for familiar identities, which are usually recognized robustly across image variation (Noyes & Jenkins, 2016). Face recognition algorithms based on deep convolutional neural networks (DCNNs) now perform surprisingly well across image variation, but have not been tested with disguised faces. Here we directly compare DCNN accuracy with human accuracy (Noyes, 2016) for identifying disguised and undisguised faces. Using the features from the top-level "compact layer" of a recent DCNN (Chen, 2016), we generated representations for undisguised and disguised images of the same faces from the FAÇADE database (Noyes, 2016). To determine whether images would cluster by their true identity, hierarchical agglomerative clustering was applied to the DCNN face representations. This clustering was compared to human identity-matching performance for the same images. Humans and the DCNN were similarly impaired by disguise when the comparison images were of the same identity (i.e., disguise aimed at evading identity). For different-identity trials, human performance dropped with disguise, but the DCNN performed at the same level on undisguised and disguised (impersonation) comparisons. Next, we compared performance for only disguised images. DCNN accuracy was surprisingly similar to that of humans for evasion faces, with machine accuracy of 61.54% and average human accuracy of 60.38%. For impersonation of a similar-looking person, machine accuracy was 84.62% and human accuracy was 82.18%. However, machines performed more accurately than humans (96.15% > 89.62%) at matching faces disguised to impersonate a very different-looking individual. Notably, familiar viewers from Noyes (2016) far outperformed machines and unfamiliar viewers on disguised faces.
These findings provide insight into the current performance level of DCNNs in comparison to humans for identifying disguised and undisguised faces.
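The clustering step described above can be illustrated with a minimal sketch. The feature dimensionality, identity separation, and data here are invented stand-ins, not the actual compact-layer representations or FAÇADE images; the sketch only shows the general technique of applying hierarchical agglomerative clustering (average linkage on cosine distance) to per-image feature vectors and checking whether images group by identity.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical stand-in for DCNN "compact layer" features:
# two identities, four images each, 320-dim vectors (dimension assumed).
identity_a = rng.normal(0.0, 0.1, size=(4, 320)) + 1.0
identity_b = rng.normal(0.0, 0.1, size=(4, 320)) - 1.0
features = np.vstack([identity_a, identity_b])

# Hierarchical agglomerative clustering with average linkage
# on cosine distance between image representations.
Z = fused = linkage(features, method="average", metric="cosine")

# Cut the dendrogram into two clusters; images that cluster by
# their true identity will share a cluster label.
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

In this toy setup the two identities are well separated, so the first four images receive one label and the last four the other; with real disguised images, the interesting question is precisely how often this grouping fails.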

Meeting abstract presented at VSS 2017
