September 2021, Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Synthetic faces: how perceptually convincing are they?
Author Affiliations
  • Sophie Nightingale
    Department of Psychology, Lancaster University
  • Shruti Agarwal
    Electrical Engineering and Computer Sciences, University of California, Berkeley
  • Erik Härkönen
    Department of Computer Science, Aalto University
  • Jaakko Lehtinen
    Department of Computer Science, Aalto University
  • Hany Farid
    Electrical Engineering and Computer Sciences, University of California, Berkeley
    School of Information, University of California, Berkeley
Journal of Vision September 2021, Vol.21, 2015. doi:https://doi.org/10.1167/jov.21.9.2015
Citation: Sophie Nightingale, Shruti Agarwal, Erik Härkönen, Jaakko Lehtinen, Hany Farid; Synthetic faces: how perceptually convincing are they? Journal of Vision 2021;21(9):2015. https://doi.org/10.1167/jov.21.9.2015.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Recent advances in machine learning, specifically generative adversarial networks (GANs), have made it possible to synthesize highly photo-realistic faces. Such synthetic faces have been used to create fraudulent social media accounts, including a fictional candidate for U.S. Congress. It has been shown that deep neural networks can be trained to discriminate between real and synthesized faces; it remains unknown, however, whether humans can. We examined people’s ability to discriminate between synthetic and real faces. We selected 400 faces synthesized with the state-of-the-art StyleGAN2, ensuring diversity across gender, age, and race. A convolutional neural network descriptor was used to extract a low-dimensional, perceptually meaningful representation of each face. For each of the 400 synthesized faces, this representation was used to find the most similar real faces in the Flickr-Faces-HQ (FFHQ) dataset. From these, we manually selected a matching face that did not contain additional discriminative cues (e.g., complex background, other people in the scene). Participants (N=315) were recruited from Mechanical Turk and given a brief tutorial consisting of examples of synthesized and real faces. Each participant then completed 128 trials, each presenting a single face, either synthesized or real, and had unlimited time to classify the face as synthetic or real. Unknown to the participants, half of the faces were real and half were synthesized. Across the 128 trials, faces were balanced in terms of gender and race. Average performance was close to chance, with no response bias (d-prime = -0.09; beta = 0.99). These results suggest that StyleGAN2 can synthesize faces that are realistic enough to fool naive observers. We are now examining whether a more detailed training session, raising participants’ awareness of common synthesis artifacts, will improve their ability to detect synthetic faces.
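The abstract does not name the specific convolutional descriptor or similarity measure used to match each synthesized face to real FFHQ faces. As an illustration only, the sketch below assumes a pretrained VGG16 feature extractor (PyTorch/torchvision) and cosine similarity; the actual pipeline may differ.

    # Illustrative sketch only: the abstract does not specify the CNN descriptor;
    # a pretrained VGG16 and cosine similarity are assumptions made here.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Pretrained VGG16, truncated to its convolutional feature extractor.
    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def embed(path):
        """Map a face image to a low-dimensional descriptor (global average pooling)."""
        img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            feats = vgg(img)                      # shape: (1, 512, 7, 7)
        return feats.mean(dim=(2, 3)).squeeze(0)  # shape: (512,)

    def most_similar(synthetic_path, real_paths, k=5):
        """Return the k real faces whose descriptors are closest to the query."""
        query = embed(synthetic_path)
        scored = []
        for p in real_paths:
            sim = torch.nn.functional.cosine_similarity(query, embed(p), dim=0).item()
            scored.append((sim, p))
        return sorted(scored, reverse=True)[:k]

From such a ranked shortlist, a matching real face without additional discriminative cues (complex background, other people in the scene) would then be chosen by hand, as described in the abstract.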
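The reported d-prime and beta values are standard signal-detection measures. A minimal sketch of how they are computed from hit and false-alarm counts is shown below; the trial counts in the example call are placeholders, not the study’s data.

    # Minimal sketch of the signal-detection analysis (d-prime and response bias beta).
    import math
    from scipy.stats import norm

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        """Compute d' = z(H) - z(F) and beta = exp((z(F)^2 - z(H)^2) / 2)."""
        # Small correction keeps rates away from exactly 0 or 1.
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z_h, z_f = norm.ppf(hit_rate), norm.ppf(fa_rate)
        d_prime = z_h - z_f
        beta = math.exp((z_f ** 2 - z_h ** 2) / 2)
        return d_prime, beta

    # A d' near 0 with beta near 1 corresponds to chance-level, unbiased
    # classification, matching the pattern reported in the abstract.
    print(sdt_measures(hits=33, misses=31, false_alarms=34, correct_rejections=30))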
