Abstract
Introduction. Generative adversarial networks (GANs) have become popular tools for synthesizing realistic visual content. In a large-sample study, Nightingale & Farid (2022) demonstrated that AI-synthesized (StyleGAN2) faces are indistinguishable from real faces and are rated as more trustworthy by adult observers. Here we extended this work to children and adults in an East Asian context and further explored whether the ability to judge face authenticity is correlated with proficiency in featural/configural processing.

Methods. Thirty-four 5- to 12-year-old children (18 boys) and 34 adults (17 males) participated in the study. All participants completed three tasks: (1) a computerized face judgment task, in which they judged whether each face image showed a real person or was synthesized by AI; (2) a trustworthiness rating task, in which they rated each face on a 5-point scale from 1 (very untrustworthy) to 5 (very trustworthy); and (3) a paper-and-pencil face discrimination test comprising two target faces and 12 comparison faces altered on the eyes, nose, or mouth, in which participants marked the locus of each alteration.

Results. Neither adults (M = 0.476 ± 0.015, d′ = 0) nor children (M = 0.460 ± 0.015, d′ = 0) discriminated between real and synthesized faces better than chance. However, adults were relatively better at identifying real faces (HIT/CR < 1), whereas children were better at identifying synthesized faces (HIT/CR > 1). Synthesized faces (M = 3.116 ± 0.042) were rated as more trustworthy than real faces (M = 2.696 ± 0.049) by all participants. Finally, adults (M = 7.853 ± 0.368) outperformed children (M = 4.971 ± 0.361) on the face discrimination test, but individual test scores did not correlate with accuracy on the face judgment task.

Conclusion. Like adults, children cannot distinguish AI-synthesized faces from real ones, but they exhibit a response bias toward synthesized faces. Proficiency in featural/configural processing does not contribute to face authenticity judgments.
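For reference, the d′ and HIT/CR values above follow the standard signal detection definitions sketched below. Which response category served as the "signal" (assumed here to be the synthesized faces) and the exact correction applied when hit or false-alarm rates reach 0 or 1 are our assumptions, as the abstract does not state them.

```latex
% Sketch of the assumed signal detection conventions (not necessarily the
% authors' exact computation):
%   p_HIT = proportion of synthesized faces correctly judged "synthesized"
%   p_FA  = proportion of real faces incorrectly judged "synthesized"
%   p_CR  = 1 - p_FA (real faces correctly judged "real")
\[
  d' = \Phi^{-1}(p_{\mathrm{HIT}}) - \Phi^{-1}(p_{\mathrm{FA}}),
  \qquad
  \frac{\mathrm{HIT}}{\mathrm{CR}} = \frac{p_{\mathrm{HIT}}}{p_{\mathrm{CR}}}
\]
% \Phi^{-1} is the inverse cumulative distribution function of the standard
% normal. d' = 0 means hits and false alarms are equally likely (no
% sensitivity), while HIT/CR above or below 1 indexes whether a group is
% relatively better at identifying synthesized or real faces.
```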