Abstract
The photorealism of synthetic media (deepfakes) continues to amaze and entertain, as well as alarm those concerned about abuses in the form of non-consensual pornography, fraud, and disinformation campaigns. We have previously shown that synthetic faces are visually indistinguishable from real faces. Because faces elicit implicit inferences about traits such as trustworthiness in just milliseconds, we wondered whether synthetic and real faces elicit different trustworthiness responses. We synthesized 400 faces using StyleGAN2, ensuring diversity across gender, age, and race. A convolutional neural network descriptor was used to extract a perceptually meaningful representation of each face, from which a matching real face was selected from the Flickr-Faces-HQ dataset. Mechanical Turk participants (N = 223) read a brief introduction explaining that the purpose of the study was to assess face trustworthiness on a scale of 1 (very untrustworthy) to 7 (very trustworthy). Each participant then saw 128 faces, one at a time, and rated their trustworthiness; participants had an unlimited amount of time to respond. The average trustworthiness rating of 4.82 for synthetic faces is higher than the rating of 4.48 for real faces. Although synthetic faces are only 7.7% more trustworthy, this difference is statistically significant (t(222) = 14.6, p < 0.001, d = 0.49). Black faces were rated as slightly more trustworthy than South Asian faces, but there was otherwise no effect of race. Women were rated as significantly more trustworthy than men, 4.94 as compared to 4.36 (t(222) = 19.5, p < 0.001, d = 0.82). Synthetically generated faces are not just photorealistic; they are also judged more trustworthy than real faces. This may be because synthesized faces tend to look more like average faces, which are themselves deemed more trustworthy. Regardless of the underlying reason, and ready or not, synthetically generated faces have emerged on the other side of the uncanny valley.
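The reported statistics follow from a paired comparison across participants: a paired t-test and a paired-samples Cohen's d computed on each participant's mean rating for synthetic versus real faces. The sketch below illustrates that computation on simulated ratings (N = 223 and the group means match the abstract, but the spread and the data themselves are assumed for illustration, not the study's actual data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-participant mean ratings on the 1-7 scale; N = 223 as in
# the study, but these values are illustrative, not the actual data.
n = 223
synthetic = np.clip(rng.normal(4.82, 0.7, n), 1, 7)
real = np.clip(rng.normal(4.48, 0.7, n), 1, 7)

# Paired t-test across participants (the abstract's t(222) statistic):
# t = mean(diff) / (sd(diff) / sqrt(n)).
diff = synthetic - real
t = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))

# Paired-samples Cohen's d: mean difference over the SD of the differences.
d = diff.mean() / diff.std(ddof=1)

print(f"t({n - 1}) = {t:.1f}, d = {d:.2f}")
```

Note that for paired data these two quantities are linked by the identity t = d * sqrt(n), which is a quick sanity check on any reported (t, d) pair.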