Abstract
Could face processing aptitude be a reliable indicator of how well humans detect deepfakes – videos manipulated by artificial intelligence to make someone appear to do or say something they did not? Recent research finds that human deepfake detection is significantly better than chance but far from perfect, and that accuracy varies with a video's context and how it has been manipulated (Groh et al., 2021). We compare how well so-called Super-Recognizers (SRs), people with superior skill for processing facial identity (Russell et al., 2009; Ramon, 2021), perform on the same stimulus set. We invited individuals from a group of SRs identified previously via in-person assessments using novel, formalized diagnostic criteria (Ramon, 2021) to complete two deepfake detection experiments. Several of these volunteering SRs have been reported in published behavioral (Ramon, 2021; Nador et al., 2021a), psychophysical (Nador et al., 2021b), or neuroimaging (Faghel-Soubeyrand et al., 2021) studies. The group of 28 SRs responded to 2,009 trials across two experimental protocols: a two-alternative forced-choice (2AFC) design and a single-stimulus design. Compared to the original sample of 15,016 participants (Groh et al., 2021), SRs outperformed controls by 5 percentage points (p < 0.001) on stimuli presented in the 2AFC design and by 10.3–14.6 percentage points (p < 0.001) on stimuli presented as single videos. These results provide initial evidence that face processing ability, as measured by highly challenging laboratory tests, could be an indicator of an individual's ability to identify algorithmic manipulations of unfamiliar faces in video.