December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Do Super Recognizers Excel at Deepfake Detection?
Author Affiliations & Notes
  • Matthew Groh
    MIT
  • Meike Ramon
    Applied Face Cognition Lab, Switzerland
  • Footnotes
    Acknowledgements  MR is supported by a Swiss National Science Foundation PRIMA (Promoting Women in Academia) grant (PR00P1_179872).
Journal of Vision December 2022, Vol. 22, 3993. doi: https://doi.org/10.1167/jov.22.14.3993
Abstract

Could face processing aptitude be a reliable indicator of how well humans detect deepfakes, i.e., videos manipulated by artificial intelligence to make someone appear to do or say something they did not? Recent research finds that humans detect deepfakes significantly better than chance, but far from perfectly, with accuracy varying with the videos' context and how the videos have been manipulated (Groh et al., 2021). We examined how well so-called Super-Recognizers (SRs), people with superior skill for processing facial identity (Russell et al., 2009; Ramon, 2021), perform on the same stimulus set. We invited individuals from a group of SRs identified previously via in-person assessments using novel, formalized diagnostic criteria (Ramon, 2021) to complete two deepfake detection experiments. Several of these volunteering SRs have been reported in published behavioral (Ramon, 2021; Nador et al., 2021a), psychophysical (Nador et al., 2021b), or neuroimaging (Faghel-Soubeyrand et al., 2021) studies. The group of 28 SRs responded to 2009 trials across two experimental protocols: a two-alternative forced-choice (2AFC) design and a single-stimulus design. Compared to the original sample of 15,016 participants (Groh et al., 2021), SRs outperformed controls by 5 percentage points (p<0.001) on stimuli presented in the 2AFC design and by 10.3-14.6 percentage points (p<0.001) on stimuli presented as a single video. These results provide initial evidence that face processing ability, as measured by highly challenging laboratory tests, could be an indicator of an individual's ability to identify algorithmic manipulations of unfamiliar faces in video.
