August 2023, Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Videos, Deepfakes, and Dynamic Morphs: Neural and Perceptual Differences for Real and Artificial Faces
Author Affiliations
  • Casey Becker
    RMIT University
  • Russell Conduit
    RMIT University
  • Philippe A Chouinard
    La Trobe University
  • Robin Laycock
    RMIT University
Journal of Vision August 2023, Vol.23, 5250. doi:https://doi.org/10.1167/jov.23.9.5250
      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Dynamic facial expressions can be created by blending neutral and expressive photographs together. These dynamic morphs are popular in cognitive neuroscience research because they offer greater experimental control than video recordings, which vary in the duration and temporal profile of each expression. However, emerging research suggests that morphed motion may not capture the temporal dynamics of real facial motion. Deepfake technology offers a potential solution to this issue: deep learning algorithms transpose one face onto a video of another while preserving the facial motion, allowing the creation of facial expressions that exhibit multiple identities with the same spatiotemporal characteristics. Real videos were compared with static photographs, dynamic morphs, and deepfakes, all created from the same set of video-recorded emotional expressions. We used electroencephalography (EEG) to measure neural and behavioural responses to emotional expressions exhibited in these four presentation types. Morphed expressions of emotion (happiness, anger, fear, and sadness) were perceived as weaker than those in video recordings, static photographs, and deepfakes. Morphed happy expressions were also rated as less genuine than those in video, static, and deepfake formats. Strength and genuineness ratings did not differ between videos and deepfakes. Visual evoked potentials revealed that, compared to photographs and videos, dynamic morphs produced decreased amplitudes in the late positive potential (LPP) component, which is associated with recognition and evaluation of emotion intensity. These results suggest that linear morphs may not be a suitable replacement for video recordings. Deepfake technology may offer researchers the ability to create realistic dynamic stimuli that can be manipulated for specific experiments.
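The "dynamic morph" stimuli the abstract refers to are typically built by linearly cross-fading a neutral photograph into an expressive one, frame by frame. A minimal sketch of that linear blending is shown below; the function name and frame count are illustrative, not from the study, and the authors' actual morphing pipeline may differ.

```python
import numpy as np

def linear_morph(neutral, expressive, n_frames=30):
    """Return a list of frames that linearly cross-fades a neutral face
    image into an expressive one (a simple dynamic morph).

    Both inputs are float arrays of identical shape, e.g. (H, W) or
    (H, W, 3). Frame 0 is the neutral image; the last frame is the
    expressive image; intermediate frames are weighted averages.
    """
    alphas = np.linspace(0.0, 1.0, n_frames)
    return [(1.0 - a) * neutral + a * expressive for a in alphas]
```

Because every intermediate frame advances by the same blend step, the resulting motion is linear in time, which is one reason such morphs can fail to reproduce the nonlinear temporal dynamics of real facial expressions.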
