October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract  |   October 2020
Unraveling the neural representation of dynamic facial expressions through EEG-based decoding and movie reconstruction
Author Affiliations
  • Tyler Roberts
    Department of Psychology, University of Toronto Scarborough
  • Gerald Cupchik
    Department of Psychology, University of Toronto Scarborough
  • Gloria Rebello
    Department of Psychology, University of Toronto Scarborough
  • Jonathan S. Cant
    Department of Psychology, University of Toronto Scarborough
  • Adrian Nestor
    Department of Psychology, University of Toronto Scarborough
Journal of Vision October 2020, Vol.20, 250. doi:https://doi.org/10.1167/jov.20.11.250
Tyler Roberts, Gerald Cupchik, Gloria Rebello, Jonathan S. Cant, Adrian Nestor; Unraveling the neural representation of dynamic facial expressions through EEG-based decoding and movie reconstruction. Journal of Vision 2020;20(11):250. doi: https://doi.org/10.1167/jov.20.11.250.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Extensive work has documented the perception of facial expressions, with particular focus on a handful of basic emotional expressions. However, the larger scope of facial expressions and their representational basis are less well understood, and the neural processing of dynamic facial expressions awaits clarification. Here, we address these challenges through EEG-based decoding and image/movie reconstruction. Specifically, EEG data were collected from healthy adults who viewed 1-second videos displaying 24 dynamic facial expressions while performing an oddball task (i.e., angry-expression recognition). Each stimulus started with a neutral expression and transitioned to a peak expression over the course of ten 100 ms frames. In addition, participants performed a behavioral pairwise rating task with the 24 expressions. EEG-based pattern analyses were carried out for expression decoding, and movie reconstruction was then conducted from the resulting decoding patterns. Our investigation reveals, first, that the representational space of facial expression derived from both behavioral and neural data is broadly, though not exclusively, accounted for by the classical dimensions of valence and arousal. Second, the temporal profile of EEG-based decoding shows a gradual increase after 350 ms, reaching a classification plateau between roughly 600 and 1200 ms following stimulus onset. Further, a frame-by-frame analysis indicates that peak accuracy is reached after the midway point of stimulus presentation (i.e., at the sixth or seventh frame of a video), prior to peak expression display (i.e., the tenth frame). Third, movie reconstruction is successfully achieved from the EEG signal and, consistent with the decoding results, reconstruction accuracy is maximized after the midway point of stimulus presentation. Thus, our results shed light on the representational structure and neural processing of dynamic facial expressions. Of note, they elucidate with an unprecedented level of detail the pictorial content of expression representations, while also providing proof of concept for EEG-based dynamic stimulus reconstruction.
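The time-resolved pattern classification described above, in which decoding accuracy is assessed at successive time points to reveal a gradual rise after stimulus onset, can be sketched in a minimal form. The sketch below is an illustration on synthetic data, not the authors' pipeline: the channel count, trial counts, classifier choice (logistic regression), and the injected "decoding onset" at sample 35 are all assumptions made for demonstration.

```python
# Illustrative sketch of time-resolved EEG decoding on synthetic data.
# All dimensions and parameters here are hypothetical, not taken from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_channels, n_times = 120, 32, 100  # assumed epoch dimensions
n_classes = 4  # small stand-in for the 24 expression categories
y = np.repeat(np.arange(n_classes), n_trials // n_classes)

# Synthetic epochs: class-specific activity injected from time sample 35
# onward, mimicking a decoding onset partway through the epoch.
X = rng.standard_normal((n_trials, n_channels, n_times))
for c in range(n_classes):
    X[y == c, c, 35:] += 2.0  # boost one channel per class after "onset"

def decode_over_time(X, y, cv=5):
    """Cross-validated classification accuracy at each time sample."""
    clf = LogisticRegression(max_iter=1000)
    return np.array([
        cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
        for t in range(X.shape[-1])
    ])

acc = decode_over_time(X, y)
chance = 1.0 / n_classes
print(f"pre-onset accuracy  ~{acc[:30].mean():.2f} (chance {chance:.2f})")
print(f"post-onset accuracy ~{acc[40:].mean():.2f}")
```

In a real analysis the synthetic array would be replaced by preprocessed, epoched EEG trials, and the classifier would typically be trained and tested within a sliding window rather than on single samples; the shape of the resulting accuracy curve over time is what the abstract's temporal profile describes.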
