September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2021
Semantic Decoding of Affective Face Signals in the Brain is Temporally Distinct
Author Affiliations & Notes
  • Meng Liu
    University of Glasgow
  • Nicola van Rijsbergen
    Edge Hill University
  • Oliver Garrod
    University of Glasgow
  • Robin Ince
    University of Glasgow
  • Rachael Jack
    University of Glasgow
  • Philippe Schyns
    University of Glasgow
  • Footnotes
    Acknowledgements  This work was supported by REJ: European Research Council [75858], Economic & Social Research Council [ES/K001973/1]; PGS: Multidisciplinary University Research Initiative/Engineering & Physical Sciences Research Council [172046-01]; RAAI/PGS: Wellcome Trust [214120/Z/18/Z; 107802]
Journal of Vision September 2021, Vol.21, 2589. doi: https://doi.org/10.1167/jov.21.9.2589
Abstract

Facial expressions are a rich information source from which observers infer the emotional states of others. Despite much understanding of the brain regions that represent facial expressions, we do not yet know how representations of these facial movements transform into judgments of emotions in the brain. We addressed this question in 5 participants who judged the emotion of individual face movements called Action Units (AUs) while we concurrently measured brain activity using magnetoencephalography (MEG). Stimuli were animations of 5 facial movements: Outer Brow Raiser (AU2), Nose Wrinkler (AU9), Lip Corner Puller (AU12), Chin Raiser (AU17), and Lip Stretcher (AU20), each presented at 4 levels of intensity (25% to 100%). We instructed participants to rate each animation according to either its perceived valence (‘negative’, ‘neutral’ or ‘positive’) or arousal (‘low’, ‘neutral’ or ‘high’). Tasks alternated between blocks of 40 trials (5 AUs × 4 intensity levels × 2 repetitions), and participants completed 4,000 to 6,000 trials in total. We averaged all ratings of each AU and intensity level per task for each participant. We show that arousal ratings increased with AU intensity level, whereas valence ratings were consistent for each AU (e.g., Nose Wrinkler (AU9) rated as negative and Lip Corner Puller (AU12) as positive). We then calculated Mutual Information (MI, assessed with a permutation test) between the MEG recordings and the task ratings. The results revealed the spatial and temporal distribution of brain activity related to valence and arousal. We found that valence and arousal evoked similar representational peaks at ~270 ms and ~750 ms in the temporal lobes, whereas a distinct peak in the parietal lobes at ~387 ms for the valence task differentiated the two inferences. Our results show where (in the temporal and parietal lobes) and when (at ~270 ms, ~387 ms and ~750 ms post-stimulus) the brain processes dynamic AUs as meaningful affective signals.
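To illustrate the kind of analysis the abstract describes, the following is a minimal sketch (not the authors' pipeline) of computing mutual information between a single MEG feature (e.g., one sensor at one time point, across trials) and discrete task ratings, with a permutation test for significance. The binning scheme, variable names, and synthetic data below are assumptions for illustration only.

```python
# Minimal sketch: MI between a continuous MEG feature and discrete ratings,
# with a permutation test. Not the authors' implementation.
import numpy as np

def mutual_information(x, y, n_x_bins, n_y_levels):
    """MI in bits between two discrete variables given as integer codes."""
    joint = np.zeros((n_x_bins, n_y_levels))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)      # marginal of the MEG bins
    py = joint.sum(axis=0, keepdims=True)      # marginal of the ratings
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz]))

def mi_permutation_test(meg_values, ratings, n_bins=3, n_perm=1000, seed=0):
    """Observed MI plus a p-value from a null of shuffled rating labels."""
    rng = np.random.default_rng(seed)
    # Discretise the continuous MEG amplitudes into equiprobable bins.
    edges = np.quantile(meg_values, np.linspace(0, 1, n_bins + 1)[1:-1])
    x = np.digitize(meg_values, edges)
    y = ratings.astype(int)        # e.g., 0 = negative/low, 1 = neutral, 2 = positive/high
    n_levels = int(y.max()) + 1
    mi_obs = mutual_information(x, y, n_bins, n_levels)
    null = np.array([mutual_information(x, rng.permutation(y), n_bins, n_levels)
                     for _ in range(n_perm)])
    p_value = (np.sum(null >= mi_obs) + 1) / (n_perm + 1)
    return mi_obs, p_value

# Synthetic example: 200 trials, one MEG feature weakly dependent on the rating.
rng = np.random.default_rng(1)
ratings = rng.integers(0, 3, size=200)
meg = ratings + rng.normal(0, 1.0, size=200)
print(mi_permutation_test(meg, ratings))
```

In practice this computation would be repeated across sensors (or source locations) and time points to produce the spatio-temporal MI maps the abstract refers to, with appropriate correction for multiple comparisons.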
