August 2023, Volume 23, Issue 9 | Open Access
Vision Sciences Society Annual Meeting Abstract
Temporal dynamics of facial identity and expression processing from magnetoencephalography
Author Affiliations & Notes
  • Rohini Kumar
    Laboratory of Brain and Cognition, NIMH, NIH
  • Kyla Brannigan
    Laboratory of Brain and Cognition, NIMH, NIH
  • Lina Teichmann
    Laboratory of Brain and Cognition, NIMH, NIH
  • Chris Baker
    Laboratory of Brain and Cognition, NIMH, NIH
  • Shruti Japee
    Laboratory of Brain and Cognition, NIMH, NIH
  • Footnotes
    Acknowledgements  NIMH Intramural Research Program
Journal of Vision August 2023, Vol.23, 5629. doi:https://doi.org/10.1167/jov.23.9.5629

      Rohini Kumar, Kyla Brannigan, Lina Teichmann, Chris Baker, Shruti Japee; Temporal dynamics of facial identity and expression processing from magnetoencephalography. Journal of Vision 2023;23(9):5629. https://doi.org/10.1167/jov.23.9.5629.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Recognition of facial identity and facial expression are both critical for social communication. Influential models (Bruce & Young, 1986; Haxby, Hoffman, & Gobbini, 2000) propose that invariant aspects of a face (like identity) and changeable aspects (like expression) are processed by distinct neural pathways. Evidence for this dissociation comes from functional neuroimaging studies, which have implicated the fusiform gyrus in the processing of invariant aspects (Grill-Spector et al., 2004) and the superior temporal sulcus in the processing of changeable aspects of a face (Pitcher et al., 2011). However, the timing of this dissociation has received less attention. Thus, the current study used magnetoencephalography (MEG) and time-resolved classification methods to examine how facial identity and expression processing unfolds in the human brain. Participants viewed videos of emotional faces that varied along two dimensions (six identities and six expressions) while performing an orthogonal target detection task. Linear support vector machine classifiers were used to predict which stimulus type was presented based on the pattern of MEG sensor activity at each time point during a trial. The resulting decoding performance reflects the discriminability of the brain activity patterns elicited by each identity and expression. Results showed successful decoding of both identity and expression, and Bayes Factors revealed timepoints when decoding accuracy was significantly above chance. Identity decoding peaked rapidly around 190 ms after stimulus onset, while expression decoding rose slowly and peaked around 900 ms. Temporal generalization analyses revealed greater similarity over time in the representation of expression than identity. Further, representational similarity analyses revealed an early peak in MEG pattern dissimilarity between identities and a later peak in dissimilarity between expressions.
Collectively, these results demonstrate distinct neural timecourses for invariant (identity) and changeable (expression) aspects of a face, and future source reconstruction analyses will determine the underlying neural substrates of these effects.
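The time-resolved decoding approach described above can be sketched as follows: a linear SVM is trained and cross-validated separately at each timepoint on the trial-by-sensor pattern of MEG activity, yielding a decoding accuracy timecourse. This is a minimal illustrative sketch, not the authors' analysis code; the data here are simulated random noise, and the trial, sensor, and timepoint counts are assumptions chosen only to make the example self-contained.

```python
# Minimal sketch of time-resolved MEG decoding with a linear SVM.
# All data are simulated (random noise); dimensions are illustrative
# assumptions, not the study's actual recording parameters.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 72, 64, 50   # e.g. 6 identities x 12 trials each
X = rng.standard_normal((n_trials, n_sensors, n_times))  # trials x sensors x time
y = np.repeat(np.arange(6), n_trials // 6)               # identity label per trial

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))

# Train/test the classifier independently at every timepoint.
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=4).mean()
    for t in range(n_times)
])

# With real MEG data, this accuracy timecourse would be compared against
# chance level (1/6 here) at each timepoint, e.g. with Bayes Factors as
# in the abstract. On random data it hovers around chance.
print(accuracy.shape)
```

The same per-timepoint loop extends naturally to temporal generalization (train at one timepoint, test at all others) by fitting once per training time and scoring across test times.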
