September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Unraveling the Neural Code for Real Life Facial Expression Perception
Author Affiliations & Notes
  • Arish Alreja
    Carnegie Mellon University
  • Michael Ward
    University of California, Los Angeles
  • Taylor Abel
    University of Pittsburgh
  • Mark Richardson
    Massachusetts General Hospital and Harvard University
  • Louis-Philippe Morency
    Carnegie Mellon University
  • Avniel Ghuman
    University of Pittsburgh
  • Footnotes
    Acknowledgements  NSF (1734907) and NIH (R01MH132225, R01MH107797)
Journal of Vision September 2024, Vol. 24, 1471. https://doi.org/10.1167/jov.24.10.1471
Abstract

We study face perception to understand how our brains process the identity, expressions, and facial movements of friends, family, coworkers, and others in real life. Controlled experiments have revealed many aspects of how the brain codes for faces, but little is known about how the brain codes for the natural range and intensity of expressions during real-life interactions. We collected intracranial recordings from epilepsy patient-participants who wore eye-tracking glasses that captured everything they saw on a moment-to-moment basis during hours of natural, unscripted interactions with friends, family, and experimenters. Face pose, identity, expressions, and motion were parameterized using computer vision, deep learning, face AI, and state space models. Fixation-locked facial features and brain activity were related using a bidirectional model that maximized the correlation between them in a jointly learned latent neuro-perceptual space. The model predicted brain and face dynamics from each other accurately (d’ of approximately 1.8, 2.47, and 1.02 for overall, between-identity, and within-identity comparisons, respectively). Reconstructed brain activity revealed an important role for the recently proposed putative social vision pathway alongside traditional face areas in ventral temporal cortex. Probing the representational space for facial expression and motion revealed that a person’s resting facial expression serves as an important anchor point and that neural populations were more sharply tuned to changes in expression than to their intensity. Lastly, the brain exhibited greater sensitivity to small changes from a person’s resting face, such as a coy smile, than to comparable differences between a big smile and a slightly bigger one, a potential analog of the Weber-Fechner law for facial expressions. Together, these results demonstrate that during real-world interactions, individual fixations on a person’s face are coded within “oval”-shaped tuning spaces, wherein the oval points toward the resting expression (the norm) and widens farther from that expression.
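
The bidirectional latent-space approach described above can be illustrated, in highly simplified form, with canonical correlation analysis (CCA). The sketch below is not the authors' model (which used deep learning and state space models); it assumes synthetic arrays of fixation-locked facial-feature vectors and intracranial activity, uses scikit-learn's CCA to learn a shared latent space, and summarizes match-versus-mismatch separability with a d' statistic analogous to the one reported in the abstract. All variable names and array shapes are hypothetical.

```python
# Minimal sketch, assuming synthetic data; CCA stands in for the authors'
# bidirectional neuro-perceptual model described in the abstract.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

# Hypothetical data: one row per fixation on a face.
n_fixations, n_face_features, n_channels = 500, 40, 120
face_feats = rng.standard_normal((n_fixations, n_face_features))  # pose/expression parameters
brain_acts = rng.standard_normal((n_fixations, n_channels))       # fixation-locked neural activity

# Jointly learned latent space: CCA finds paired linear projections of the
# two views that maximize the correlation between them.
cca = CCA(n_components=10)
cca.fit(face_feats, brain_acts)
face_latent, brain_latent = cca.transform(face_feats, brain_acts)

def row_corr(a, b):
    """Correlation between every row of a and every row of b."""
    a = (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)
    b = (b - b.mean(axis=1, keepdims=True)) / b.std(axis=1, keepdims=True)
    return (a @ b.T) / a.shape[1]

# Matching (same fixation) vs. mismatching (different fixations) correlations,
# summarized as d' = (mean_match - mean_mismatch) / pooled standard deviation.
corr = row_corr(face_latent, brain_latent)
match = np.diag(corr)
mismatch = corr[~np.eye(corr.shape[0], dtype=bool)]
d_prime = (match.mean() - mismatch.mean()) / np.sqrt(0.5 * (match.var() + mismatch.var()))
print(f"d' for matching vs. mismatched face/brain latents: {d_prime:.2f}")
```

On real data, the latent spaces could be probed in the same spirit as the abstract describes, for example by asking whether distances from a person's resting expression predict larger changes in the brain-side latent coordinates than equally sized distances between two intense expressions.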
