September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2021
Intracranial electroencephalography reveals neurodynamics underlying face perception during real-world vision in humans
Author Affiliations & Notes
  • Arish Alreja
    Carnegie Mellon University
  • Hao Chen
    Carnegie Mellon University
  • Marcin Leszczynski
    Columbia University
    Nathan Kline Institute
  • Michael J. Ward
    University of Pittsburgh
  • R. Mark Richardson
    Massachusetts General Hospital
    Harvard University
  • Max G'Sell
    Carnegie Mellon University
  • Louis-Philippe Morency
    Carnegie Mellon University
  • Charles Schroeder
    Columbia University
    Nathan Kline Institute
  • Avniel Ghuman
    University of Pittsburgh
  • Footnotes
    Acknowledgements  NIH 1R21EY030297, NSF 1734907, NIH R01MH107797
Journal of Vision September 2021, Vol. 21, 2846.

Arish Alreja, Hao Chen, Marcin Leszczynski, Michael J. Ward, R. Mark Richardson, Max G'Sell, Louis-Philippe Morency, Charles Schroeder, Avniel Ghuman; Intracranial electroencephalography reveals neurodynamics underlying face perception during real-world vision in humans. Journal of Vision 2021;21(9):2846.


      © ARVO (1962-2015); The Authors (2016-present)


Neural correlates of true, real-world vision remain almost entirely unknown. This gap is particularly problematic for face perception, where the passive viewing of static, unfamiliar, isolated faces, briefly presented between blank screens, bears little resemblance to the richness of real-world interpersonal interactions. The real world features context, familiar faces occupying relatively stable positions, and active sampling of information via eye movements. To address these gaps in knowledge, we simultaneously recorded intracranial electroencephalography (ECoG), eye tracking, and video of the scenes viewed by human subjects over hours of natural conversations with friends, family, and experimenters. The videos were annotated frame by frame using computer vision models to determine when subjects were fixating faces. Fixated faces were manually labeled for identity and emotional expression classification, and the face at each fixation was extracted to allow face-image reconstruction from neural activity and, inversely, reconstruction of neural activity from facial information. Neural selectivity for real-world facial identity and expression classification was distributed across occipital, temporal, parietal, and cingulate cortices, showing both overlap with, and critical spatiotemporal differences from, the responses seen when subjects viewed faces in traditional paradigms. Fixation-locked neural activity supported accurate reconstruction of the faces of different individuals, as well as the face of a single individual bearing different emotional expressions. Conversely, reconstruction of fixation-locked neural activity from face stimuli was accurate for specific frequency bands and temporal periods of the neural response, suggesting a relationship between facial features and oscillatory mechanisms during real-world interactions.
These findings demonstrate the role of neurodynamics in capturing fine-grained details of facial information during real-world visual perception, and show that combining invasive neural recordings with real-world behavior can be used to achieve a neurocomputational understanding of natural face perception.

