August 2023, Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Reconstructing the neurodynamics of face perception during real world vision in humans using intracranial EEG recordings
Author Affiliations & Notes
  • Arish Alreja
    Carnegie Mellon University
  • Michael J. Ward
    University of California, Los Angeles
  • Jhair A. Colan
    University of Pittsburgh
  • Qianli Ma
    Carnegie Mellon University
  • R. Mark Richardson
    Harvard University and Massachusetts General Hospital
  • Louis-Philippe Morency
    Carnegie Mellon University
  • Avniel S. Ghuman
    University of Pittsburgh
  • Footnotes
    Acknowledgements  NIH R01MH107797, NIH R21EY030297, NSF 1734907
Journal of Vision August 2023, Vol.23, 5487. doi:https://doi.org/10.1167/jov.23.9.5487
Arish Alreja, Michael J. Ward, Jhair A. Colan, Qianli Ma, R. Mark Richardson, Louis-Philippe Morency, Avniel S. Ghuman; Reconstructing the neurodynamics of face perception during real world vision in humans using intracranial EEG recordings. Journal of Vision 2023;23(9):5487. https://doi.org/10.1167/jov.23.9.5487.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

We use face perception to see and understand the people around us during natural behavior in the real world. Here, we take advantage of the unique opportunity afforded by intracranial recordings in two epilepsy patients to assess the neural basis of face perception during natural, unscripted interactions with friends, family, and experimenters in real-world settings. Using eye-tracking glasses, we captured what subjects saw, time-locked to the corresponding neural activity, on a fixation-by-fixation basis over hours of these interactions. We restricted the analysis to face fixations, identified using a combination of manual annotation and computer vision. After training a bidirectional Canonical Correlation Analysis (CCA) model on training fixations, we sought to reconstruct the image of the face a subject was seeing from the corresponding pattern of neural activity, and the pattern of neural activity from the corresponding face image, on a fixation-by-fixation basis in a held-out test sample of fixations. We observed significant reconstruction of both the face image subjects were seeing (out-of-sample R = 0.46; 0.26) and the neural activity (out-of-sample R = 0.29; 0.14). By assessing which features are reconstructed accurately, we find that activity in parietal, temporal, and occipital cortices around 200 ms after fixation onset is important for face processing during natural social interactions. Individual canonical components of the model enable a more granular breakdown, showing which specific face features are coded by which aspects of neural activity. We will use this approach to test norm-based and metric-code models of face representation during natural face perception. Our results lay the foundation for understanding the neural basis of visual perception during natural behavior in the real world.
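For readers unfamiliar with the approach, the sketch below illustrates bidirectional reconstruction with CCA on synthetic data. It is a minimal illustration under stated assumptions, not the authors' pipeline: the two views, array shapes, ridge read-outs from canonical scores, and all variable names (neural, faces, to_face, etc.) are placeholders for the example.

"""
Minimal sketch of bidirectional CCA reconstruction: map one view (stand-in
for neural activity) to another view (stand-in for face-image features) and
back, then score out-of-sample reconstruction on held-out "fixations".
Synthetic data replace the intracranial EEG features and face images.
"""
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "fixations": a shared latent drives both views, plus noise.
n_fix, n_latent = 2000, 10
latent = rng.standard_normal((n_fix, n_latent))
neural = latent @ rng.standard_normal((n_latent, 120)) + 0.5 * rng.standard_normal((n_fix, 120))
faces  = latent @ rng.standard_normal((n_latent, 80))  + 0.5 * rng.standard_normal((n_fix, 80))

neu_tr, neu_te, face_tr, face_te = train_test_split(
    neural, faces, test_size=0.25, random_state=0)

# Fit CCA on training fixations only.
cca = CCA(n_components=n_latent)
cca.fit(neu_tr, face_tr)
neu_scores_tr, face_scores_tr = cca.transform(neu_tr, face_tr)

# Linear read-outs from canonical scores back to raw features, one per direction.
to_face = Ridge(alpha=1.0).fit(neu_scores_tr, face_tr)   # neural scores -> face features
to_neu  = Ridge(alpha=1.0).fit(face_scores_tr, neu_tr)   # face scores   -> neural features

# Reconstruct held-out fixations in both directions.
neu_scores_te, face_scores_te = cca.transform(neu_te, face_te)
face_hat = to_face.predict(neu_scores_te)   # face features from neural activity
neu_hat  = to_neu.predict(face_scores_te)   # neural activity from face features

def mean_feature_corr(true, pred):
    """Mean Pearson correlation between true and predicted values, per feature."""
    r = [np.corrcoef(true[:, j], pred[:, j])[0, 1] for j in range(true.shape[1])]
    return float(np.mean(r))

print(f"out-of-sample R, neural -> face: {mean_feature_corr(face_te, face_hat):.2f}")
print(f"out-of-sample R, face -> neural: {mean_feature_corr(neu_te, neu_hat):.2f}")

In the study itself, the two views would be features of the fixated face image and time-resolved intracranial EEG activity, and the individual canonical components would be inspected to see which face features map onto which spatiotemporal patterns of neural activity.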
