October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract  |   October 2020
Intracranial electroencephalography reveals real world vision in humans is a contextually modulated, distributed, and active sensing process
Author Affiliations & Notes
  • Arish Alreja
    Carnegie Mellon University
  • Vasu Sharma
    Carnegie Mellon University
  • Michael Ward
    University of Pittsburgh
  • Mark Richardson
    Harvard University
    Massachusetts General Hospital
  • Max G'Sell
    Carnegie Mellon University
  • Louis-Philippe Morency
    Carnegie Mellon University
  • Avniel Ghuman
    University of Pittsburgh
  • Footnotes
    Acknowledgements  NIH 1R21EY030297, NSF 1734907, NIH R01MH107797
Journal of Vision October 2020, Vol.20, 1532. doi:https://doi.org/10.1167/jov.20.11.1532

      Arish Alreja, Vasu Sharma, Michael Ward, Mark Richardson, Max G'Sell, Louis-Philippe Morency, Avniel Ghuman; Intracranial electroencephalography reveals real world vision in humans is a contextually modulated, distributed, and active sensing process. Journal of Vision 2020;20(11):1532. https://doi.org/10.1167/jov.20.11.1532.


      © ARVO (1962-2015); The Authors (2016-present)


Neural correlates of true, real world vision are almost entirely unknown. This gap is particularly problematic for face perception, where passively viewing static, unfamiliar, and isolated faces briefly presented at fixation bears little resemblance to the richness of real world interpersonal interaction. In the real world there is context, familiar faces occupy relatively stable positions, and volitional eye movements are used to actively sample information. To begin filling this gap in knowledge, we simultaneously recorded intracranial electroencephalography (iEEG), eye tracking, and video of the scenes viewed by human subjects over hours of natural conversation with friends, family, and experimenters. By annotating these videos frame by frame with computer vision models, we labeled each fixation during natural behavior as landing on a face or on another object. Multivariate classification revealed that spatiotemporal signatures of neural activity in each subject distinguished fixations on faces from fixations on objects. Notably, a far greater portion of cortex was involved in face processing during real world vision than in a traditional experimental paradigm. Additionally, neural activity during object fixations could be used to classify whether a face was present elsewhere in the visual field, demonstrating contextual modulation of spatiotemporal patterns of neural activity. We then examined the neurodynamics of eye movement guidance by showing that what subjects would look at next could be classified: brain activity predicted not only where in space subjects would saccade, but also whether the next saccade would land on a face.
These findings demonstrate that the richness of real world visual perception is reflected in the neurodynamics of natural viewing, highlighting the power of invasive neural recordings in humans, combined with real world behavior, as a platform for studying visual neuroscience.
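The multivariate classification approach described above can be sketched as follows. This is a minimal, hypothetical illustration only: the data are simulated, and the feature dimensions, preprocessing, and classifier are assumptions, not the authors' actual pipeline. It shows the general idea of flattening fixation-locked spatiotemporal windows of neural activity into feature vectors and cross-validating a linear classifier that separates face fixations from object fixations.

```python
# Hypothetical sketch of fixation-locked multivariate classification
# (face vs. object fixations). All data here are simulated; channel counts,
# window lengths, and the classifier are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n_fixations, n_channels, n_samples = 400, 64, 50   # assumed dimensions
y = rng.integers(0, 2, n_fixations)                # 1 = face, 0 = object

# Simulated fixation-locked neural windows with a weak class-dependent
# effect injected into a few channels and time points
X = rng.standard_normal((n_fixations, n_channels, n_samples))
X[y == 1, :8, 10:30] += 0.4

# Flatten each spatiotemporal window (channels x time) into one vector
X_flat = X.reshape(n_fixations, -1)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X_flat, y, cv=5)
print(round(scores.mean(), 3))  # cross-validated accuracy, above chance (0.5)
```

The same cross-validated decoding scheme extends naturally to the other analyses the abstract describes, e.g. relabeling object fixations by whether a face was present elsewhere in the visual field, or by the category of the upcoming saccade target.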

