Volume 23, Issue 9 | Open Access
Vision Sciences Society Annual Meeting Abstract | August 2023
Free gaze: co-recording of eye and head tracking with EEG to understand unconstrained vision
Author Affiliations & Notes
  • Anna Madison
    DEVCOM, U.S. Army Research Laboratory
    U.S. Air Force Academy
  • Chloe Callahan-Flintoft
    DEVCOM, U.S. Army Research Laboratory
  • Kennedy Nevling
    U.S. Air Force Academy
  • Ashley Weidenbach
    U.S. Air Force Academy
  • Haiden Moody
    U.S. Air Force Academy
  • Anthony Ries
    DEVCOM, U.S. Army Research Laboratory
    U.S. Air Force Academy
  • Footnotes
    Acknowledgements  This research was sponsored by the U.S. Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-21-2-0187.
Journal of Vision August 2023, Vol. 23, 5935. https://doi.org/10.1167/jov.23.9.5935
© ARVO (1962-2015); The Authors (2016-present)
Abstract

Selection of visual information in three-dimensional space occurs through the coordination of eye and head movements. Yet studies using electroencephalography (EEG) have traditionally restricted these naturally occurring visual behaviors to minimize artifacts in the data recording. Experiments that co-record eye tracking and EEG are becoming more prevalent due to advances in signal processing that minimize artifacts and modeling approaches that account for overlapping neural activity and nonlinear covariates. Time-locking EEG to fixation onsets yields fixation-related potentials (FRPs), providing a neural snapshot of visual processing in a more naturalistic context. However, the impact of head movements on FRPs is still poorly understood, limiting the use of this approach in more unconstrained contexts. The present work aimed to extend the co-recording approach by allowing participants to move their heads while completing a simple search task. We used an immersive virtual environment to elicit head movements and co-recorded eye and head movements with EEG. Participants reported the orientation of Gabor patches (0.5 or 4.9 cycles/degree) appearing at eight radial spatial positions varying in eccentricity (10–50 degrees of visual angle). There were two trial types designed to produce different head movement profiles: a pursuit condition, in which a light gray disk moved at a constant speed of 20 degrees/sec and turned into a Gabor patch at its final spatial position, and an instantaneous condition, in which the same disk disappeared and a Gabor patch reappeared at its final spatial position. We used deconvolution modeling to estimate potentials time-locked to head movement onsets, fixation onsets, and Gabor onsets. The modeling disentangled a prominent FRP component, the lambda response, in addition to early (P1) and late (P300) ERP components, while minimizing artifacts introduced by eye and head movements. Our results demonstrate that the co-recording approach can be used to better understand vision in unconstrained contexts.
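For readers who want a concrete picture of the stimuli, the sketch below generates a Gabor patch at the two spatial frequencies described above (0.5 and 4.9 cycles/degree). The pixel density, patch size, envelope width, and orientation are illustrative assumptions; the abstract specifies only the spatial frequencies.

```python
import numpy as np

def gabor_patch(size_px, cycles_per_deg, px_per_deg, sigma_deg=1.0, theta_rad=0.0):
    """Sinusoidal grating windowed by a circular Gaussian envelope."""
    coords = (np.arange(size_px) - size_px // 2) / px_per_deg  # pixel grid in degrees
    x, y = np.meshgrid(coords, coords)
    xr = x * np.cos(theta_rad) + y * np.sin(theta_rad)  # rotate the grating axis
    grating = np.cos(2 * np.pi * cycles_per_deg * xr)   # carrier at the requested frequency
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_deg**2))
    return grating * envelope  # luminance contrast in [-1, 1]

# The two spatial frequencies reported in the abstract; 40 px/deg is a placeholder.
low_sf = gabor_patch(256, cycles_per_deg=0.5, px_per_deg=40)
high_sf = gabor_patch(256, cycles_per_deg=4.9, px_per_deg=40)
```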
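The deconvolution step can be illustrated with a regression-based (rERP-style) sketch: each event type gets a block of time-expanded predictors, and a single least-squares fit recovers the response to each event type with temporal overlap between events regressed out. This is a minimal illustration on synthetic data, not the authors' pipeline; published co-recording analyses typically use dedicated toolboxes (e.g., Unfold), and the sampling rate, estimation window, and event counts below are assumptions.

```python
import numpy as np

def time_expand(n_samples, onsets, window):
    """One predictor column per lag: X[t, j] = 1 if an event occurred at t - lag_j."""
    lags = np.arange(window[0], window[1])
    X = np.zeros((n_samples, lags.size))
    for onset in onsets:
        for j, lag in enumerate(lags):
            t = onset + lag
            if 0 <= t < n_samples:
                X[t, j] = 1.0
    return X

fs = 100                                   # assumed (downsampled) rate for the sketch
n_samples = 60 * fs                        # one minute of one continuous EEG channel
window = (-int(0.2 * fs), int(0.8 * fs))   # -200 ms to +800 ms around each event
n_lags = window[1] - window[0]

rng = np.random.default_rng(0)
# Placeholder latencies (sample indices) for the three modeled event types.
events = {
    "head_movement": np.sort(rng.choice(n_samples - fs, 40, replace=False)),
    "fixation": np.sort(rng.choice(n_samples - fs, 120, replace=False)),
    "gabor": np.sort(rng.choice(n_samples - fs, 40, replace=False)),
}
X = np.hstack([time_expand(n_samples, on, window) for on in events.values()])

eeg = rng.standard_normal(n_samples)       # stand-in for recorded EEG data
betas, *_ = np.linalg.lstsq(X, eeg, rcond=None)

# Each block of coefficients is the deconvolved waveform (regression ERP/FRP)
# for one event type, with overlap from the other event types removed.
rerps = {name: betas[i * n_lags:(i + 1) * n_lags] for i, name in enumerate(events)}
```

The key design choice is that all event types are fit simultaneously: because fixations, head movements, and stimulus onsets occur close together in time, averaging epochs separately would mix their responses, whereas the joint regression attributes the shared variance to the correct predictor block.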
