Abstract
Selection of visual information in three-dimensional space occurs through the coordination of eye and head movements. Yet studies using electroencephalography (EEG) have traditionally restricted these naturally occurring visual behaviors to minimize artifacts in the recording. Experiments that co-record eye tracking and EEG are becoming more prevalent due to advances in signal processing that minimize artifacts and modeling approaches that account for overlapping neural activity and nonlinear covariates. Time-locking EEG to fixation onsets yields fixation-related potentials (FRPs), providing a neural snapshot of visual processing in a more naturalistic context. However, the impact of head movements on FRPs is still poorly understood, limiting the use of this approach in more unconstrained contexts. The present work aimed to extend the co-recording approach by allowing participants to move their heads while completing a simple search task. We used an immersive virtual environment to elicit head movements and co-recorded eye and head movements with EEG. Participants reported the orientation of Gabor patches (0.5 cycles/degree or 4.9 cycles/degree) appearing at eight radial spatial positions varying in eccentricity (10–50 degrees of visual angle). Two trial types produced different head movement profiles: a pursuit condition, in which a light gray disk moved at a constant speed of 20 degrees/s and turned into a Gabor patch at its final spatial position, and an instantaneous condition, in which the same disk disappeared and a Gabor patch appeared at that final spatial position. We used deconvolution modeling to estimate potentials time-locked to head movement onsets, fixation onsets, and Gabor onsets. The modeling disentangled a prominent FRP component, the lambda response, as well as early (P1) and late (P300) ERP components, while minimizing artifacts introduced by eye and head movements. Our results demonstrate that co-recording can be used to better understand vision in unconstrained contexts.
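
To make the overlap-correction idea concrete, below is a minimal, hypothetical sketch of deconvolution modeling via a time-expanded design matrix solved with least squares, in the spirit of regression-ERP approaches. The sampling rate, event times, estimation window, and simulated responses are illustrative assumptions, not the study's actual pipeline or parameters.

```python
import numpy as np

# Illustrative sketch (not the authors' pipeline): overlap-corrected
# estimation of event-related responses for two event types (e.g.,
# fixation onsets and stimulus onsets) from one continuous channel.
srate = 100                                # Hz (assumed)
n_samples = 20 * srate                     # 20 s of simulated EEG
window = np.arange(int(0.6 * srate))       # 0-600 ms estimation window

rng = np.random.default_rng(0)
fix_onsets = np.sort(rng.choice(n_samples - len(window), 40, replace=False))
stim_onsets = np.sort(rng.choice(n_samples - len(window), 20, replace=False))

def time_expand(onsets, n_samples, window):
    # One column per latency: column k is 1 at (onset + k) samples, so
    # overlapping responses are jointly estimated rather than averaged.
    X = np.zeros((n_samples, len(window)))
    for t in onsets:
        for k, lag in enumerate(window):
            X[t + lag, k] += 1
    return X

X = np.hstack([time_expand(fix_onsets, n_samples, window),
               time_expand(stim_onsets, n_samples, window)])

# Simulate overlapping responses plus noise, then recover them with OLS.
true_fix = np.hanning(len(window))         # stand-in "lambda response"
true_stim = 0.5 * np.hanning(len(window))
y = X @ np.concatenate([true_fix, true_stim]) + rng.normal(0, 0.1, n_samples)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fix_erp, stim_erp = beta[:len(window)], beta[len(window):]
```

In this toy example, `fix_erp` and `stim_erp` approximate the simulated responses despite temporal overlap between events, which is the core advantage of deconvolution over simple time-locked averaging.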