Vision Sciences Society Annual Meeting Abstract  |   September 2024
Steady-State Visually Evoked Potentials (SSVEPs) in the presence of voluntary eye and head movements
Author Affiliations & Notes
  • Weichen Liu
    UC San Diego, Department of Computer Science and Engineering
  • Chiyuan Chang
    Beth Israel Deaconess Medical Center
  • Russell Cohen Hoffing
    US DEVCOM Army Research Laboratory
  • Steven Thurman
    US DEVCOM Army Research Laboratory
  • Cory Stevenson
    Institute for Neural Computation
  • Tzyy-Ping Jung
    Institute for Neural Computation
  • Ying Choon Wu
    Institute for Neural Computation
  • Footnotes
    Acknowledgements  The project is supported by the Army Research Laboratory (W911NF2120154)
Journal of Vision September 2024, Vol.24, 1338. doi:https://doi.org/10.1167/jov.24.10.1338
Weichen Liu, Chiyuan Chang, Russell Cohen Hoffing, Steven Thurman, Cory Stevenson, Tzyy-Ping Jung, Ying Choon Wu; Steady-State Visually Evoked Potentials (SSVEPs) in the presence of voluntary eye and head movements. Journal of Vision 2024;24(10):1338. https://doi.org/10.1167/jov.24.10.1338.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

This study explored the feasibility of detecting steady-state visually evoked potentials (SSVEPs) and sought to characterize the time course of the SSVEP response under conditions of unrestricted head and eye movement in three-dimensional space. Leveraging immersive, head-mounted virtual reality (VR), we recorded continuous eye and head movement data simultaneously with 64-channel electroencephalogram (EEG) during a virtual visual fixation task. Participants were instructed to fixate centrally until prompted to shift their gaze towards a flickering target appearing in their field of view at one of two eccentricities (near target: 15 degrees, far target: 30 degrees). This gaze shift was accomplished either through a saccade alone or through a self-directed head turn combined with a saccade. Preprocessed EEG was epoched time-locked to stimulus onset, fixation onset, or gaze onset (defined by the gaze intersection point). Canonical Correlation Analysis (CCA) was then performed within analysis windows of varying duration to identify the response frequency of each SSVEP trial. Aligning epochs to fixation onset rather than stimulus onset led to higher classification accuracy for both near and far targets (e.g., 500 ms window - stimulus: 25.9%, fixation: 51.7%; 1500 ms window - stimulus: 54.9%, fixation: 70.3%). Further, CCA scores, which reflect the correlation between the single-trial EEG and a set of reference frequencies, tended to increase even before the gaze or fixation locking point, while the head and/or the eyes were still moving. In conclusion, we demonstrate that in realistic, unconstrained viewing conditions, SSVEP signal detection can be improved through fixation- and gaze-locking. Additionally, combined head and eye movement did not fully suppress processing of the visual signal, suggesting that the “blanking effect” reported in prior studies may not be accurately characterized as a full suppression of visual processing.
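
For readers unfamiliar with CCA-based frequency identification, the sketch below illustrates the general approach described above: canonical correlations are computed between a single-trial EEG epoch and sinusoidal reference signals at each candidate stimulation frequency (plus harmonics), and the frequency yielding the highest correlation is taken as the response frequency for that trial. This is a minimal illustration of the standard technique, not the authors' implementation; the sampling rate, candidate frequencies, epoch dimensions, and number of harmonics are placeholder assumptions.

import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(epoch, freq, fs, n_harmonics=2):
    # Canonical correlation between a single-trial EEG epoch
    # (shape: n_channels x n_samples) and sine/cosine references
    # at `freq` and its harmonics.
    n_samples = epoch.shape[1]
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    Y = np.column_stack(refs)   # (n_samples, 2 * n_harmonics)
    X = epoch.T                 # (n_samples, n_channels)
    x_c, y_c = CCA(n_components=1).fit_transform(X, Y)
    return np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1]

def classify_frequency(epoch, candidate_freqs, fs):
    # Return the candidate frequency with the highest CCA score.
    scores = [cca_score(epoch, f, fs) for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]

# Example (placeholder values): a 1500 ms epoch sampled at 500 Hz,
# with hypothetical stimulation frequencies of 8, 10, and 12 Hz.
fs = 500
epoch = np.random.randn(64, int(1.5 * fs))   # stand-in for preprocessed EEG
print(classify_frequency(epoch, [8.0, 10.0, 12.0], fs))

In an analysis like the one reported here, such a classifier would be applied to epochs aligned to stimulus, fixation, or gaze onset, and accuracy compared across the locking schemes and window lengths (e.g., 500 ms vs. 1500 ms).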
