August 2016
Volume 16, Issue 12
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2016
A Standardized Methodology for Co-Registering Eye-Tracking and EEG Data
Author Affiliations
  • Joshua Zosky
    Psychology, University of Nebraska - Lincoln
  • Carly Molloy
    Psychology, University of Nebraska - Lincoln
  • Mark Mills
    Psychology, University of Nebraska - Lincoln
  • Arthur Maerlender
    Psychology, University of Nebraska - Lincoln
Journal of Vision September 2016, Vol.16, 614. doi:
Joshua Zosky, Carly Molloy, Mark Mills, Arthur Maerlender; A Standardized Methodology for Co-Registering Eye-Tracking and EEG Data. Journal of Vision 2016;16(12):614.

© ARVO (1962-2015); The Authors (2016-present)

A growing area in vision science and neuroimaging is the co-registration of eye-tracking data with EEG recordings. Until recently, accomplishing this required complicated, non-standardized procedures. This poster presents a free, open-source, operating-system-independent framework for conducting co-registration experiments. OpenSesame was used for experimental control and stimulus presentation, together with the PyGaze and PyNetstation plugins. PyGaze provides uniform control over multiple eye-tracking systems; in this study it was tested with a Tobii TX300 eye tracker. PyNetstation controlled the Net Station EEG recording software and its event markers; most other EEG systems can receive stimulus signals through a simple serial-port or parallel-port command within OpenSesame. A facial Stroop task measuring attention was used to evaluate whether this equipment supports a setup in which co-registration analyses can be conducted. Each trial presented a face showing a happy or sad expression, along with the word "happy" or "sad" somewhere onscreen. If the face appeared centrally, the word was overlaid on its forehead; if the face appeared on the left or right side of the screen, the word appeared on the opposite side. Participants pressed one button when the face showed a happy expression and a different button when it showed a sad expression. Visual fixations were detected in real time during the experiment, and their onsets and locations were flagged in the EEG data. Data were analyzed as event-related potentials (ERPs) and eye-fixation-related potentials (EFRPs) to test for the presence of an N170 component, commonly regarded as a "face-specific" component. Comparison of the ERP and EFRP methods indicated that the latter detected the N170 component more reliably, validating the utility of the proposed framework.
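
The real-time fixation detection described above can be sketched with a simple dispersion-threshold (I-DT style) rule. Everything below is an illustrative assumption rather than the authors' code: the function names, the 30-pixel and 100-ms thresholds, and the `send_marker` callback standing in for whatever PyNetstation event call or serial-port trigger the framework would actually use.

```python
# Illustrative sketch: a dispersion-threshold (I-DT style) fixation
# detector of the kind that could flag fixation onsets in a co-registered
# EEG record. Names, thresholds, and the marker callback are assumptions
# for illustration, not the authors' actual implementation.

def detect_first_fixation(samples, max_dispersion=30.0, min_duration=0.1):
    """Return (onset_time, centroid_x, centroid_y) of the first fixation.

    samples        -- list of (time_s, x_px, y_px) gaze samples, time-ordered
    max_dispersion -- max (x-range + y-range) in pixels for a fixation
    min_duration   -- minimum fixation duration in seconds
    Returns None if no window satisfies both criteria.
    """
    left = 0
    for right in range(len(samples)):
        # Shrink the window from the left until its dispersion is acceptable.
        while True:
            xs = [s[1] for s in samples[left:right + 1]]
            ys = [s[2] for s in samples[left:right + 1]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= max_dispersion:
                break
            left += 1
        # A window that is both compact and long enough counts as a fixation.
        if samples[right][0] - samples[left][0] >= min_duration:
            return (samples[left][0], sum(xs) / len(xs), sum(ys) / len(ys))
    return None


def flag_fixation_in_eeg(samples, send_marker):
    """Detect a fixation and pass its onset and location to an EEG marker
    callback. `send_marker` is a hypothetical stand-in for, e.g., a
    PyNetstation event call or a serial-port trigger write."""
    fixation = detect_first_fixation(samples)
    if fixation is not None:
        onset, cx, cy = fixation
        send_marker(label="fix+", onset=onset, x=cx, y=cy)
    return fixation
```

In an actual co-registration pipeline the detector would run on a live sample stream rather than a finished list, and the marker latency between detection and the EEG timestamp would need to be measured and corrected; this sketch only shows the windowing logic.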

Meeting abstract presented at VSS 2016
