October 2020, Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract
DeepMReye: MR-based eye tracking without eye tracking
Author Affiliations
  • Matthias Nau
    Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway
    Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
  • Markus Frey
    Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway
    Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
  • Christian F. Doeller
    Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway
    Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Journal of Vision October 2020, Vol. 20, 1014. https://doi.org/10.1167/jov.20.11.1014
Abstract

In many fMRI studies, viewing behavior is either a variable of interest or a confound. Concurrent eye tracking is expensive, often time-consuming to set up, and imposes experimental constraints (e.g., the eyes need to be open). Here, we developed DeepMReye, a deep-learning-based framework to decode viewing behavior from the MR signal of the eyeballs (see Frey et al., VSS 2020). We trained and tested the model on data from more than 250 participants acquired on six 3T MRI scanners with a variety of scanning protocols. Participants performed diverse viewing tasks, including fixation, guided saccades and smooth pursuit, visual search, free movie and picture viewing, as well as eye movements with the eyes closed. Our model successfully recovers gaze position and associated variables, such as gaze direction and amplitude, at sub-TR resolution during these tasks, without the need for eye tracking equipment. A confidence score obtained for each decoded sample further indicates the model's intrinsic certainty. Critically, our model generalizes across participants, tasks and MR scanners, suggesting that viewing behavior could be reconstructed post hoc even in existing fMRI data sets. To test this, we explore the boundary conditions and generalizability across fMRI scanning protocols by systematically varying voxel size and repetition time (TR) in a subset of participants with concurrent eye tracking. In sum, DeepMReye decodes viewing behavior post hoc from fMRI data and can be integrated into existing fMRI pipelines to study or account for gaze-related brain activity.
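To make the decoding idea concrete, the sketch below outlines a hypothetical DeepMReye-style decoder in Python/Keras: a small 3D convolutional network that maps the eyeball voxels of one fMRI volume to several gaze samples per TR (sub-TR resolution) plus a per-sample confidence estimate. All names, shapes, and architecture choices (N_SUBTR, PATCH, layer sizes) are illustrative assumptions for this abstract, not the authors' implementation.

    # Hypothetical sketch of a DeepMReye-style gaze decoder (illustrative only).
    # Input: a voxel patch covering both eyeballs from a single fMRI volume.
    # Output: several (x, y) gaze samples per TR and one confidence per sample.
    from tensorflow.keras import layers, Model

    N_SUBTR = 10          # gaze samples decoded per fMRI volume (assumed)
    PATCH = (16, 16, 16)  # eyeball voxel patch shape (assumed)

    def build_decoder():
        inp = layers.Input(shape=(*PATCH, 1))
        x = layers.Conv3D(32, 3, activation="relu", padding="same")(inp)
        x = layers.MaxPool3D(2)(x)
        x = layers.Conv3D(64, 3, activation="relu", padding="same")(x)
        x = layers.GlobalAveragePooling3D()(x)
        x = layers.Dense(128, activation="relu")(x)
        gaze = layers.Dense(N_SUBTR * 2)(x)                     # x/y per sub-TR sample
        gaze = layers.Reshape((N_SUBTR, 2), name="gaze")(gaze)
        conf = layers.Dense(N_SUBTR, activation="sigmoid", name="confidence")(x)
        return Model(inputs=inp, outputs=[gaze, conf])

    model = build_decoder()
    model.compile(optimizer="adam", loss={"gaze": "mse", "confidence": "mse"})

    # Training (sketch): eyeball patches of shape (n_TRs, 16, 16, 16, 1) paired with
    # gaze targets (n_TRs, N_SUBTR, 2) from camera-based eye tracking, which is needed
    # only for training; decoding on new data then requires no eye tracker.
    # model.fit(patches, {"gaze": gaze_targets, "confidence": conf_targets}, epochs=20)

Training such a model on participants with concurrent camera-based eye tracking, and applying it to participants or data sets without an eye tracker, mirrors the cross-participant and cross-scanner generalization described in the abstract.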
