September 2019, Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
I see what you did there: Deep learning algorithms can classify cognitive tasks from images of eye tracking data
Author Affiliations & Notes
  • Zachary J. Cole
    University of Nebraska-Lincoln
  • Karl M. Kuntzelman
    University of Nebraska-Lincoln
  • Michael D. Dodd
    University of Nebraska-Lincoln
  • Matthew R. Johnson
    University of Nebraska-Lincoln
Journal of Vision, September 2019, Vol. 19, 306b. https://doi.org/10.1167/19.10.306b
Abstract

Since Yarbus (1967) wrote the book on examining eye movements, researchers have tracked the eye movements associated with various tasks and mindsets. This line of research has consistently shown that eye movements can be indicative of the task at hand (Einhäuser et al., 2008; Yarbus, 1967). Recently, theoretically informed computational models have been able to categorize eye movements at levels significantly above chance (e.g., MacInnes et al., 2018). The purpose of the present study was to offer a neural network alternative to these previously implemented eye tracking models by categorizing eye movements with a simple deep learning model that was not guided by theoretical assumptions. In the current study, participants viewed color images of scenes (interior and exterior locations, with no images of people) while performing a search task, a memory task, or an image preference task. Each image was viewed for several seconds, during which an eye tracker sampling at 1000 Hz recorded eye movements. During data processing, each trial was converted into an image representing the full path of the eye movements over the course of the trial, with no explicit notation of saccades, fixations, dwell times, or other traditional eye tracking variables. The DeLINEATE deep learning toolbox (http://delineate.it) was used to classify these images. The classifier was a convolutional neural network that labeled each image as belonging to the search, memory, or preference task. The deep learning model classified the eye movement images with accuracy well above chance, commensurate with contemporary results obtained using explicit cognitive models. This suggests that deep learning models can extract a surprising amount of useful information from nearly raw eye tracking data with minimal human guidance as to which features of the data are relevant.
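The sketch below illustrates, in plain Python, the two stages the abstract describes: rasterizing a trial's raw gaze samples into a scanpath image, and classifying those images with a small convolutional network. It is not the authors' DeLINEATE pipeline; it uses Keras for illustration, and all names, image sizes, and architectural choices are assumptions made for the example.

# Minimal sketch (assumed names and architecture; not the authors' pipeline).
import numpy as np
from PIL import Image, ImageDraw
from tensorflow import keras
from tensorflow.keras import layers

def scanpath_to_image(gaze_xy, screen_size=(1024, 768), out_size=(128, 128)):
    """Draw the full eye-movement path of one trial as a grayscale image.

    gaze_xy: (T, 2) array of raw gaze coordinates in screen pixels (e.g.,
    1000 Hz samples). No fixations, saccades, or dwell times are computed;
    consecutive samples are simply connected by line segments.
    """
    img = Image.new("L", screen_size, color=0)
    draw = ImageDraw.Draw(img)
    pts = [tuple(p) for p in np.asarray(gaze_xy, dtype=float)]
    draw.line(pts, fill=255, width=3)
    return np.asarray(img.resize(out_size)) / 255.0  # (H, W), values in [0, 1]

def build_classifier(input_shape=(128, 128, 1), n_tasks=3):
    """A small CNN with a 3-way softmax over search / memory / preference."""
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_tasks, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Hypothetical usage: X is an (n_trials, 128, 128, 1) stack of scanpath images,
# y is an (n_trials,) array of integer task labels
# (0 = search, 1 = memory, 2 = preference).
# model = build_classifier()
# model.fit(X, y, epochs=20, validation_split=0.2)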

Acknowledgement: Supported by NSF/EPSCoR grant #1632849 to MRJ, MD, and colleagues. 