Abstract
Since Yarbus (1967) published his seminal book on eye movements, researchers have tracked the eye movements associated with various tasks and mindsets. This line of research has consistently shown that eye movements can be indicative of the task at hand (Einhäuser et al., 2008; Yarbus, 1967). Recently, theoretically informed computational models have been able to categorize eye movements at levels significantly above chance (e.g., MacInnes et al., 2018). The purpose of the present study was to design a neural network alternative to these previously implemented models: a simple deep learning classifier that categorizes eye movements without guidance from theoretical assumptions. Participants were presented with color images of scenes (interior and exterior locations, with no images of people) while performing a search task, a memory task, or an image preference task. Each image was viewed for several seconds while an eye tracker sampling at 1000 Hz recorded eye movements. During data processing, each trial was converted into an image representing the full path of the eye movements throughout the trial, with no explicit notation of saccades, fixations, dwell times, or other traditional eye-tracking variables. The DeLINEATE deep learning toolbox (http://delineate.it) was then used to classify these images: a convolutional neural network assigned each image to the search, memory, or preference task. The deep learning model classified the eye movement images with accuracy well above chance, commensurate with contemporary results from explicit cognitive models. This suggests that deep learning models can extract a surprising amount of useful information from nearly raw eye-tracking data, with minimal human guidance as to which features in the data are relevant.
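To make the trial-to-image conversion concrete, the following Python sketch renders raw gaze samples as a single connected path on a blank canvas, with no parsing of saccades, fixations, or dwell times. This is an illustrative assumption of what such preprocessing might look like, not the study's actual code; the function name, screen and output dimensions, and the simulated gaze data are all hypothetical.

# Hypothetical sketch of the trial-to-image step: raw gaze samples
# (one x, y pair per 1-ms sample at 1000 Hz) are drawn as one connected
# path, with no explicit saccade/fixation parsing. Names and sizes are
# illustrative assumptions, not the authors' preprocessing code.
import numpy as np
from PIL import Image, ImageDraw

def gaze_to_image(x, y, screen_size=(1024, 768), out_size=(128, 128)):
    """Render one trial's gaze samples as a grayscale scanpath image."""
    img = Image.new("L", screen_size, color=0)            # black background
    draw = ImageDraw.Draw(img)
    points = [(int(px), int(py)) for px, py in zip(x, y)]  # one point per sample
    draw.line(points, fill=255, width=2)                   # full path, no event parsing
    return img.resize(out_size)                            # downsample for the classifier

# Example with simulated data: 3 seconds of random-walk "gaze" at 1000 Hz
x = np.clip(np.cumsum(np.random.randn(3000)) + 512, 0, 1023)
y = np.clip(np.cumsum(np.random.randn(3000)) + 384, 0, 767)
scanpath = gaze_to_image(x, y)
scanpath.save("trial_scanpath.png")

Rendering the path as a plain image is what allows the convolutional layers to discover relevant spatial features on their own, which is the point of omitting hand-coded eye-tracking variables.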
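The classifier itself was specified through the DeLINEATE toolbox; the sketch below is a generic Keras analogue of the kind of small convolutional network described, not the DeLINEATE API, and every layer size and hyperparameter here is an assumption for illustration.

# Generic Keras sketch of a three-way convolutional classifier of the
# kind described (search / memory / preference). Architecture details
# are illustrative assumptions; the study's model was built with the
# DeLINEATE toolbox (http://delineate.it), not this code.
from tensorflow.keras import layers, models

def build_scanpath_cnn(input_shape=(128, 128, 1), n_classes=3):
    model = models.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),  # one unit per task
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",  # integer task labels
                  metrics=["accuracy"])
    return model

model = build_scanpath_cnn()
# Usage with hypothetical arrays of scanpath images and integer task labels:
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)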
Acknowledgement: Supported by NSF/EPSCoR grant #1632849 to MRJ, MD, and colleagues.