The early eye-tracking studies of Yarbus (1965) provided descriptive evidence that an observer's task influences patterns of eye movements, raising the tantalizing prospect that an observer's intentions could be inferred from their saccade behaviour. If task influences eye movements in any systematic fashion, then it should be possible to determine an observer's task from eye movement attributes alone. Recent attempts to build such a classifier, however, have not determined task above chance levels. Our approach is to train a classifier on eye movement data that has previously been shown to differ across tasks: Dodd et al. (2009) observed Inhibition of Return (IOR) in a search task but not in viewing, preference or memorization tasks. More than 17,000 saccades from 53 participants and 67 photographic images were used to train a Naive Bayes classifier on saccadic attributes such as latency, duration, peak velocity, amplitude and relative amplitude of sequential saccades. Ten-fold cross-validation was used to make full use of the data while preventing overfitting. The first classifier was trained and tested on the mean saccadic attributes of each trial. It was 45% accurate overall (chance is 25%), with the highest accuracy for viewing (70%) and search (61%), followed by memorization (33%) and preference (25%). A second classifier was trained and tested on individual saccades. Even given just a single saccade, the algorithm determined the task that produced it above chance, with an overall accuracy of 31%. This classifier was more accurate for the search (61%) and viewing (44%) tasks, but it was also biased toward predicting these tasks, resulting in below-chance performance on preference (10%) and memorization (10%). We conclude that some tasks are discernible from patterns of saccades.
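As a rough illustration of the two analyses described above, here is a minimal Python sketch using scikit-learn's GaussianNB with ten-fold cross-validation, once on per-trial mean attributes and once on individual saccades. The synthetic data, the task-dependent feature shift, and the trial structure are stand-ins for illustration only; they are not the study's data, feature scaling, or actual pipeline.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

FEATURES = ["latency", "duration", "peak_velocity",
            "amplitude", "relative_amplitude"]
TASKS = np.array(["search", "viewing", "memorization", "preference"])

# --- Synthetic stand-in data (illustrative only, not the study's data) ---
rng = np.random.default_rng(0)
n_trials, sacc_per_trial = 200, 20
trial_task = rng.integers(0, len(TASKS), size=n_trials)   # one task per trial
trial_ids = np.repeat(np.arange(n_trials), sacc_per_trial)
y = TASKS[trial_task[trial_ids]]                          # task label per saccade
# Shift feature means by task so the classifier has some signal to find.
X = rng.normal(loc=trial_task[trial_ids, None] * 0.5,
               size=(trial_ids.size, len(FEATURES)))

# First classifier: mean saccadic attributes per trial, ten-fold CV.
X_trial = np.vstack([X[trial_ids == t].mean(axis=0) for t in range(n_trials)])
y_trial = TASKS[trial_task]
trial_acc = cross_val_score(GaussianNB(), X_trial, y_trial, cv=10).mean()

# Second classifier: every individual saccade is its own instance.
sacc_acc = cross_val_score(GaussianNB(), X, y, cv=10).mean()

print(f"trial-level accuracy:    {trial_acc:.2f}  (chance = 0.25)")
print(f"single-saccade accuracy: {sacc_acc:.2f}  (chance = 0.25)")
```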
Meeting abstract presented at VSS 2014