August 2014, Volume 14, Issue 10
Vision Sciences Society Annual Meeting Abstract  |  August 2014
Task classification from eye movement patterns
Author Affiliations
  • Joseph MacInnes
    Faculty of Psychology - Higher School of Economics, Moscow
  • Amelia Hunt
    School of Psychology - University of Aberdeen
  • Michael Dodd
    Psychology - University of Nebraska-Lincoln
Journal of Vision August 2014, Vol.14, 999. doi:10.1167/14.10.999
Abstract
The early eye-tracking studies of Yarbus (1965) provided descriptive evidence that an observer's task influences patterns of eye movements, raising the tantalizing prospect that an observer's intentions could be inferred from their saccade behaviour. If task influences eye movements in any systematic fashion, then it should be possible to determine an observer's task from eye movement attributes alone. Recent attempts at such a classifier, however, have not been able to determine task above chance levels. Our approach is to train a classifier on eye movement data that have previously been shown to differ across tasks: Dodd et al. (2009) observed Inhibition of Return (IOR) in a search task but not in viewing, preference, or memorization tasks. More than 17,000 saccades from 53 participants and 67 photographic images were used to train a Naive Bayes classifier on saccadic attributes such as latency, duration, peak velocity, amplitude, and the relative amplitude of sequential saccades. Ten-fold cross-validation was used to make full use of the data while preventing overtraining. The first classifier was trained with, and then used to classify from, the mean saccadic attributes for a given trial. This classifier was 45% accurate overall (chance is 25%), with highest accuracy for viewing (70%) and search (61%), followed by memorization (33%) and preference (25%). A second classifier was trained and tested on individual saccades. Even given just a single saccade, the algorithm was above chance at determining the task that produced it, with an overall accuracy of 31%. This classifier was more accurate for the search (61%) and viewing (44%) tasks, but there was also a bias toward predicting these tasks, resulting in below-chance performance on preference (10%) and memorization (10%). We conclude that some tasks are discernible from patterns of saccades.
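The pipeline described above (saccadic features per observation, a Naive Bayes classifier, ten-fold cross-validation) can be sketched as follows. This is not the authors' code: the feature values are synthetic, the four task labels are taken from the abstract, and a from-scratch Gaussian Naive Bayes stands in for whatever implementation was actually used.

```python
# Minimal sketch (illustrative, not the authors' implementation):
# Gaussian Naive Bayes over saccadic features, scored with k-fold
# cross-validation. All data below are synthetic.
import math
import random
from collections import defaultdict

TASKS = ["search", "viewing", "memorization", "preference"]

def fit(rows):
    """rows: list of (task, feature_vector). Returns, per task,
    a log prior and per-feature Gaussian (mean, variance)."""
    grouped = defaultdict(list)
    for task, x in rows:
        grouped[task].append(x)
    model = {}
    for task, xs in grouped.items():
        n = len(xs)
        means = [sum(col) / n for col in zip(*xs)]
        varis = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                 for col, m in zip(zip(*xs), means)]
        model[task] = (math.log(n / len(rows)), means, varis)
    return model

def predict(model, x):
    """Pick the task maximizing log prior + Gaussian log-likelihoods."""
    def score(params):
        log_prior, means, varis = params
        return log_prior + sum(
            -0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
            for xi, m, v in zip(x, means, varis))
    return max(model, key=lambda t: score(model[t]))

def cross_validate(rows, k=10):
    """k-fold cross-validation accuracy (k=10 as in the abstract)."""
    rows = rows[:]
    random.Random(0).shuffle(rows)
    folds = [rows[i::k] for i in range(k)]
    correct = total = 0
    for i in range(k):
        train = [r for j, f in enumerate(folds) if j != i for r in f]
        model = fit(train)
        for task, x in folds[i]:
            correct += predict(model, x) == task
            total += 1
    return correct / total

def make_rows(n=400):
    """Synthetic demo data: tasks differ slightly in mean latency,
    peak velocity, and amplitude (made-up effect sizes)."""
    rng = random.Random(1)
    rows = []
    for i, task in enumerate(TASKS):
        for _ in range(n // len(TASKS)):
            x = [rng.gauss(200 + 15 * i, 30),   # latency (ms)
                 rng.gauss(45, 8),              # duration (ms)
                 rng.gauss(300 + 10 * i, 60),   # peak velocity (deg/s)
                 rng.gauss(5 + 0.5 * i, 1.5)]   # amplitude (deg)
            rows.append((task, x))
    return rows

accuracy = cross_validate(make_rows())
```

With four equally frequent tasks, chance is 25%; on the synthetic data above the classifier lands above that, mirroring the above-chance (though far from perfect) accuracies reported in the abstract. The per-trial variant would simply average each trial's saccade vectors before calling `fit`/`predict`.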

 

Meeting abstract presented at VSS 2014

 