Vision Sciences Society Annual Meeting Abstract  |   August 2014
Predicting observers' task from their scanpaths on natural scenes
Author Affiliations
  • Ali Borji
    Department of Computer Science, University of Southern California
  • Laurent Itti
    Department of Computer Science, University of Southern California
Journal of Vision August 2014, Vol.14, 762. doi:https://doi.org/10.1167/14.10.762

      Ali Borji, Laurent Itti; Predicting observers' task from their scanpaths on natural scenes. Journal of Vision 2014;14(10):762. https://doi.org/10.1167/14.10.762.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

In an influential yet anecdotal illustration, Yarbus suggested that human eye movement patterns are modulated top-down by different task demands. While the hypothesis that it is possible to decode the observer's task from eye movements has received some support (e.g., Iqbal & Bailey (2004); Henderson et al. (2013)), Greene et al. (2012) recently argued against it by reporting a failure. Here, we perform a more systematic investigation of this problem and probe a larger number of experimental factors than previous work. Our main goal is to determine how informative eye movements are for task and mental-state decoding. We argue that task decoding accuracy depends critically on three factors: 1) spatial image information, 2) classification technique, and 3) image and observer idiosyncrasies. We perform two experiments. In the first, we re-analyze the data of Greene et al. (2012) and, contrary to their conclusion, find that it is possible to decode the observer's task from aggregate eye movement features slightly but significantly above chance, using a Boosting classifier (34.12% correct vs. 25% chance level; binomial test, p = 1.07e-04). In the second, we repeat and extend Yarbus' original experiment by collecting eye movements of 21 observers viewing 15 natural scenes (including Yarbus' scene) under Yarbus' seven questions. We show that task decoding is again possible, moderately but significantly above chance (24.21% vs. 14.29% chance level; binomial test, p = 2.45e-06). We also find that task decoding accuracy is higher for images that contain more information relevant to answering the questions than for other images. We thus conclude that Yarbus' idea is supported by our data and continues to inspire computational and experimental eye movement research. From a broader perspective, we discuss techniques, features, limitations, societal and technological impacts, and future directions in task decoding from eye movements.
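The significance levels above come from one-sided binomial tests of decoding accuracy against chance. As a minimal sketch of such a test (standard-library Python only; the counts of 85 correct out of 249 trials are hypothetical placeholders chosen to approximate the reported 34.12%, not the study's actual trial counts):

```python
from math import comb

def binom_test_greater(k, n, p0):
    """Exact one-sided binomial test: P(X >= k) for X ~ Binomial(n, p0).

    k  -- number of correctly decoded trials (hypothetical here)
    n  -- total number of trials (hypothetical here)
    p0 -- chance-level accuracy (0.25 for 4 tasks; 1/7 for Yarbus' 7 questions)
    """
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))

# Hypothetical example: 85 of 249 trials correct (~34.1%) vs. 25% chance.
p = binom_test_greater(85, 249, 0.25)
print(f"accuracy = {85/249:.2%}, p = {p:.2e}")
```

With SciPy available, `scipy.stats.binomtest(k, n, p0, alternative='greater')` performs the same test.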

Meeting abstract presented at VSS 2014
