Vision Sciences Society Annual Meeting Abstract | September 2011
Reconsidering Yarbus: Pattern classification cannot predict observer's task from scan paths
Author Affiliations
  • Michelle R. Greene
    Brigham and Women's Hospital, USA
    Harvard Medical School, USA
  • Tommy Liu
    Brigham and Women's Hospital, USA
  • Jeremy M. Wolfe
    Brigham and Women's Hospital, USA
    Harvard Medical School, USA
Journal of Vision September 2011, Vol.11, 498. doi:https://doi.org/10.1167/11.11.498
      Michelle R. Greene, Tommy Liu, Jeremy M. Wolfe; Reconsidering Yarbus: Pattern classification cannot predict observer's task from scan paths. Journal of Vision 2011;11(11):498. https://doi.org/10.1167/11.11.498.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

A very familiar illustration by Yarbus shows dramatic differences in eye movement patterns when a viewer performs different tasks while viewing the same image. The scan paths seem to be windows into the observer's mind, but can the intentions of the viewer really be read from the pattern of eye movements? Yarbus' data are qualitative, drawn from only one observer examining one image. We showed 64 photographs to 16 observers for 10 s each while eye movements were recorded and observers performed one of four tasks: memorize the picture, determine the decade in which the image was taken, determine how well people in the picture knew each other, or determine the wealth of the people. Eye movement data were fed into a pattern classifier to predict task, using leave-one-out training and testing. Although the classifier could identify the image at above-chance levels (23% correct, chance = 1.6%), as well as the observer (31% correct, chance = 6.3%), it was at chance at identifying the task (28% correct, chance = 25%, p = .49). Perhaps the earliest eye movements held the predictive information? Examining only the first 2 and 5 seconds also yielded chance classification performance (27.4% and 27.7% correct). Perhaps more viewing would be more predictive? Sixteen additional participants viewed the images for 60 s each; classifier performance remained at chance (28.1%). So, perhaps we built a bad classifier. Surely human observers can use other observers' patterns of eye movements to predict task? Twenty observers viewed another observer's eye movements, plotted over the image, and tried to predict which task was being performed. Participants were at chance with either 10-s (27.4%) or 60-s (27.5%) scan paths. The famous Yarbus figure may be compelling but, sadly, its message appears to be misleading. Neither humans nor machines can use scan paths to identify the task of the viewer.
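The abstract describes the classification analysis only at a high level. Below is a minimal, illustrative sketch of leave-one-out task classification in Python with scikit-learn. The classifier type (a linear SVM), the number of summary features per scan path, and the random placeholder data are assumptions for illustration only, not the authors' actual features or pipeline.

    # Sketch of leave-one-out task classification from scan-path features.
    # Feature matrix and labels are synthetic placeholders; the original
    # eye-movement features are not specified in the abstract.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    n_trials = 64 * 16      # 64 images x 16 observers (one scan path per trial)
    n_features = 10         # hypothetical scan-path summary features per trial
    X = rng.normal(size=(n_trials, n_features))   # placeholder feature matrix
    y = rng.integers(0, 4, size=n_trials)         # task label: one of 4 tasks

    # Leave-one-out: train on all trials but one, test on the held-out trial,
    # and average accuracy over all folds.
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=LeaveOneOut())

    print(f"Leave-one-out accuracy: {scores.mean():.1%} (chance = 25%)")

With random features, as here, accuracy should hover near the 25% chance level, which is the pattern the abstract reports for task classification from real scan paths.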

Funding: F32EY019815-01.