September 2015
Volume 15, Issue 12
Vision Sciences Society Annual Meeting Abstract | September 2015
Human classifier: Can individuals deduce which task someone was performing based solely on their eye movements?
Author Affiliations
  • Michael Dodd
    Department of Psychology, University of Nebraska - Lincoln
  • Brett Bahle
    Department of Psychology, University of Nebraska - Lincoln
  • Mark Mills
    Department of Psychology, University of Nebraska - Lincoln
  • Monica Rosen
    Department of Psychology, University of Nebraska - Lincoln
  • Gerald McDonnell
    Department of Psychology, University of Nebraska - Lincoln
  • Joseph MacInnes
    Faculty of Psychology - Higher School of Economics, Moscow
Journal of Vision September 2015, Vol.15, 1268. doi:10.1167/15.12.1268
Abstract

Numerous investigations have revealed that eye movements and fixation locations differ as a function of how an individual is processing a scene (e.g., Castelhano et al., 2009; Dodd et al., 2009; Land & Hayhoe, 2001; Mills et al., 2011; Yarbus, 1967). As a consequence, a common question of interest is whether a participant's task can be predicted from their observed pattern of eye movements. To that end, a number of researchers have taken a cue from the machine learning literature and attempted to train a task-set classifier, with varying degrees of success (e.g., Borji & Itti, 2014; Greene et al., 2012; Henderson et al., 2013). In the present experiments, we examine whether human participants can effectively classify task set based on the eye movements of others, and how their performance compares to that of a recent classifier (MacInnes et al., VSS, 2014). Participants view either a) the fixation locations and fixation durations of an individual scanning a scene (independent of scanpath), b) the scanpaths of an individual scanning a scene (independent of fixation durations), or c) video playback of eye movement locations (preserving scanpath and duration information), as they attempt to determine whether the original task was visual search, memorization, or pleasantness rating. Moreover, eye movement information is provided to participants under conditions in which the original scene is present or with the original scene absent. Participants perform this task at above-chance levels, though there is considerable variability in performance as a function of task type (e.g., better at identifying search), whether the scene is present or absent, and whether the original task was performed under blocked or mixed (task-switching) conditions. These results provide important insight into our understanding of scene perception and the manner in which individuals interpret the eye movements of others.
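The machine-learning approach the abstract references (training a classifier to recover task set from eye-movement statistics) can be illustrated with a minimal sketch. Everything below is hypothetical: the feature means, the two features (mean fixation duration and fixation count), and the nearest-centroid classifier are illustrative stand-ins, not the features or models used in the cited studies.

```python
import random

random.seed(0)

TASKS = ["search", "memorization", "rating"]
# Hypothetical per-task feature means (mean fixation duration in ms,
# fixations per trial); illustrative values only, not from the study.
MEANS = {"search": (180, 40), "memorization": (260, 28), "rating": (330, 20)}

def simulate(task, n):
    """Generate n synthetic trials of (duration, fixation-count) features."""
    mu_d, mu_f = MEANS[task]
    return [(random.gauss(mu_d, 25), random.gauss(mu_f, 4)) for _ in range(n)]

# "Train" by computing each task's feature centroid from simulated trials.
train = {t: simulate(t, 50) for t in TASKS}
centroids = {t: tuple(sum(x[i] for x in xs) / len(xs) for i in (0, 1))
             for t, xs in train.items()}

def classify(x):
    # Assign the task whose centroid is nearest in raw Euclidean distance.
    return min(TASKS, key=lambda t: sum((a - b) ** 2
                                        for a, b in zip(x, centroids[t])))

# Evaluate on fresh simulated trials; chance is 1/3 for three tasks.
test = [(t, x) for t in TASKS for x in simulate(t, 20)]
acc = sum(classify(x) == t for t, x in test) / len(test)
print(f"accuracy = {acc:.2f}")
```

With well-separated synthetic means the sketch classifies far above the 1/3 chance rate; real eye-movement features overlap much more, which is why the cited classifiers succeed only to varying degrees.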

Meeting abstract presented at VSS 2015
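The abstract's "above-chance" claim for a three-alternative judgment (chance = 1/3) can be verified with a one-sided exact binomial test. The trial counts below are hypothetical, chosen only to show the computation; the study's actual counts are not reported in the abstract.

```python
from math import comb

def binomial_p_above_chance(correct, trials, chance=1/3):
    """One-sided exact binomial test: P(X >= correct) under H0 p = chance."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical example: 45 correct task classifications out of 90 trials.
p = binomial_p_above_chance(45, 90)
print(f"p = {p:.4f}")  # a small p indicates accuracy reliably above 1/3
```

For 45/90 correct the exact tail probability is well below .01, so accuracy of that size would indeed be reliably above the 1/3 chance rate.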
