Vision Sciences Society Annual Meeting Abstract  |   June 2006
A computational model of task-dependent influences on eye position
Author Affiliations
  • Robert J. Peters
    Department of Computer Science, University of Southern California
  • Laurent Itti
    Department of Computer Science, University of Southern California, and Department of Neuroscience, University of Southern California
Journal of Vision June 2006, Vol. 6, 512. https://doi.org/10.1167/6.6.512
Abstract

Computational models of bottom-up attention can perform significantly above chance at predicting eye positions of observers passively viewing static or dynamic images. Nevertheless, much of eye movement behavior (50% or more) is unexplained by purely bottom-up models, and is typically attributed to top-down, inter-observer, task-dependent, or random effects. Other studies have qualitatively described such high-level effects in naturalistic interactive visual tasks (e.g., while driving, how often do people fixate other cars, or the road, or road signs); yet the underlying neurocomputational mechanisms remain unknown. Here, we introduce a simple computational model of task-related eye position influences in interactive tasks with dynamic stimuli. This model extracts from each frame a low-dimensional feature signature (“gist”), compares that with a database of eye position training frames, and produces an eye position prediction map. Finally, we combine the task-related and bottom-up maps, and compare the combined maps with observers' actual eye positions across 216,000 frames from 24 five-minute videogame-playing sessions. For analysis, each map was rescaled to have zero mean and unit standard deviation; the average predicted value at human eye position locations was 0.61 ± 0.1 in the purely bottom-up maps, and 2.42 ± 0.07 in the combined maps (a random model gives an average value of 0). Thus, this straightforward model of task-dependent effects offers some of the strongest purely computational general-purpose eye movement predictions to date, going significantly beyond what is explained by purely bottom-up effects; yet it relies only on simple visual features, without requiring any high-level semantic scene description.
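Although the abstract is brief, the pipeline it describes (gist extraction, lookup against a database of eye position training frames, map combination, and zero-mean/unit-variance scoring) can be sketched in a few lines of code. The sketch below is illustrative only, not the authors' implementation: the block-average gist feature, the Euclidean nearest-neighbor lookup, and the pointwise-product combination rule are assumptions, since the abstract does not specify them; only the rescaling to zero mean and unit standard deviation and the evaluation at observed eye positions come directly from the text.

    import numpy as np

    def gist_signature(frame, grid=(4, 4)):
        # Collapse a grayscale frame to a low-dimensional "gist" vector by
        # block-averaging over a coarse grid (a stand-in for the abstract's
        # unspecified low-dimensional feature signature).
        h, w = frame.shape
        gh, gw = grid
        frame = frame[:h - h % gh, :w - w % gw]  # crop to a grid multiple
        blocks = frame.reshape(gh, frame.shape[0] // gh,
                               gw, frame.shape[1] // gw)
        return blocks.mean(axis=(1, 3)).ravel()

    def predict_task_map(frame, train_gists, train_eye_maps):
        # Nearest-neighbor lookup (assumed): return the eye-position map of
        # the training frame whose gist is closest to the query frame's gist.
        dists = np.linalg.norm(train_gists - gist_signature(frame), axis=1)
        return train_eye_maps[np.argmin(dists)]

    def zscore(m):
        # Rescale a map to zero mean and unit standard deviation, as in the
        # abstract's analysis (so a random model scores 0 on average).
        return (m - m.mean()) / m.std()

    def combined_map(bottom_up_map, task_map):
        # The abstract does not give the combination rule; a pointwise
        # product of the two normalized maps is assumed here.
        return zscore(bottom_up_map) * zscore(task_map)

    def score_at_fixation(pred_map, eye_x, eye_y):
        # Normalized map value at the observed eye position; the reported
        # 0.61 (bottom-up) and 2.42 (combined) are means of this quantity.
        return zscore(pred_map)[eye_y, eye_x]

Averaging score_at_fixation over all frames of a session then yields figures comparable to those quoted in the abstract; any plausible gist feature and similarity measure could be substituted without changing the evaluation.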

Peters, R. J., & Itti, L. (2006). A computational model of task-dependent influences on eye position [Abstract]. Journal of Vision, 6(6):512, 512a, http://journalofvision.org/6/6/512/, doi:10.1167/6.6.512.
Footnotes
Supported by an Intelligence Community (IC) Postdoctoral Research Fellowship.