Vision Sciences Society Annual Meeting Abstract  |   June 2007
Integrating low-level and high-level visual influences on eye movement behavior
Author Affiliations
  • Robert Peters
    Computer Science, University of Southern California
  • Laurent Itti
    Computer Science, Neuroscience, Psychology, University of Southern California
Journal of Vision June 2007, Vol.7, 949. https://doi.org/10.1167/7.9.949
Abstract

We propose a comprehensive computational framework unifying previous qualitative studies of high-level cognitive influences on eye movements with quantitative studies demonstrating the influence of low-level factors such as saliency. In this framework, a top-level “governor” uses high-level task information to determine how best to combine low-level saliency and gist-based task-relevance maps into a single eye-movement priority map.
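A minimal sketch of how such a governor could blend the two maps, assuming a simple convex weighting; the function names and the particular weight values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def priority_map(saliency, task_relevance, task_weight):
    """Blend a low-level saliency map and a task-relevance map into one priority map.

    task_weight in [0, 1]: 0 = purely stimulus-driven (saliency only),
    1 = purely task-driven (task relevance only).
    """
    assert saliency.shape == task_relevance.shape
    return (1.0 - task_weight) * saliency + task_weight * task_relevance

def governor_weight(tracking_target):
    # Toy governor: lean on task relevance while tracking an enemy plane,
    # shift toward raw saliency after a missile is fired.
    return 0.8 if tracking_target else 0.2

salmap = np.random.rand(30, 40)   # low-level saliency map for one frame
relmap = np.random.rand(30, 40)   # gist-based task-relevance map
prio = priority_map(salmap, relmap, governor_weight(tracking_target=True))
```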

We recorded the eye movements of six trained subjects playing 18 different sessions of first-person perspective video games (car racing, flight combat, and “first-person shooter”) and simultaneously recorded the game's video frames, giving about 18 hours of recording for ∼15,000,000 eye movement samples (240Hz) and ∼1.1TB of video data (640×480 pixels at 30Hz). We then computed measures of how well the individual saliency and task-relevance maps predicted observers' eye positions in each frame, and probed for the role of the governor in relationships between high-level task information — such as altimeter and damage meter settings, or the presence/absence of a target — and the predictive strength of the maps.
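One way to quantify per-frame prediction is to z-score the map value at the observed gaze location against the rest of the map (a normalized-scanpath-saliency-style measure); this particular metric is an assumption for illustration, not necessarily the one used in the study:

```python
import numpy as np

def map_prediction_score(prediction_map, gaze_rc):
    """Z-scored map value at the observed gaze location (row, col)."""
    mu, sigma = prediction_map.mean(), prediction_map.std()
    if sigma == 0.0:
        return 0.0
    r, c = gaze_rc
    return (prediction_map[r, c] - mu) / sigma

# Example: score one frame's saliency map against the observer's eye position.
salmap = np.random.rand(30, 40)
score = map_prediction_score(salmap, gaze_rc=(12, 25))
```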

One such relationship occurred in the flight combat game. In this game, observers are actively task-driven while tracking enemy planes, ignoring bottom-up saliency in favor of task-relevant items like the radar screen; then, after firing a missile, observers become passively stimulus-driven while awaiting visual confirmation of the missile hit. We confirmed this quantitatively by analyzing the correspondence between saliency and eye position across a window of ±10s relative to the time of 328 such missile hits. Around −200ms (before the hit), the saliency correspondence begins to rise, peaking at +100ms (after the hit) at 10-fold above the pre-hit baseline; it is then suppressed below baseline at +800ms and rebounds to baseline by +2000ms. Thus, one mechanism by which high-level cognitive information can influence eye movements is the dynamic weighting of competing saliency and task-relevance maps.
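A sketch of the event-locked analysis, assuming a per-frame saliency-at-gaze score is averaged in a ±10s window around each hit and expressed as fold-change over a pre-hit baseline; the variable names and the exact baseline window are illustrative assumptions:

```python
import numpy as np

def event_locked_average(frame_times, scores, event_times,
                         window=10.0, fps=30.0, baseline_end=-1.0):
    """Average a per-frame score in a +/-window (s) around each event,
    then express it as fold-change over the pre-event baseline."""
    n_lags = int(2 * window * fps) + 1
    lags = np.linspace(-window, window, n_lags)
    traces = [np.interp(t + lags, frame_times, scores) for t in event_times]
    avg = np.mean(traces, axis=0)
    baseline = avg[lags < baseline_end].mean()
    return lags, avg / baseline

# Example with synthetic data: 60 s of frames at 30 Hz, three "missile hits".
frame_times = np.arange(0.0, 60.0, 1.0 / 30.0)
scores = np.random.rand(frame_times.size)        # saliency-at-gaze per frame
lags, fold_change = event_locked_average(frame_times, scores,
                                         event_times=[20.0, 35.0, 50.0])
```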

Peters, R., & Itti, L. (2007). Integrating low-level and high-level visual influences on eye movement behavior [Abstract]. Journal of Vision, 7(9):949, 949a, http://journalofvision.org/7/9/949/, doi:10.1167/7.9.949.
Footnotes
 Intelligence Community (IC) postdoctoral fellowship program