Vision Sciences Society Annual Meeting Abstract  |   September 2011
Task-dependent gaze priorities in driving
Author Affiliations
  • Brian Sullivan
    Center for Perceptual Systems, University of Texas at Austin, USA
  • Constantin Rothkopf
    FIAS, University of Frankfurt, Germany
  • Mary Hayhoe
    Center for Perceptual Systems, University of Texas at Austin, USA
  • Dana Ballard
    Center for Perceptual Systems, University of Texas at Austin, USA
Journal of Vision September 2011, Vol. 11, 932.
Brian Sullivan, Constantin Rothkopf, Mary Hayhoe, Dana Ballard; Task-dependent gaze priorities in driving. Journal of Vision 2011;11(11):932.

During complex visuo-motor tasks, gaze behavior is largely controlled by the current task [1]. However, it is unclear how gaze allocation is accomplished when the observer faces multiple task demands at once, especially in dynamic and unpredictable environments such as driving. This has been called the "scheduling problem". One simple solution is to actively search the visual scene for potentially important information (e.g., pedestrians, other cars, lane centering) at regular intervals (a round-robin strategy). Another is to weight search frequency by the importance of the sub-tasks, as in reward-based models of gaze allocation. Alternatively, participants might rely on attentional capture by salient events. To address this question, we manipulated the task demands of participants who drove in a virtual environment containing other cars, pedestrians, and urban scenery. Driving involves several concurrent sub-goals, including avoiding other cars, following a lead car, and avoiding pedestrians. We recorded and analyzed gaze behavior for these driving sequences. These data provide the ground truth for gaze allocation as well as unique traversals through the environment for use in simulations. We simulate gaze allocation as a high-level, object-based scheduling problem in which covert object searches are engaged at regular intervals; if the searched object is present onscreen, an overt fixation occurs. Simulations were run with a uniform distribution across object categories and with a round-robin strategy, and we found neither to be a good predictor of gaze allocation. Weighting search frequency by participants' global fixation distributions provides a better fit, and using fixation distributions conditioned on local scene context provides the best fit. Given that subjects' fixation distributions appear proportional to task priority, we discuss how a task-based reinforcement-learning model might accomplish such gaze allocation.

[1] Hayhoe, M., & Ballard, D. (2005). Eye movements in natural behavior. Trends in Cognitive Sciences, 9(4), 188–193.
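The scheduling strategies compared in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the object categories, function names, uniform time steps, and the "onscreen" check are all illustrative assumptions. It contrasts a round-robin scheduler, which cycles through object categories at regular intervals, with a weighted scheduler, which samples the next covert search in proportion to a priority distribution (e.g., empirical fixation frequencies); an overt fixation is recorded only when the searched category is currently visible.

```python
import random

# Hypothetical object categories for the driving task (illustrative only).
CATEGORIES = ["lead_car", "other_car", "pedestrian", "lane_marker"]


def round_robin_scheduler(categories):
    """Cycle through object categories in a fixed order at regular intervals."""
    i = 0
    while True:
        yield categories[i % len(categories)]
        i += 1


def weighted_scheduler(categories, weights, rng=random):
    """Sample the next covert search in proportion to task weights,
    e.g. participants' empirical fixation distributions."""
    while True:
        yield rng.choices(categories, weights=weights, k=1)[0]


def simulate(scheduler, onscreen, n_steps):
    """Run covert searches for n_steps; record an overt fixation only
    when the searched object category is currently visible onscreen."""
    fixations = []
    for _ in range(n_steps):
        target = next(scheduler)
        if target in onscreen:
            fixations.append(target)
    return fixations
```

Comparing the fixation histograms produced by each scheduler against recorded gaze data is one simple way to ask which strategy better predicts human gaze allocation, in the spirit of the comparison the abstract describes.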

NIH R01 grants EY05729 and EY019174.
