Brian Sullivan, Constantin Rothkopf, Mary Hayhoe, Dana Ballard; Modeling gaze priorities in driving. Journal of Vision 2010;10(7):139. doi: https://doi.org/10.1167/10.7.139.
Gaze behavior in complex tasks (e.g., navigation) has been studied [1,2], but it remains unclear in detail how humans allocate gaze within complex scenes, especially in temporally demanding contexts. We have previously studied gaze allocation in a virtual walking environment and modeled human behavior using a reinforcement-learning model. We adapted this approach to the study of visuomotor behavior in virtual driving, which allows controlled visual stimuli (e.g., the paths of other cars) and monitoring of human motor control (e.g., steering). The model chooses among a set of visuomotor behaviors for following and avoiding other cars and successfully directs the car through an urban environment. The model's performance was compared to that of human subjects. Eye movements were tracked while subjects drove in a virtual environment presented in a head-mounted display. Subjects were seated in a car interior mounted on a six-degree-of-freedom hydraulic platform that simulates vehicle movements. They were instructed to follow a 'leader' car and to avoid other traffic present. Our analysis assumes that subjects' gaze strategies are a direct measure of task priorities. These priorities can be derived from subject behavior using inverse reinforcement learning to extract individual reward values. The majority of fixations were devoted to keeping gaze on the leader car, a smaller proportion to objects of avoidance, and relatively few to other objects unless no traffic was present. We discuss the detailed measures underlying gaze and motor behavior in these experiments and their relationship to the reinforcement-learning model. Additionally, we discuss future refinements of the model and inverse-reinforcement-learning techniques that allow for better fitting of human data.

1. Jovancevic J, Sullivan B, Hayhoe M. J Vis. 2006 Dec 15.
2. Rothkopf CA. Modular models of task-based visually guided behavior. Ph.D. thesis.
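The abstract does not give the model's equations, but the idea of a scheduler that allocates gaze among competing visuomotor behaviors according to learned reward values can be sketched as follows. This is an illustrative toy only: the module names and reward weights are hypothetical stand-ins for the full reinforcement-learning model, chosen here to mirror the reported gaze pattern (most fixations on the leader car, fewer on avoidance targets).

```python
import random

def gaze_schedule(reward_weights, steps, seed=0):
    """Simulate a gaze scheduler: at each time step, one visuomotor
    module (e.g. 'follow leader', 'avoid traffic') receives gaze with
    probability proportional to its reward weight. Returns the fraction
    of steps each module was fixated."""
    rng = random.Random(seed)
    modules = list(reward_weights)
    total = sum(reward_weights.values())
    counts = {m: 0 for m in modules}
    for _ in range(steps):
        r = rng.random() * total
        acc = 0.0
        for m in modules:
            acc += reward_weights[m]
            if r <= acc:
                counts[m] += 1
                break
    return {m: counts[m] / steps for m in modules}

# Hypothetical reward values (not fitted to the experiment's data):
weights = {"follow_leader": 0.6, "avoid_traffic": 0.3, "other": 0.1}
proportions = gaze_schedule(weights, steps=10000)
```

Inverse reinforcement learning runs this logic in reverse: given the observed fixation proportions, it recovers reward weights under which the scheduler would reproduce them, which is the sense in which gaze priorities serve as a measure of task rewards here.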