Abstract
To ambulate in an urban environment, we must manage many simultaneous tasks, such as walking on a sidewalk, avoiding pedestrians, and crossing busy streets. The management of simultaneous tasks has been studied extensively, but those studies tend to focus on attentional loading in reaction-time experiments with stimulus presentations shorter than one second. As a result, little is known about how attention is managed in extended natural tasks that take on the order of many minutes. The current study addresses this extended allocation of visual attention. We study gaze control and locomotion control in a virtual humanoid agent walking through a realistic simulated urban world. The agent's behaviors are directed by visuo-motor routines, each of which handles a single well-defined task, such as staying on the sidewalk or avoiding an obstacle. The key element of our model is that behaviors ask for, and are granted, access to the body on a probabilistic basis. The probabilities are continually adjusted to reflect the contingencies produced by the environment. The gaze access period is tuned to the modal duration of human fixations in similar tasks, and the locomotion access period is tuned to fit human performance data. Such a model can explain how different behaviors compete seamlessly for both the gaze and the locomotion resources of the body. In our simulation, the humanoid walks along a crowded sidewalk and crosses streets while the instantaneous allocation of resources to gaze and ambulation is recorded. To test the model, we had human subjects perform the same task. Subjects wore head-mounted displays that allowed them to navigate the same virtual environment. Six-DOF head tracking and eye tracking in the HMD allow us to compare the subjects' use of gaze and locomotion with that of our model. Our initial results suggest that the probabilistic allocation of resources is a good fit to the human data.
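The probabilistic arbitration described above can be sketched in code. The following is a minimal illustration, not the paper's actual implementation: the class names, the priority values, and the idea of drawing a winner with probability proportional to priority are assumptions introduced here to make the mechanism concrete.

```python
import random

class Behavior:
    """One visuo-motor routine, e.g. stay-on-sidewalk or avoid-obstacle."""
    def __init__(self, name, priority):
        self.name = name
        # Priority stands in for the continually adjusted probability weight
        # that reflects environmental contingencies (illustrative assumption).
        self.priority = priority

class ResourceArbiter:
    """Grants one body resource (gaze or locomotion) to a single behavior
    at a time, chosen with probability proportional to its priority."""
    def __init__(self, behaviors, access_period):
        self.behaviors = behaviors
        # For gaze, the access period would be set to the modal duration
        # of human fixations; for locomotion, fit to performance data.
        self.access_period = access_period

    def grant(self, rng=random):
        # Weighted lottery: each behavior wins in proportion to its priority.
        total = sum(b.priority for b in self.behaviors)
        pick = rng.uniform(0.0, total)
        cum = 0.0
        for b in self.behaviors:
            cum += b.priority
            if pick <= cum:
                return b, self.access_period
        return self.behaviors[-1], self.access_period
```

Run over many arbitration cycles, a higher-priority behavior (say, obstacle avoidance when a pedestrian is near) receives the resource more often, while lower-priority behaviors are never starved entirely, which is the seamless competition the model relies on.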