Abstract
Human beings in the natural world must simultaneously perform a range of tasks, each of which may rely on different information from the scene. Eye movements are an important way of gathering necessary information, and the timing and choice of gaze targets are critical for performing tasks efficiently. Sprague et al. (2007) proposed a model of gaze deployment based on the rewards and uncertainty associated with the tasks at hand. We have designed a novel virtual reality environment that allows us to systematically vary both the task rewards and the uncertainty of the information for each task. Subjects traverse the room on a path while contacting targets (bubbles) and avoiding obstacles (pigeons). Location information is made more uncertain by having targets and obstacles move and (in future work) by varying the complexity and contrast of the path. The system also allows us to record simultaneously how people orient their bodies, heads, and eyes, all of which play a role in directing gaze. We manipulate the task structure by independently varying the tasks subjects are asked to perform and the uncertainty of the world. Subjects followed the path on all trials; on different trials they were also asked to contact bubbles, avoid pigeons, do both, or do neither. This allows us to examine how task priorities are managed under different conditions. Consistent with other work (e.g., Rothkopf et al., 2007), we see strong variations in fixations as the task structure is altered; for instance, trials on which subjects did not need to avoid pigeons showed a marked decrease in fixations to pigeons, despite their image salience remaining unchanged. We also find preliminary evidence that our uncertainty manipulation affects subjects' allocation of task priorities; this may necessitate revisions to the assumptions of the Sprague model.
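To make the reward-and-uncertainty arbitration idea concrete, the sketch below illustrates one simplified reading of the Sprague-style scheme: each task module tracks a scalar uncertainty that grows while the task is unattended and collapses when fixated, and gaze goes to the module whose stale estimate is costliest. The class and function names (TaskModule, select_gaze) and the linear reward-times-variance loss are illustrative assumptions, not the published implementation.

```python
# Minimal sketch of reward-and-uncertainty gaze arbitration in the spirit of
# Sprague et al. (2007). Assumption: expected loss is approximated as
# reward * variance; the published model uses a fuller belief-state
# formulation.

class TaskModule:
    def __init__(self, name, reward, noise_growth):
        self.name = name
        self.reward = reward              # reward for performing the task well
        self.noise_growth = noise_growth  # per-step growth of state uncertainty
        self.variance = 0.0               # current uncertainty about task state

    def propagate(self):
        """Uncertainty accumulates while the task is not fixated."""
        self.variance += self.noise_growth

    def observe(self):
        """A fixation yields a fresh measurement and collapses uncertainty."""
        self.variance = 0.0

    def expected_loss(self):
        """Expected reward lost by acting on the current, stale estimate."""
        return self.reward * self.variance


def select_gaze(modules):
    """Fixate the task whose stale estimate is costliest to act on."""
    return max(modules, key=lambda m: m.expected_loss())


# Example: the three tasks from the experiment as competing modules
# (reward and noise values are arbitrary placeholders).
modules = [
    TaskModule("follow_path", reward=1.0, noise_growth=0.05),
    TaskModule("contact_bubbles", reward=0.5, noise_growth=0.10),
    TaskModule("avoid_pigeons", reward=2.0, noise_growth=0.08),
]

for step in range(10):
    for m in modules:
        m.propagate()
    target = select_gaze(modules)
    target.observe()
    print(f"step {step}: fixate {target.name}")
```

Under this reading, removing a task from the active module set (e.g., when pigeon avoidance is not required on a trial) removes its claim on gaze entirely, predicting the observed drop in fixations to pigeons irrespective of their image salience.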
Meeting abstract presented at VSS 2013