Abstract
Vision in the real world requires the nearly continuous selection of fixation targets, and routines to program and execute accurate eye movements to those targets. Many models have been proposed for the target-selection mechanism, from bottom-up models based solely on local image statistics to high-level, object-based models that presume pre-defined objects have already been parsed from the surrounding scene. The oculomotor system evolved in a rich, dynamic environment, yet experiments designed to evaluate proposed models are typically performed by head-fixed subjects viewing small, static displays. Two experiments were performed to study saccadic targeting under more natural conditions.
In Experiment 1, subjects' eye movements were monitored with a head-mounted eyetracker as they free-viewed color photographs on a large display subtending ∼40 × 65 deg. While the conditions of Experiment 1 are arguably more natural than those of many experiments, Experiment 2 was designed to go beyond stationary observers viewing limited static displays. Subjects were fitted with a wearable eyetracker that allowed unrestricted eye, head, and whole-body movements. Eye movements were recorded as subjects walked down a hallway toward a room where they were to perform a task. Both experiments represent ‘free viewing’ in that there was no instructed task related directly to eye movements. Video records were analyzed to identify saccadic targets and to measure the amplitude of saccadic eye movements and the duration of the intervening fixations.
Fixations covered a greater extent of the visual field, and the amplitude of individual saccades increased with field size; median saccade amplitude exceeded 10 deg in the walking task. Many fixation durations were shorter than anticipated in both tasks; fixations as short as 100 msec were common, and subjects' mean fixation duration and mean saccade amplitude were inversely correlated.