Abstract
Humans detect objects in complex scenes with remarkable speed and little attentional effort. Do the mechanisms that underlie such rapid processing also guide attention and gaze during prolonged viewing, which operates on a much longer time scale? Low-level features, such as luminance contrast, affect object detection in rapid-serial-visual-presentation (RSVP) paradigms and have some predictive power for gaze allocation, although the latter is explained away by objects. To test whether features affect gaze and detection similarly, we used the same stimuli in two tasks: prolonged viewing and RSVP. Stimuli consisted of natural images in which the luminance contrast of an object and of its background were manipulated independently. In prolonged viewing, eye positions were recorded during 3 seconds of presentation; afterward, observers were queried for keywords describing the scene. In RSVP, we measured observers' performance in detecting a target object within a 1-second stream of 20 images presented at 20 Hz. By comparing the changes in behavior relative to a neutral condition (i.e., the unmanipulated image) in both tasks, we show that gaze control and object detection, although very different tasks, are affected similarly by changes in a low-level feature: luminance contrast. Further experiments reveal that this pattern of results depends on the image manipulation targeting an object in the scene and is independent of the presence of distractor objects. Although gaze is guided by luminance-contrast increases of objects, such increases do not change how characteristic of the scene the objects are perceived to be. These results imply that scene content interacts with low-level features to guide both detection and overt attention (gaze), while certain aspects of higher-level scene perception are unaffected by the same low-level features.
Meeting abstract presented at VSS 2012