Abstract
When bottom-up and top-down signals are both present in a search task, how do these signals combine to guide search? To address this question we hid a 0.25 degree "x" or "+" among 9 photorealistic objects. The subject's task was to find this x/+ target and indicate its identity (i.e., "x" or "+"). All of the objects were grayscale except for one color distractor. Bottom-up salience was manipulated by varying the distractor's color saturation: 0% (grayscale), 50%, or 100% (vivid color). Top-down salience was manipulated by the availability of a target preview. The x/+ target was spatially associated with a search object; the target preview indicated the object on which the x/+ target could be found. The target preview was never the color distractor. Preview duration was 1000 msec, 100 msec, or 0 msec (i.e., no preview; subjects searched only for the x/+ target). Reaction times (RTs) were significantly faster in the preview conditions than in the no-preview condition, but did not vary with the saturation manipulation. Analysis of eye movements made during search revealed more initial saccades to the previewed object in the preview conditions; the color distractor failed to attract initial saccades when a preview was available. Only in the no-preview condition was there a tendency to fixate the color distractor. We conclude that visual search is primarily a top-down process; when top-down information is available, bottom-up control signals are largely ignored. However, in the absence of top-down information, these bottom-up signals can direct search behavior.
This work was supported by a grant from the National Institute of Mental Health (R01 MH63748).