Abstract
Visual search in real-world scenes unfolds over two parallel pathways: one processes scene information while the other processes object information. Scene context can limit the number of items that are searched through, resulting in reduced response times. The benefit of scene context has been studied extensively in inefficient search tasks. Recently, it was demonstrated that this benefit is also observed in efficient search tasks when search is sufficiently slow (e.g., when set size is large or when target-distractor similarity is high). In this study, we examined the mechanism behind the benefit of scene context in efficient search tasks. Eye movements were recorded while participants searched for a green turtle that could appear only in the water, among black turtles that could appear anywhere on the search display. On half the trials, participants were given a 247 ms preview of the scene background before the search items were presented. Response times were faster with a preview, although search slopes did not differ. When the scene context was previewed, a greater proportion of initial saccades were directed to the target-consistent region. Furthermore, this proportion increased as a function of initial saccade latency when there was no preview, but not when there was a preview. Thus, without a preview, faster initial saccades were likely to be object-driven, while slower initial saccades were likely to be context-driven; with a preview, all initial saccades were likely to be context-driven. Lastly, response times to distractors in the target-consistent region in the preview condition were similar to response times to all distractors in the no-preview condition. Taken together, these results suggest that in efficient search tasks, scene context limits processing to target-consistent regions without changing the rate of evidence accumulation.