Abstract
In the real world, environmental cues help guide attention toward target-consistent regions. For example, Neider and Zelinsky (2006) found that observers restrict their gaze to ground areas when looking for a car in computer-generated scenes. We continued exploring the role of semantic guidance, and how such top-down information interacts with bottom-up stimulus information, by examining search for spatially constrained targets in cluttered natural images. Specifically, we recorded eye movements while participants searched images of kitchens for a low-contrast target ‘T’ amongst similar ‘L’ distractors. Importantly, the target was always located in a region of the scene where a coffee cup would likely be found, with half of the participants given this information prior to beginning the experiment (context vs. no-context groups). Target presence was also manipulated. In Experiment 1, the context group located the target more quickly (~547 ms benefit of contextual information) than the no-context group on target-present trials, despite similar success rates (77% and 79% in the context and no-context conditions, respectively). In Experiment 2, we evaluated whether the contextual guidance effect observed in Experiment 1 could be disrupted by making a single distractor ‘L’ more salient (a color singleton). The data were similar to those of Experiment 1. When there was no salient distractor, participants located the target faster (~741 ms) when given contextual information. Furthermore, the contextual benefit persisted (~470 ms) even in the presence of the salient distractor. Taken together, our findings support the assertion that contextual information relating objects to likely spatial locations not only is important in visual search through real-world scenes but also provides a measure of insulation against distraction arising from task-irrelevant differences in low-level stimulus properties.
Meeting abstract presented at VSS 2013