July 2013
Volume 13, Issue 9
Vision Sciences Society Annual Meeting Abstract
Attentional Guidance in Visual Search: Examining the Interaction Between Goal Driven and Stimulus Driven Information in Natural Images
Author Affiliations
  • Natalie Paquette
    Department of Psychology, University of Central Florida
  • Mark Neider
    Department of Psychology, University of Central Florida
Journal of Vision July 2013, Vol.13, 162. doi:https://doi.org/10.1167/13.9.162
In the real world, environmental cues help guide attention toward target-consistent regions. For example, Neider and Zelinsky (2006) found that observers restrict their gaze to ground areas when looking for a car in computer-generated scenes. We continued exploring the role of semantic guidance, and how such top-down information interacts with bottom-up stimulus information, by examining search for spatially constrained targets in cluttered natural images. Specifically, we recorded eye movements while participants searched images of kitchens for a low-contrast target ‘T’ amongst similar ‘L’ distractors. Importantly, the target was always located in a region of the scene where a coffee cup would likely be found, and half of the participants were given this information prior to beginning the experiment (context/no-context groups). Target presence was also manipulated. In Experiment 1, the context group located the target more quickly (a ~547 ms benefit of contextual information) than the no-context group on target-present trials, despite similar success rates (77% and 79% in the context and no-context conditions, respectively). In Experiment 2, we evaluated whether the contextual guidance effect observed in Experiment 1 could be disrupted by making a single distractor ‘L’ more salient (a color singleton). The data were similar to those of Experiment 1. When there was no salient distractor, participants located the target faster (~741 ms) when given contextual information. Furthermore, the contextual benefit persisted (~470 ms) even in the presence of the salient distractor. Taken together, our findings support the assertion that contextual information relating objects to likely spatial locations is not only important in visual search through real-world scenes but also provides a measure of insulation against distraction arising from task-irrelevant differences in low-level stimulus properties.

Meeting abstract presented at VSS 2013

