Vision Sciences Society Annual Meeting Abstract | July 2013
Capture by object exemplars during category-based search of real-world scenes
Author Affiliations
  • Katharina N. Seidl
    Department of Psychology, Princeton University; Princeton Neuroscience Institute, Princeton University
  • Nicholas B. Turk-Browne
    Department of Psychology, Princeton University; Princeton Neuroscience Institute, Princeton University
  • Sabine Kastner
    Department of Psychology, Princeton University; Princeton Neuroscience Institute, Princeton University
Journal of Vision July 2013, Vol.13, 1314. doi:https://doi.org/10.1167/13.9.1314
Abstract

Unexpected objects capture attention to the extent that they match a currently active attentional set. This contingent attentional capture has been demonstrated for relatively simple features and for conceptual information when the distractor is disqualified from being a target only because of its spatial location. Here we ask whether exemplars from an object category capture attention during preparation for real-world visual search. This form of category-based search has been shown to depend upon establishing an abstract attentional template that enables efficient target processing through the pre-activation of category-specific neural representations. Participants completed a category detection task in which they were asked to detect the presence of objects from a specific category in centrally presented and masked real-world scenes. At the beginning of each trial, a cue informed participants which of two task-relevant categories to attend to. The possible task-relevant categories were people, cars, and trees, counterbalanced across participants. Scenes could contain objects from the cued category, the non-cued task-relevant category, both task-relevant categories, or neither task-relevant category. On 75% of trials, a distractor was presented 150 or 600 ms before the scene. Distractors were exemplars from the cued category (congruent), the non-cued task-relevant category (incongruent), or a task-irrelevant category (neutral), and were presented either centrally or in the periphery. At the short but not the long SOA, the presence of a congruent distractor reduced accuracy on the category detection task more than distractors from any of the other categories, indicating overlap between the attentional sets for within-scene objects and isolated exemplars. The capture effect was observed regardless of distractor location, suggesting that the reduction in search accuracy did not result solely from the deployment of spatial attention to an inappropriate location. Ongoing experiments aim to reveal the mechanisms by which congruent distractors reduce detection accuracy during real-world visual search.

Meeting abstract presented at VSS 2013
