Vision Sciences Society Annual Meeting Abstract  |   July 2013
Everything is relative: Contingent capture depends on feature relationships.
Author Affiliations
  • Stefanie I. Becker
    School of Psychology, The University of Queensland, Brisbane, Australia.
  • Charles L. Folk
    Department of Psychology, Villanova University, Villanova, USA.
  • Roger W. Remington
    School of Psychology, The University of Queensland, Brisbane, Australia.
Journal of Vision July 2013, Vol. 13, 772. https://doi.org/10.1167/13.9.772
Stefanie I. Becker, Charles L. Folk, Roger W. Remington; Everything is relative: Contingent capture depends on feature relationships. Journal of Vision 2013;13(9):772. https://doi.org/10.1167/13.9.772.
      © ARVO (1962-2015); The Authors (2016-present)

Abstract

How do we select relevant information from cluttered visual environments? The prevalent view is that the intention to search for a particular feature enhances the attentional gain for the target feature, or for an exaggerated target feature value shifted away from the distribution of nontarget feature values (optimal tuning; e.g., Navalpakkam & Itti, 2007). By contrast, according to a new relational account, attention is not tuned to specific feature values, but only to the properties that an item has relative to the features of the surrounding context (Becker, 2010). In the present study, we used a variant of the spatial cueing paradigm to test the relational account against current feature-based theories. Observers had to search for a target with a particular color (e.g., orange) among three nontargets of a different color (e.g., yellow-orange). To test whether attention would be tuned to the target color (e.g., orange), the exaggerated target color (e.g., red), or the target-nontarget relationship (e.g., redder), we presented an irrelevant distractor with a unique color (singleton cue) prior to the target display, embedded in a context of three other cues (cue context). The results showed that capture by the singleton cue depended only on whether the cue's color relative to the cue context matched or mismatched the target-nontarget relationship, and was entirely independent of whether the cue had the same color as the target or a different one. Specifically, singleton cues with the target color failed to capture attention when the cue's relationship to the cue context mismatched the target-nontarget relationship, whereas singleton cues whose relationship to the cue context matched the target-nontarget relationship captured attention even when they had the nontarget color. These results invalidate current feature-based theories of attention and provide strong support for the relational account, according to which attention is usually biased towards the relative properties of the target.

Meeting abstract presented at VSS 2013
