Abstract
How do we select relevant information from cluttered visual environments? The prevalent view is that the intention to search for a particular feature enhances the attentional gain for the target feature, or for an exaggerated target feature shifted away from the nontarget feature values (optimal tuning; e.g., Navalpakkam & Itti, 2007). By contrast, according to a new relational account, attention is not tuned to specific feature values, but only to the properties that an item has relative to the features of the surrounding context (Becker, 2010). In the present study, we used a variant of the spatial cueing paradigm to test the relational account against current feature-based theories. Observers had to search for a target with a particular color (e.g., orange) among 3 nontargets of a different color (e.g., yellow-orange). To test whether attention would be tuned to the target color (e.g., orange), the exaggerated target color (e.g., red), or the target-nontarget relationship (e.g., redder), we presented an irrelevant distractor with a unique color (singleton cue) prior to the target display, embedded in a context of three other cues (cue context). The results showed that capture by the singleton cue depended only on whether the cue's color relative to the cue context matched or mismatched the target-nontarget relationship, and was entirely independent of whether the cue itself had the same color as the target or a different one. Specifically, singleton cues with the target color failed to capture attention when the cue-context relationship mismatched the target-nontarget relationship, whereas singleton cues whose cue-context relationship matched the target-nontarget relationship captured attention even when the singleton cue had the nontarget color. These results invalidate current feature-based theories of attention and provide strong support for the relational account, according to which attention is usually biased toward the relative properties of the target.
Meeting abstract presented at VSS 2013