Abstract
How do we select relevant information from cluttered visual environments? The prevalent view is that the intention to search for a particular feature enhances attentional gain for the target feature, or for an exaggerated target feature shifted away from the nontarget feature distribution (optimal tuning; e.g., Navalpakkam & Itti, 2007). By contrast, Becker (2010) has shown that attention is not tuned to specific feature values, but only to the properties that an item has relative to the features of the surrounding context. In the present study, we tested whether Becker's relational account can be extended to conjunction search tasks, in which the target differs from irrelevant nontargets only in a combination of two features. Observers had to search for a target with a particular combination of color and size (e.g., large aqua) among four nontargets that each shared the target's color or size (e.g., large green, medium aqua). To test whether attention would be tuned to the exact target features, to exaggerated target features, or to the target-nontarget relationships (e.g., bluer, larger), we presented an irrelevant distractor (conjunction cue) prior to the target display, embedded in a context of three other cues that shared the cue's features (cue context). Behavioral and EEG measurements demonstrated that capture by the conjunction cue depended only on whether the cue's attributes relative to the cue context matched or mismatched the target's relative attributes, and was entirely independent of whether the cue had the same or a different color or size than the target. Conjunction cues whose cue-context relationships matched the target-nontarget relationships captured attention even when the conjunction cue was identical to the nontargets. These results invalidate current feature-based theories of attention and provide strong support for the relational account, according to which attention is biased toward the relative attributes of the target.
Meeting abstract presented at VSS 2016