Abstract
Representations of current target objects (attentional templates) guide attentional selection in visual search. To find out how such templates are acquired during object learning, we employed a cueing procedure. A word cue (e.g., "trousers") informed participants about the target object for a series of visual search displays. Each cue was followed by four search displays containing line drawings of real-life objects. The target object appeared together with three different distractor objects, and participants responded to the location of the target (upper or lower visual field). To track attentional object selection in real time, we measured the N2pc component of the event-related brain potential. Reaction times (RTs) were much slower to targets that immediately followed the word cue relative to the next three targets. N2pc components to the first target were also attenuated and delayed relative to the N2pc to the three subsequent targets. In contrast, there were no RT or N2pc differences between the second, third, and fourth target objects in a run. These findings show that symbolic word cues are generally insufficient to guide attentional object selection efficiently. To fully establish attentional object templates, the visual features of target objects need to be encountered at least once. However, additional analyses based on RT median splits revealed that for a subset of the target objects used, an early and large N2pc was already triggered on their first presentation. For these "highly imageable" objects, efficient attentional templates can be activated by abstract cues alone.
Meeting abstract presented at VSS 2014