Abstract
Theories of attention posit an attentional template that holds target information in working memory and is used to prioritize sensory processing. In natural environments, however, target features are not the only source of information useful for search. Here, we investigated the effect of a learned association on attentional template representations and guided search. Participants first learned the relationship between four specific faces and four scene categories through an associative learning task in which they were explicitly told about each pairing (e.g., Jenny is a ranger and thus she appears in the forest) and asked to indicate whether a given pairing was “correct” until criterion (90% accuracy) was reached. After training was complete, participants (N = 660) engaged in an online cued-face search task. On each trial, a single face cue was followed by a search display containing two faces, each superimposed on a scene image. The target face could appear on its associated scene (valid trial) or an unassociated scene (invalid trial). Although the scene information was task-irrelevant, in Experiment 1 participants were faster and more accurate at locating the cued face on valid trials, when it appeared with its associated scene. These results provide evidence that learned associations modulate attention allocation and facilitate search for the target. In Experiment 2, we used fMRI to examine the proactive neural representation of the attentional template in advance of visual search. We found evidence of sensory representations of both the target face and the target-associated scene after the cue but before the onset of the search display. Taken together, our findings suggest that the attentional template can include information associated with the target, presumably because such information helps locate and identify the target more rapidly.