Invariant visual context provides an important spatial cue for the guidance of visual search and focal-attentional selection. Repeated exposure to the same arrangements of search displays facilitates reaction time (RT) performance, an effect that has been referred to as contextual cueing (Chun,
2000; Chun & Jiang,
1998; Chun & Nakayama,
2000). In their seminal paper, Chun and Jiang (
1998) had observers search for a target letter "T" embedded in a set of distractor letters "L". Unbeknownst to participants, half of the presented displays contained identical configurations of target and distractor items (i.e., old displays), whereas the other half contained novel configurations (i.e., new displays). The main result was faster RTs to old relative to new displays (i.e., contextual cueing), an effect that developed after a short period of training. Interestingly, when observers were queried about the repeated displays in an "old–new" recognition test at the end of the search task, their performance was only at chance level. From these findings, Chun and Jiang (
1998) concluded that (a) contextual cueing guides focal attention more rapidly to the target location (but see Kunar, Flusberg, Horowitz, & Wolfe [
2007], for evidence that contextual cueing might also aid postperceptual processes) and (b) the cueing effect derives from an implicit memory for the items' spatial arrangement. Since then, the cueing effect has been elaborated in a number of further studies (Chun,
2000; Chun & Jiang,
1998; Chun & Nakayama,
2000; Conci, Sun, & Müller,
2011; Conci & von Mühlenen,
2009,
2011; Geyer, Shi, & Müller,
2010; Jiang & Wagner,
2004; Kunar, Flusberg, & Wolfe,
2006). Jiang and Wagner (
2004; see also Brady & Chun,
2007, or Olson & Chun,
2002) showed that contextual cueing is supported by two distinct spatial memory systems: one for individual item locations (i.e., local learning) and one for the entire configuration formed by the distractors (i.e., global learning). Further, Kunar et al. (
2006) showed that nonspatial attributes, such as background color, can also facilitate RT performance. Contextual learning is further influenced by selective attention: only the arrangements of some items, in particular those sharing the target color, are learned over the course of an experiment (e.g., Geyer et al.,
2010; Jiang & Leung,
2005).