It has been hypothesized that the attentional selection of a target object among distractors is achieved through the matching of incoming visual input to a top-down attentional set that guides visual search to items containing task-relevant features (Duncan & Humphreys, 1989; Wolfe, Cave, & Franzel, 1989). There is currently great interest in how visual features activated in the attentional set, or “search template,” guide search. Studies of eye movements have found that searchers may mistakenly fixate distractors that share visual features with targets, which increases search times (Castelhano & Heaven, 2010; Castelhano, Pollatsek, & Cave, 2008; Pomplun, 2006). The more visual information a cue provides, the better the search performance (Hwang, Higgins, & Pomplun, 2009; Schmidt & Zelinsky, 2009; Wolfe, Horowitz, Kenner, Hyle, & Vasan, 2004). Accordingly, an image cue that matches the target exactly is most effective in guiding search, because it provides the most target-relevant visual information, while an image representing the target category and a feature-and-word cue (e.g., “blue car”) are both more effective than a word cue alone (e.g., “car”; Castelhano & Heaven, 2010; Malcolm & Henderson, 2009, 2010; Schmidt & Zelinsky, 2009; Vickery, King, & Jiang, 2005). At the same time, a search template that approximates rather than perfectly replicates an image tolerates a certain amount of mismatch between the visual features of the cue and the target (Bravo & Farid, 2009; Vickery et al., 2005). Some visual features may be preferentially represented in the search template. A previous study of feature search found that searchers are faster to detect targets defined by color than by orientation (Hannus, van den Berg, Bekkering, Roerdink, & Cornelissen, 2006), suggesting that some features are naturally weighted more heavily than others in the visual attention system, thus biasing the search template.