Feature search and conjunction search are two classic visual search paradigms. In feature search, the target differs from the distractors by a single feature (e.g., a red line among green lines). In conjunction search, the target is distinguished from the distractors only by a combination of features (e.g., a red horizontal line among red vertical lines and green horizontal and vertical lines). Higher-order processing is needed to bind the features of a conjunction together. In older serial models of visual search, feature search was considered preattentive (Treisman,
1985; Treisman & Gelade,
1980). It was assumed that for certain features, such as orientation or color, processing of the visual scene occurred in parallel using low-level feature maps encoded by subpopulations of highly tuned neurons (Treisman,
1985; Treisman,
1986; Treisman & Gelade,
1980). Thus, RTs in feature searches were thought to be unaffected by the number of distractors. Conjunction search (or search for features not represented in feature maps) was said to proceed serially, with attention directed to one item at a time, so RTs increased linearly with the number of items. Some subsequent research, particularly studies measuring accuracy in timed searches, suggested that all visual search might be performed in parallel, with RTs increasing because of a decreasing signal-to-noise ratio or the limited capacity of the system; this work yielded parallel models of visual search (Cameron et al.,
2004; Eckstein et al.,
2000; Verghese,
2001). The serial/parallel debate is ongoing, and the exact mechanisms of visual search are still being elucidated (Moran, Zehetleitner, Liesefeld, Müller, & Usher,
2016; Thornton & Gilden,
2007). For the purposes of the current discussion, we adopted the Guided Search 4.0 (GS4) model proposed by Wolfe (
2007). This is the latest version of a model that has been in development for the past 20 years. It combines parallel and serial processes and can predict many aspects of visual search (Moran et al.,
2016; Wolfe,
2007). In this model, shown in a simplified schematic in
Figure 1, parallel processing in early vision provides input to the object recognition process via a selective bottleneck governed by the allocation of visual attention. Preattentive processing in this model refers to any visual processing that occurs before the deployment of attention. The “guidance” system uses preattentive processing to signal potential areas of interest and, in effect, to rank items by attentional priority. In a simple feature search, which has a strong bottom-up component, this guidance is highly effective and requires almost no serial processing, so adding distractors does not increase RTs significantly. In a difficult conjunction search, guidance is less effective and a more serial deployment of attention is required, so each added distractor increases RTs. Once items are selected for processing, information accumulates on all selected items in parallel; an item is identified as a target or a distractor when its accumulated information reaches a threshold.
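The interplay between guidance strength and set size can be made concrete with a toy simulation. The sketch below is not GS4 itself; it is a minimal illustration, in Python, of the idea that noisy preattentive priorities determine the order of serial attentional deployments. All names and parameter values (`guidance_strength`, `inspection_cost`, `base_time`) are illustrative assumptions, not quantities from the model.

```python
import random

def simulate_search(n_items, guidance_strength,
                    inspection_cost=50.0, base_time=400.0, seed=0):
    """Toy guided-search trial (illustrative; not the actual GS4 model).

    Preattentive guidance assigns each item a noisy priority; the target's
    priority (item 0) is boosted by `guidance_strength`. Attention then
    inspects items in descending priority order until the target is found,
    and RT grows with the number of serial inspections.
    """
    rng = random.Random(seed)
    priorities = [guidance_strength + rng.gauss(0, 1)] + \
                 [rng.gauss(0, 1) for _ in range(n_items - 1)]
    order = sorted(range(n_items), key=lambda i: priorities[i], reverse=True)
    inspections = order.index(0) + 1  # deployments of attention until target
    return base_time + inspection_cost * inspections

def mean_rt(n_items, guidance_strength, trials=2000):
    """Average simulated RT (ms) over many trials for a given set size."""
    return sum(simulate_search(n_items, guidance_strength, seed=t)
               for t in range(trials)) / trials

# With strong guidance (feature search), the target is almost always ranked
# first, so RT is nearly flat across set sizes. With no guidance (hard
# conjunction search), the target's rank is random, so RT grows with set size.
for n in (4, 8, 16):
    print(n, round(mean_rt(n, guidance_strength=5.0)),
             round(mean_rt(n, guidance_strength=0.0)))
```

Under these assumptions, the strong-guidance condition produces a near-zero RT slope per added item, while the no-guidance condition produces a roughly linear increase, mirroring the qualitative feature/conjunction contrast described above.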