At any given moment, the amount of visual information available in the environment exceeds the processing capacities of the visual system. Hence, the perceptual system needs to decide what information is to be selected for deeper and more explicit processing and what information is to be ignored. This decision is made on the basis of stimulus properties (stimulus-driven selection) and/or the internal “set” of the observer (top-down controlled selection). Stimuli that differ from their surround in one or more basic visual features (e.g., color contrast: a red item among green items; or orientation contrast: a right-tilted bar among left-tilted bars) attract visual attention in a more or less automatic fashion. Such stimuli can be rapidly discerned, irrespective of the number of items in the field (the display size); phenomenally, they appear to “pop out” of the display. A number of feature dimensions have been shown to support such a spatially parallel (i.e., display size–independent) search, including orientation, size, color, motion, and stereo depth. Although computation of feature contrast within a given dimension proceeds largely automatically (e.g., by suppressive interactions among like-feature units within low-level feature maps; Li,
1999), there is evidence that target selection is based on a higher-level representation: a “featureless” overall-saliency map of the visual field, the units of which integrate (i.e., sum) the local feature contrast signals computed in the different dimensions (e.g., Krummenacher, Müller, & Heller,
2001,
2002; Wolfe,
1994; Zhaoping & May,
2007). Furthermore, there is evidence that not all dimensions contribute equally to the (integrated) overall-saliency signals; rather, feature contrast signals from dimensions that are more “relevant” will have a stronger impact or “weight” in the integration process. For instance, a feature dimension that supports successful target detection in a given trial is implicitly assumed to be more important in the future; accordingly, the weight assigned to feature contrast signals from this dimension is increased while the weights for other dimensions are correspondingly decreased (Found & Müller,
1996; Müller, Heller, & Ziegler,
1995). According to this “dimension-weighting account” (DWA) of visual search for singleton feature targets (e.g., Found & Müller,
1996; Müller, Heller, & Ziegler,
1995), the greater the weight assigned to the target dimension, the greater the rate at which evidence for a target actually defined within this dimension accumulates at the overall-saliency level and, accordingly, the faster the target can be detected. The (cross-dimensional) weight pattern established in a given trial persists into the next trial. This ensures fast and efficient target detection if the target-defining dimension is repeated across consecutive trials. In contrast, if the target-defining dimension changes across trials, target detection is slowed. These reaction time (RT) costs are primarily dimension-specific in nature; that is, changes of the target-defining feature across visual dimensions, as compared to changes within a given dimension, increase target detection times (Found & Müller,
1996; Müller, Heller, & Ziegler,
1995; see also the article by Rangelov, Müller, & Zehetleitner, 2013, in this Special Issue). These automatic weighting processes are also, to some extent, open to top-down modulation. For instance, when the target-defining dimension is cued in advance by a symbolic precue, detection of a singleton feature target is facilitated when it is defined in the cued (vs. an uncued) dimension, and the effect of a dimension change across trials is reduced compared to a neutral cueing condition (e.g., Müller, Reimann, & Krummenacher,
2003).
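The core machinery of the dimension-weighting account described above can be illustrated with a minimal computational sketch. This is not the authors' implementation; the specific weight-update rule, the boost parameter, and the normalization to a constant total weight are illustrative assumptions, chosen only to show how weighted summation of dimension-specific feature-contrast signals produces dimension-repetition benefits and dimension-change costs.

```python
# Illustrative sketch of the dimension-weighting account (DWA).
# All numeric values and the update rule are hypothetical.

def saliency(contrast, weights):
    """Integrate (i.e., sum) dimension-specific feature-contrast
    signals, each scaled by its current dimensional weight."""
    return sum(weights[d] * contrast.get(d, 0.0) for d in weights)

def update_weights(weights, target_dim, boost=0.2):
    """After a trial, shift weight toward the target-defining
    dimension; the total weight is assumed constant (normalized
    to 1.0), so other dimensions lose weight correspondingly."""
    new = {d: w + (boost if d == target_dim else 0.0)
           for d, w in weights.items()}
    total = sum(new.values())
    return {d: w / total for d, w in new.items()}

# Equal starting weights for three example dimensions.
weights = {"color": 1 / 3, "orientation": 1 / 3, "motion": 1 / 3}

# Trial n: a color-defined target is detected -> weight shifts
# toward the color dimension.
weights = update_weights(weights, "color")

# Trial n+1, dimension repeated: larger weighted saliency signal,
# hence faster evidence accumulation and faster detection.
repeat_signal = saliency({"color": 1.0}, weights)

# Trial n+1, dimension changed: smaller signal -> RT cost.
change_signal = saliency({"orientation": 1.0}, weights)

assert repeat_signal > change_signal
```

The single free parameter (`boost`) stands in for however strongly one trial's outcome biases the next; the key qualitative prediction, a repetition benefit and change cost that are dimension-specific rather than feature-specific, follows from weighting at the dimension level.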