Abstract
How do we suppress distracting objects? When we search through our environment, for example looking for a friend at a crowded intersection, our attention is commonly captured by events or objects that are irrelevant to the task (such as flashing traffic lights or brightly colored billboards). Until recently, the mechanisms behind distractor suppression were little investigated. A recent study (Gaspelin & Luck, 2017) directly compared three candidate mechanisms: global salience suppression (only the salience signal is down-modulated), first-order feature suppression (the distractor feature value is suppressed directly), and second-order feature suppression (feature discontinuities, i.e., distractor feature dimensions, are suppressed). Their evidence favors first-order feature suppression models, but their study was limited to the color dimension. We investigated learned distractor suppression using the probability cueing paradigm: in repetitive visual search, observers learn to reduce distractor interference when distractors predictably appear in certain regions (Goschy et al., 2014), an effect termed the location probability effect (Geng & Behrmann, 2002). This allowed us to directly compare learned suppression in the frequent distractor region with near-maximal interference in the rare distractor region. To address issues raised by the reproducibility crisis, our initial study comprised 184 participants. The results showed (1) a consistent target-location effect (faster RTs for targets in the frequent than in the rare distractor region) for same-dimension but not different-dimension distractors, ruling out first-order feature suppression as an explanation. In further studies, we showed (2) that the differential mechanisms of this learned suppression persist even over 24 hours, (3) event-related-potential evidence for this suppression, and (4), most importantly, that the results generalize to other dimensions.
Overall, our investigations speak largely in favor of inclusive second-order feature suppression models (such as the dimension-weighting account), while not denying an element of first-order feature suppression.
Meeting abstract presented at VSS 2018