Vision Sciences Society Annual Meeting Abstract  |   September 2018
Mechanisms behind learned distractor suppression in visual search
Author Affiliations
  • Marian Sauter
    Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
  • Heinrich Liesefeld
    Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
  • Hermann Müller
    Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany; Department of Psychological Sciences, Birkbeck College, University of London, London, UK
Journal of Vision September 2018, Vol.18, 631. doi:https://doi.org/10.1167/18.10.631
Abstract

How do we suppress distracting objects? When we search through our environment, for example looking for a friend at a crowded intersection, our attention is commonly captured by events or objects that are irrelevant to the task (such as flashing traffic lights or brightly colored billboards). Until recently, the mechanisms behind distractor suppression were little investigated. A recent study (Gaspelin & Luck, 2017) directly compared three possible mechanisms: global salience suppression (only the salience signal is down-modulated), first-order feature suppression (the distractor feature value is suppressed directly), and second-order feature suppression (feature discontinuities, i.e., distractor feature dimensions, are suppressed). Their evidence favors first-order feature suppression models, but their study was limited to the color dimension. We investigated learned distractor suppression using the probability-cueing paradigm: in repetitive visual searches, observers can learn to reduce distractor interference when distractors predictably appear in certain regions (Goschy et al., 2014), an effect termed the location probability effect (Geng & Behrmann, 2002). This allowed us to directly compare learned suppression in the frequent distractor region with near-maximal interference in the rare distractor region. To address concerns raised by the reproducibility crisis, our pioneering study included 184 participants. The results showed (1) a consistent target-location effect (faster RTs for targets in the frequent distractor region than in the rare distractor region) for same-dimension distractors but not for different-dimension distractors, ruling out first-order feature suppression as an explanation. In further studies, we found (2) that the differential effects of such learned suppression persist even over 24 hours, (3) event-related potential evidence for this suppression, and (4), most importantly, that the results generalize to other dimensions.
Overall, our investigations speak largely in favor of inclusive second-order feature suppression models (such as the dimension-weighting account), while not denying an element of first-order feature suppression.

Meeting abstract presented at VSS 2018
