Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2024
Distractor location frequencies better account for the instantiation of learned distractor suppression than do reinforcement learning prediction errors
Author Affiliations
  • Anthony W. Sali
    Wake Forest University
  • Catherine W. Seitz
    Wake Forest University
Journal of Vision September 2024, Vol.24, 1080. doi:https://doi.org/10.1167/jov.24.10.1080
Abstract

Stimulus-driven attentional capture is reduced when a salient distractor regularly appears at a predictable spatial location (e.g., Wang & Theeuwes, 2018). This phenomenon is consistent with a growing body of work suggesting that selection history plays a powerful role in shaping the instantiation of attentional priority. However, the mechanisms underlying distractor suppression learning remain poorly understood. In the current study, we fitted behavioral response time (RT) data from a variant of the additional singleton paradigm with a series of computational models to test how individuals harness previous experiences to guide attentional deployment. As in previous studies, we observed robust evidence of learned distractor suppression: RTs were shorter when the distractor appeared at a high-probability location than when it appeared at a low-probability location. RTs were also longer when the target appeared at the high-probability location than at the low-probability location. Next, we adjudicated between two accounts of distractor suppression: (a) the tracking of distractor location frequencies and (b) a reinforcement learning (RL) prediction-error mechanism. Under the location-frequency account, individuals decrease the priority afforded to a particular location with each successive presentation of a distractor at that location. Under the RL account, although participants received no explicit rewards, accurate performance is intrinsically rewarding; individuals therefore attempt to maximize performance by increasing or decreasing the priority of a particular location according to the size and direction of the trial-by-trial difference between expected and observed distractor location likelihoods. We used hierarchical Bayesian inference to simultaneously fit and compare the models, finding that the distractor location frequency model best accounted for the data. Together, these results suggest that a simple frequency-tracking model outperforms models that nudge predictions up and down based on trial-by-trial outcomes.
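The contrast between the two candidate learning rules can be made concrete with a minimal sketch. The code below is illustrative only: the function names, the four-location layout, and the step-size and learning-rate values are assumptions for exposition, not the authors' actual model implementation or the hierarchical Bayesian fitting procedure.

```python
import numpy as np

def frequency_update(weights, distractor_loc, step=0.1):
    """Location-frequency account (illustrative): each distractor
    occurrence at a location lowers that location's priority by a
    fixed step, regardless of what was expected."""
    weights = weights.copy()
    weights[distractor_loc] -= step
    return weights

def rl_update(expected, distractor_loc, alpha=0.2):
    """RL prediction-error account (illustrative): a delta rule nudges
    the expected distractor-location probabilities toward the observed
    outcome; the size and sign of the prediction error
    (outcome - expected) drive the trial-by-trial priority change."""
    outcome = np.zeros_like(expected)
    outcome[distractor_loc] = 1.0       # distractor appeared here
    delta = outcome - expected          # per-location prediction error
    return expected + alpha * delta

# One trial with the distractor at location 0 of a 4-location display.
weights = frequency_update(np.full(4, 1.0), distractor_loc=0)
expected = rl_update(np.full(4, 0.25), distractor_loc=0)
```

The key behavioral difference is that the frequency rule changes only the distractor's current location by a constant amount, whereas the delta rule redistributes expectation across all locations in proportion to how surprising the outcome was.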
