Vision Sciences Society Annual Meeting Abstract | December 2022, Volume 22, Issue 14 | Open Access
Search Strategies Modulate Memory-Driven Capture
Author Affiliations & Notes
  • Bo Yeong Won
    University of California, Riverside
  • Weiwei Zhang
    University of California, Riverside
  • Footnotes
    Acknowledgements: NIMH grant (R01MH117132) to WZ
Journal of Vision December 2022, Vol. 22, 3269. doi: https://doi.org/10.1167/jov.22.14.3269
Citation: Bo Yeong Won, Weiwei Zhang; Search Strategies Modulate Memory-Driven Capture. Journal of Vision 2022;22(14):3269. https://doi.org/10.1167/jov.22.14.3269.

Abstract

Working memory (WM) and attention often interact. For instance, items maintained in WM can capture attention during visual search even when they are irrelevant to the search task. The present study addressed whether this memory-driven capture depends on search strategy. Specifically, we hypothesized that the feature search mode, which relies on top-down processes (e.g., goals), would produce more memory-driven capture than the singleton search mode, which relies on bottom-up processes (e.g., salience), and that the magnitude of memory-driven capture would be predicted by WM precision. Two experiments encouraged either the feature search mode (Exp. 1), in which the search target was defined by a specific shape (e.g., a square) among three distractors of different shapes (e.g., circle, triangle, diamond), or the singleton search mode (Exp. 2), in which the target was the odd shape (e.g., a square) among three identically shaped distractors (e.g., circles). The visual search was inserted into the delay interval of a WM task in which participants maintained a briefly presented color and recalled it after the search trial. Critically, a color singleton appeared on some trials, producing three experimental conditions defined by the relationship between the singleton color and the memory color: in the "Match" condition, the singleton color was the same as the memory color; in the "Nonmatch" condition, it was different from the memory color; and in the "Absent" condition, no color singleton appeared. The Match condition showed more robust attentional capture than the Nonmatch condition, indicating significant memory-driven capture, when participants adopted the feature search mode (Exp. 1) but not the singleton search mode (Exp. 2). Furthermore, under the feature search mode, individuals with higher memory precision, measured in an independent memory recall task, showed more robust memory-driven capture. These results suggest that memory-driven capture relies more on top-down than on bottom-up attentional mechanisms.
