August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Incidental capture from working memory depends on remembered feature dimension
Author Affiliations & Notes
  • Daniel Thayer
    University of California, Santa Barbara
  • Thomas Sprague
    University of California, Santa Barbara
  • Footnotes
    Acknowledgements  Research was sponsored by a Sloan Research Fellowship and a UCSB Academic Senate Research Grant
Journal of Vision August 2023, Vol.23, 5921. doi:https://doi.org/10.1167/jov.23.9.5921

      Daniel Thayer, Thomas Sprague; Incidental capture from working memory depends on remembered feature dimension. Journal of Vision 2023;23(9):5921. https://doi.org/10.1167/jov.23.9.5921.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Working memory (WM) and attention are tightly integrated processes. For instance, goal-relevant information is maintained in WM to guide attention during visual search. Several studies have shown that the influence of WM is strong enough to direct attention towards items that are irrelevant to current search goals. These results have typically been observed when remembering color or shape stimuli but, in principle, should similarly occur with other guiding features such as motion. However, recent work suggests that neural representations of remembered motion directions are transformed into spatial coordinates. If this is the case, then remembered stimuli that can easily be spatially recoded (e.g., a remembered motion direction) should interfere with visual search no more when a feature-matching distractor is present than when a merely salient distractor is. We tested this by implementing a dual-task paradigm: on each trial, participants first memorized either a color or a motion direction. They were then shown a search array composed entirely of grayscale random-dot motion stimuli (circles/squares), in which they reported the orientation of a line inside a target shape (e.g., a square). On some trials, the feature value of one item in the array matched the remembered motion/color, or was a motion/color singleton (single-feature 'pop-out'). Finally, participants reported the memorized feature. We replicated the traditional findings for color: distractors matching the remembered color captured attention more than feature singletons, as shown by slower orientation-discrimination RTs. In contrast, when participants were required to remember a motion direction, capture was equal across distractor conditions: salient distractors captured attention, but no additional capture was observed when the remembered motion direction matched the distracting motion direction. Thus, some features in WM are more likely to impact attentional processing than others.
This finding is consistent with the possibility that remembered motion directions are transformed into a spatial code.
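The dual-task design described above can be summarized as a small condition grid. The sketch below is purely illustrative: the condition labels, the inclusion of a no-distractor baseline, and all function names are assumptions for exposition, not details taken from the study.

```python
from itertools import product

# Hypothetical labels for the two crossed factors described in the abstract:
# which feature dimension is held in WM, and which distractor (if any)
# appears in the search array. A "none" baseline is assumed here, not stated.
MEMORY_DIMENSIONS = ["color", "motion"]
DISTRACTOR_CONDITIONS = ["memory-matching", "singleton", "none"]

def build_design_cells():
    """Enumerate the fully crossed design cells of the dual-task paradigm."""
    return [
        {"memory": mem, "distractor": dist}
        for mem, dist in product(MEMORY_DIMENSIONS, DISTRACTOR_CONDITIONS)
    ]

cells = build_design_cells()
print(len(cells))  # 2 dimensions x 3 distractor conditions = 6 cells
```

Under this framing, the color result is a reliable RT difference between the "memory-matching" and "singleton" cells within the color dimension, while for motion those two cells yield equivalent capture.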
