August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2023
Inter-item competition during encoding and maintenance
Author Affiliations & Notes
  • Janna Wennberg
    University of California, San Diego
  • John Serences
    University of California, San Diego
  • Footnotes
    Acknowledgements  This work was supported by an NIH/NEI grant awarded to John Serences (R01-EY025872).
Journal of Vision August 2023, Vol. 23, 5789. doi:
      Janna Wennberg, John Serences; Inter-item competition during encoding and maintenance. Journal of Vision 2023;23(9):5789.


      © ARVO (1962-2015); The Authors (2016-present)


When multiple items are held in working memory, they interfere with one another and behavioral performance suffers. Models of working memory (WM) suggest that inter-item interference arises either from feature-specific competition in specialized sensory networks (Sprague, Ester, and Serences 2014; Bays 2014; Schurgin, Wixted, and Brady 2020) or from feature-general competition in more flexible networks (Swan and Wyble 2014; Bouchacourt and Buschman 2019). In a previous behavioral experiment, we found support for a feature-specific contribution to inter-item interference. However, in most prior work, items have been presented at retinotopically separate spatial locations. In a pre-registered experiment (n = 40), we manipulated whether memory items appeared at the same or different retinotopic coordinates to further distinguish feature-specific from feature-general interference. We predicted effects of both set size and heterogeneity (i.e., more interference when the cued items are drawn from the same feature space). Participants completed a working memory task in which two colored, oriented bars were sequentially presented at fixation. After the offset of the second item, a retro-cue indicated what to hold in mind (the color, the orientation, or both features of one or both stimuli). As expected, we found that remembering two features was harder than remembering one. Surprisingly, however, we found no effect of feature similarity on precision: performance was comparable when remembering two colors and when remembering one color and one orientation. The lack of a feature-similarity effect with retro-cues suggests that competition during encoding may explain our previously observed heterogeneity advantage. However, further work should assess the effect of sequential presentation, as our previously observed heterogeneity advantage was obtained with simultaneous displays and retinotopically separate items. Thus, the results are also consistent with accounts positing greater overlap between the neural populations encoding disparate features such as color and orientation (Garg et al. 2019).


This PDF is available to Subscribers Only

Sign in or purchase a subscription to access this content. ×

You must be signed into an individual account to use this feature.