December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Evidence for object-based encoding into visual working memory
Author Affiliations
  • William Ngiam
    University of Chicago
  • Krystian Loetscher
    University of Chicago
  • Edward Vogel
    University of Chicago
  • Edward Awh
    University of Chicago
Journal of Vision December 2022, Vol. 22, 4297.
      © ARVO (1962-2015); The Authors (2016-present)


Given sharp capacity limits in visual working memory (WM), it is important to understand how limited storage resources are distributed across the items in a relevant scene. On the one hand, information about distinct feature values (e.g., color and shape) could be independently encoded, such that storing the color of an item would not predict whether its shape would also be stored. On the other hand, information could be encoded in an object-based fashion, such that color and shape tend to be encoded from the same objects in the display. Here, we examined this question using a whole-report task that enabled us to measure the recall of every stored feature value on each trial. With single-feature objects (color, orientation, or shape), above-chance recall was limited to about 2-3 of the items in a six-item display. The key question, however, was how this capacity limit would play out with dual-feature (i.e., color/shape or color/orientation) objects. Is the recall of multiple features also constrained to 2-3 items, or will independent encoding enable above-chance recall of information from a larger number of items? In line with past observations, subjects showed an “object-based benefit,” such that a larger number of feature values were stored in the dual-feature than in the single-feature conditions. Nevertheless, even though subjects stored approximately 60% more feature values in the dual-feature condition, the recalled information came from only 2-3 objects in the display, just as in the single-feature condition. Moreover, individual capacity limits were highly correlated across single-feature and dual-feature stimuli, possibly because performance in both conditions was subject to an object-based ceiling on storage in visual working memory.

