Abstract
The ability to detect a change to a remembered array of visual objects has become the predominant experimental measure in the study of working memory, whether investigating its basis in neurophysiology, its development and decline over the lifespan, or its impairment due to brain damage or disease. The dominance of this methodology relies on a simple interpretation of the frequency of errors as reflecting a limit on the number of items (K) that can simultaneously be maintained in visual memory. Here we show that performance on the change detection task does not measure a fixed maximum capacity of working memory, but instead reflects methodological details of the experimental design. Parametrically manipulating the distance in feature space between changed and unchanged items causes the estimate of capacity to vary from K < 1 to K > 5 items. The results of previous influential studies that have estimated capacity at about 3 items can be directly predicted from the stimulus distances employed in those tasks. While inconsistent with a fixed item limit, our results are accurately described by a Bayesian implementation of a shared-resource model of working memory, in which all items are stored but with a variability that increases with total memory load. This model provides a superior fit to a range of previous results, including the variability in change detection between individuals, changes in performance during development, and classic results from “whole report” of visual or auditory arrays.