Abstract
We present a novel experimental paradigm examining the effects of set size on the encoding of spatial location in visual short-term memory (VSTM). The structure of VSTM has recently been the subject of intense debate. One group of researchers (e.g., Zhang and Luck, 2008) has argued that changes in performance as a function of set size reflect limits on the number of items that can be encoded in VSTM. In contrast, we, along with others (Wilken and Ma, 2004; Bays and Husain, 2008), argue that VSTM performance is limited by internal noise, which itself grows with set size. In our experiment, observers viewed randomly positioned Gaussian “blobs”, presented for 100 ms. After a 1000 ms ISI, a second display was shown that was identical to the first except that one blob was missing. The observer's task was to report the memorized location of this “missing” (target) blob. Logically, reports could fall into two categories: (1) those based on a noisy internal representation of the true location of the target; and (2) those in which no information about the target location was used. Accordingly, we fitted a mixture model consisting of a target-centered bivariate Gaussian and a second, approximately uniform distribution centered on the fixation point. The proportion of responses assigned to the first distribution decreased as a function of set size, though significantly more slowly than would be predicted by a “4-item” slot model (e.g., Cowan, 2001). Importantly, we found that the precision of spatial localization decreased monotonically as a function of set size, independent of eccentricity. These results are consistent with a model of spatial VSTM in which performance is limited by a continuous resource that is distributed across all items.
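
A minimal sketch of the mixture model described above (the notation and exact parameterization are our own and may differ from those fitted in the full paper): the probability of a reported location given the true target location could be written as

\[
p(\mathbf{r} \mid \mathbf{t}) \;=\; \alpha \,\mathcal{N}(\mathbf{r};\, \mathbf{t},\, \Sigma) \;+\; (1 - \alpha)\, g(\mathbf{r}),
\]

where \(\mathbf{r}\) is the reported location, \(\mathbf{t}\) the true target location, \(\alpha\) the proportion of target-based responses, \(\Sigma\) the covariance of the target-centered bivariate Gaussian (whose spread indexes localization precision), and \(g(\mathbf{r})\) an approximately uniform, fixation-centered guessing distribution.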