Abstract
Human actions rely on memory to guide behavior. Memory is inherently noisy, so a single memory representation (e.g., a color hue) corresponds to several nearby values in feature space. To translate memory into concrete behavior, a point estimate often has to be drawn from this set of possible values. Current accounts of visual working memory assume that this point decision is based on a maximum-likelihood estimate derived from a bell-shaped probability distribution of possible output values. Here we tested an alternative model of visual working memory: that memory-informed behavior results from a random choice within an equal-probability set of neighboring exemplars (e.g., a range of similar colors). Fifty-eight participants completed a standard color working memory task (Zhang & Luck, 2008) with variable set size (1, 2, or 4) and one probed item per trial. In half of the trials, participants responded with a standard point estimate on a color wheel. In the other half, participants indicated an interval (i.e., a consideration set) within which they considered the target color to lie. Response conditions were randomly interleaved and unknown to participants during encoding and delay; they were cued only prior to report. We calculated the accuracy (mean absolute deviation) of the point-estimate responses and compared it to the accuracy of simulated point estimates drawn randomly from each interval, thereby simulating the equal-probability model. Under the maximum-likelihood model, we would expect higher accuracy for the actual point estimates. In contrast, Bayesian analyses provided evidence that point estimates did not contain more accurate information about the memorized colors than values drawn randomly from the consideration sets. Our results challenge the view of memory representations as continuously graded, exemplar-based probability distributions and instead suggest conceiving of memory as discrete sets of informationally equivalent response options.
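The central comparison in this design, computing the mean absolute (circular) deviation of point estimates versus that of values drawn uniformly from reported intervals, can be sketched as follows. This is a minimal illustration on simulated, hypothetical data (all parameters, noise levels, and interval widths are assumptions, not the study's values), using NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

def circ_dev_deg(a, b):
    """Smallest absolute angular deviation between two hues on a 360-deg color wheel."""
    d = np.abs(a - b) % 360
    return np.minimum(d, 360 - d)

# Hypothetical trial data: true target hues, point-estimate responses,
# and interval responses given as a lower bound plus a width.
n = 1000
targets = rng.uniform(0, 360, n)
points = (targets + rng.normal(0, 15, n)) % 360       # point-estimate trials (assumed noise)
lower = (targets - rng.uniform(5, 40, n)) % 360       # interval lower bounds (assumed)
width = rng.uniform(10, 80, n)                        # interval widths (assumed)

# Equal-probability model: draw one value uniformly within each reported interval.
draws = (lower + rng.uniform(0, 1, n) * width) % 360

mad_points = circ_dev_deg(points, targets).mean()
mad_draws = circ_dev_deg(draws, targets).mean()
print(f"MAD, point estimates:        {mad_points:.1f} deg")
print(f"MAD, random interval draws:  {mad_draws:.1f} deg")
```

Under the maximum-likelihood account the first value should be reliably smaller than the second; the abstract reports that, empirically, Bayesian analyses found no such accuracy advantage.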