Christopher Ackerman, Susan Courtney; Comparing Working Memory for Visual Item versus Relational Information. Journal of Vision 2010;10(7):710. doi: 10.1167/10.7.710.
Past research has probed the working memory representations of visual object features, but less is known about how visual relational information is stored in working memory. To investigate this, we employed a delayed-recognition behavioral paradigm using visual object features that also afford relational comparisons between objects: specifically, the magnitude dimensions of size and luminance. In a series of experiments, we examined whether working memory capacity for relations is similar to the capacity for objects and their features, whether memory for relational information is decomposed into magnitude and direction components, and whether relative magnitudes (e.g., "X much bigger than") are encoded similarly to absolute magnitudes (e.g., "X big"). Results for object features reproduce earlier findings for nonscalar visual dimensions. Accuracy was as high when subjects had to encode both size and luminance as when they had to encode only size or only luminance, indicating that multiple features of an item can be remembered as well as a single feature of an item. Relational magnitude, however, behaved differently. When subjects had to encode both the size differential and the luminance differential of two pairs of objects, accuracy was significantly lower than when they had to encode only the size differential or only the luminance differential of two pairs of objects. This suggests that visual comparative relations are not maintained in separate feature stores, nor are they automatically bound into an integrated, "object-like" multidimensional relational representation. Rather, size and luminance relational representations compete with each other for a limited shared memory resource, and the capacity of this resource is similar to that found for objects. These behavioral results are consistent with fMRI results from our lab comparing the neural representations of item-specific and relational information along the dimensions of size and luminance.