Vision Sciences Society Annual Meeting Abstract | August 2010
Comparing Working Memory for Visual Item versus Relational Information
Author Affiliations
  • Christopher Ackerman
    Department of Neuroscience, Johns Hopkins University
  • Susan Courtney
    Department of Neuroscience, Johns Hopkins University
    Department of Psychological and Brain Sciences, Johns Hopkins University
Journal of Vision August 2010, Vol. 10, 710. https://doi.org/10.1167/10.7.710
Abstract

Past research has probed the working memory representations of visual object features, but less is known about how visual relational information is stored in working memory. To investigate this, we employed a delayed recognition behavioral paradigm using visual object features that also afford relational comparisons between objects: specifically, the magnitude dimensions of size and luminance. In a series of experiments, we examined whether the working memory capacity for relations is similar to the capacity for objects and their features, whether memory for relational information is decomposed into magnitude and direction components, and whether relative magnitudes (e.g., “X much bigger than”) are encoded similarly to absolute magnitudes (e.g., “X big”). Results for object features reproduced earlier findings for nonscalar visual dimensions. Accuracy was as high when subjects had to encode both size and luminance as when they had to encode only one of those features, indicating that multiple features of an item can be remembered as well as a single feature of an item. Relational magnitude, however, behaved differently. When subjects had to encode both the size differential and the luminance differential of two pairs of objects, accuracy was significantly lower than when they had to encode only the size or luminance differential of two pairs of objects. This suggests that visual comparative relations are not maintained in separate feature stores, nor are they automatically bound into an integrated “object-like” multidimensional relational representation. Rather, size and luminance relational representations compete with each other for a limited shared memory resource, and the capacity of this resource is similar to that found for objects. These behavioral results are consistent with fMRI results from our lab comparing the neural representations of item-specific and relational information along the dimensions of size and luminance.

Ackerman, C., & Courtney, S. (2010). Comparing Working Memory for Visual Item versus Relational Information [Abstract]. Journal of Vision, 10(7):710, 710a. http://www.journalofvision.org/content/10/7/710, doi:10.1167/10.7.710.