Hrag Pailian, Elizabeth Tran, George Alvarez; Constraints on Information Compression in Visual Working Memory. Journal of Vision 2016;16(12):356. doi: https://doi.org/10.1167/16.12.356.
In standard working memory tasks, people can remember ~3 random colors. However, when colors are highly correlated (e.g., red often appears next to blue), people learn these regularities and use them to store more items in memory (Brady, Konkle, & Alvarez, 2009). This learning occurs even when participants do not notice the correlation, and is consistent with a model in which participants optimally compress redundant information. Here, we investigated whether the efficiency of learning and information compression is constrained by (1) the number of to-be-compressed color pairs, and (2) the amount of feature overlap across these color pairs. Participants saw displays of 8 colors, arranged into 4 concentric pairs (e.g., a red ring with a blue circle in the middle). Certain color pairs co-occurred more frequently than others (high-probability pairs, HPPs). In Experiment 1, 80% of the color pairs in each display were high-probability pairs, chosen from a set of 4 possible HPPs for one group of subjects and 8 possible HPPs for another group. We measured performance across blocks of trials (10 blocks x 60 trials), then modeled learning rates (α) using a Bayesian model, and compression using Huffman coding. The model results suggest that the two groups learned at similar rates (4-HPP group, α=.31, model fit r=-.70; 8-HPP group, α=.36, model fit r=-.65). In Experiment 2, we replicated the 8-HPP condition, but for this group colors could repeat across pairs. For example, one HPP might be red-outside/blue-inside, and another red-outside/green-inside. Learning rates for this group were slower (α=.50, model fit r=-.63), even though the total amount of information stored did not differ across groups (p=.53). Combined, these results suggest that it is possible to learn and compress information across many items, but that feature overlap among those items can reduce the efficiency of learning and compression.
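The compression measure above is Huffman coding, which assigns shorter codes to more frequent symbols. The sketch below illustrates the idea for a display set like the one described; it is a minimal illustration only, and the pair probabilities, symbol names (`hpp0`–`hpp3`, `lpp0`–`lpp3`), and function name are assumptions, not taken from the study:

```python
import heapq

def huffman_code_lengths(probs):
    """Return Huffman code length (in bits) for each symbol in `probs`.

    probs: dict mapping symbol -> probability (should sum to ~1).
    """
    # Heap entries: (subtree probability, tie-breaker, symbols in subtree).
    heap = [(p, i, [s]) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    lengths = {s: 0 for s in probs}
    tie = len(heap)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        # Merging two subtrees adds one bit to every code inside them.
        for s in syms1 + syms2:
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, tie, syms1 + syms2))
        tie += 1
    return lengths

# Hypothetical distribution: 4 high-probability pairs share 80% of the
# probability mass; 4 low-probability pairs share the remaining 20%.
probs = {f"hpp{i}": 0.20 for i in range(4)}
probs.update({f"lpp{i}": 0.05 for i in range(4)})

lengths = huffman_code_lengths(probs)
expected_bits = sum(probs[s] * lengths[s] for s in probs)
# A fixed-length code over 8 pair types would need 3 bits per pair;
# the Huffman code's expected length here is 2.8 bits, so frequent
# pairs cost less memory, which is the compression gain in the model.
```

Under this toy distribution the high-probability pairs receive shorter codes than the low-probability pairs, capturing the intuition that learned regularities free up capacity for additional items.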
Meeting abstract presented at VSS 2016