Visual input originates from multiple objects that each have many features, such as orientation, color, and motion. To deal with this deluge of information, it is useful to have a short-term buffer: visual short-term memory (VSTM). VSTM for objects that have more than one feature has been an area of enduring interest in cognitive psychology. One prominent question has been whether multifeature objects get stored in VSTM as entire objects or as loose collections of features. This question has multiple aspects, one of which is whether all features of a task-relevant object are stored automatically, regardless of the relevance of each individual feature (Alvarez & Cavanagh, 2004; Bays, Wu, & Husain, 2011; Fougnie, Asplund, & Marois, 2010; Jiang, Olson, & Chun, 2000; Lee & Chun, 2001; Luck & Vogel, 1997; Vogel, Woodman, & Luck, 2001; Wheeler & Treisman, 2002). If VSTM is object-based, then one could surmise that encoding a task-relevant feature of an object automatically causes irrelevant features of that object to be encoded as well (Hyun, Woodman, Vogel, Hollingworth, & Luck, 2009; Luria & Vogel, 2011; Shen, Tang, Wu, Shui, & Gao, 2013; Vogel et al., 2001; Yin et al., 2012).
This hypothesis has mostly been tested by examining whether the addition of an irrelevant feature decreases performance. Studies employing orientation-color change detection (Vogel et al., 2001) and color-shape change detection (Luria & Vogel, 2011) showed no effect, suggesting that people do not encode irrelevant features. However, these results can also be explained by the irrelevant feature drawing on an independent pool of memory resources rather than sharing resources with the relevant feature. In fact, Hyun et al. (2009) found the opposite result: Subjects were more error-prone when one object changed in its irrelevant feature. When the authors introduced changes in all objects, they found an even stronger impairment, leading them to conclude that irrelevant features are encoded. Recent studies (Shen et al., 2013; Yin et al., 2012) found similar effects and interpreted them as evidence that VSTM is object-based.
Although the origin of the differences between the results of these studies remains unclear, even the positive results leave open the question of how well the irrelevant feature is stored in VSTM, and in particular whether it is stored with the same precision as when that feature is relevant. To address these questions, it is insufficient to measure performance only on trials in which the relevant feature is probed: Data must be collected on irrelevant-feature trials to allow a comparison. This, however, brings about a problem: As soon as a subject experiences a surprise trial on which the irrelevant feature is probed, that feature becomes relevant. Therefore, each subject can be tested on only a single irrelevant-feature trial.
To solve this problem, we crowdsourced data using Amazon Mechanical Turk, an online platform for data collection. We used stimuli that each had both an orientation and a color. We crossed two experimental paradigms (change localization and delayed estimation) with two options for which feature was irrelevant (orientation or color), for a total of four experiments. We found that people could recall the irrelevant feature, suggesting that it is encoded automatically.