Abstract
The human visual system has been extensively trained on millions of natural images, giving it the opportunity to develop robust strategies for identifying exemplars of familiar categories. While it is known that memory capacity for visual images is massive (Standing, 1973), the fidelity of these representations had not been tested. In a recent study, we discovered that observers are able to remember specific details about thousands of objects (Brady et al., 2008). This suggests a massive memory capacity for object details, but it remains unclear whether this is a general property of memory that also holds for scenes. Here we showed observers 3000 exemplars of real-world scenes, representing hundreds of common categories of visual environments. Embedded within the stream were 1, 4, 16, or 64 exemplars of different scene categories (e.g., warehouses, beaches, streets). Each image was shown only once, for three seconds. At test, observers were shown an old and a new exemplar of a basic-level category (e.g., two streets, two cafes, two beaches) and had to choose which image they had seen. As expected, memory performance decreased as the number of exemplars in the study stream increased. However, the degree of interference from multiple exemplars was minimal, averaging only a 2% drop in memory performance with each doubling of the number of exemplars in memory: with 64 scene exemplars from a category in mind, observers could still distinguish one of those from a 65th exemplar with 76% accuracy. Even more remarkably, the drop in memory performance was identical to performance in a similar memory experiment involving images of real-world objects. These results suggest that high-fidelity storage is a general property of visual long-term memory, and that categorical interference effects are similar for objects and scenes.
Funded by an NSF CAREER award (0546262) to A.O., an NSF Graduate Research Fellowship to T.F.B., and an NDSEG fellowship to T.K.