Abstract
Can remembering ever be as fast as seeing? Prior work suggests not, as both serial (Sternberg, 1966) and logarithmic (Wolfe, 2013) temporal access costs have been observed in memory search for identity information. Here, we ask whether there are analogous costs for accessing spatial representations of objects’ locations. Participants used keypresses to move a “selection window” around a 4x4 grid of images of real-world objects. Their task was to sequentially select four target objects on each trial, though only the upcoming target was displayed on-screen. While targets changed between trials, the grid itself remained stable for 60 trials, enabling participants to learn the objects’ locations over time. In the Vision condition, all 16 grid objects were simultaneously visible. In the Memory condition, only the object currently inside the selection window was visible at any given moment. Initially, participants were slower in Memory than in Vision, but after ~40 trials performance became equally fast (and statistically indistinguishable) across the two conditions, reaching a stable plateau at ~8.5 sec/trial, implying that in well-learned environments, searching through memory can become just as fast as visual search. A second experiment used the same design and stimuli, but now all four targets were visible throughout each trial (though participants had to select them in a predetermined order). While Memory performance matched that of the first experiment, Vision was now significantly faster (~7.5 sec/trial), and the Condition-by-Experiment interaction was highly significant. Whereas visual search may use parallel selection mechanisms to plan efficient “routes” between multiple upcoming targets, memory search appears to be inherently serial, as only a single memory target can apparently be selected at a time. These results constitute evidence for a form of ultrafast human “cache” memory, by analogy to the sort of memory typically integrated into computer CPUs, and subsequent experiments will explore whether this form of memory is similarly capacity-limited.
Meeting abstract presented at VSS 2015