Abstract
Imagine searching a new kitchen for spoons. Before you find them, your eyes dwell briefly on eggs. We would assume that looking AT eggs would tend to improve subsequent search FOR eggs. However, our experiments show that this intuition is often wrong. We recorded eye movements while observers searched the same, continuously visible scene for 15 different objects. Over approximately 30 seconds of search, performance did not improve despite increasing scene familiarity: Finding the 15th object was no faster than finding the 1st. Incidental fixation on one object (e.g., eggs) while searching for another (e.g., spoons) did not benefit subsequent search for eggs. Moreover, a 30-second preview of the scene did not improve subsequent object search. Observers searched the scene as if they had never seen it before. Interestingly, when observers were asked to search for the same object in a second block, after hundreds of intervening searches in other scenes, the second search was hundreds of milliseconds faster. Memory for targets can guide search.
Hollingworth (2006) has shown that observers develop memory for more than just search targets when viewing a scene. Why didn't observers use memory during our initial object searches? We hypothesize that "contextual guidance" (the knowledge that, in kitchens, pots are often located on stoves) was fast and strong enough to render memory guidance useless. If so, memory might guide search when context is weakened. Observers performed 15 successive searches in artificial indoor scenes in which objects occupied contextually coherent (pot on stove) or incoherent (pot on floor) locations. Across the 15 searches within a scene, RT decreased by 700 ms when context was incoherent but by only 250 ms when it was coherent. In contextually incoherent scenes, incidental fixations on objects predicted subsequent RTs for those objects. These findings show that in incoherent scenes, when context fails, memory guides search.
This work was supported by grants to MV (DFG: VO 1683/1-1) and JMW (NEI EY017001, ONR N000141010278).