September 2011
Volume 11, Issue 11
Vision Sciences Society Annual Meeting Abstract
Does repeated search in scenes need memory? When contextual guidance fails, memory takes over
Author Affiliations
  • Melissa Vo
    Harvard Medical School, BWH
  • Jeremy Wolfe
    Harvard Medical School, BWH
Journal of Vision September 2011, Vol. 11, 1299.
Melissa Vo, Jeremy Wolfe; Does repeated search in scenes need memory? When contextual guidance fails, memory takes over. Journal of Vision 2011;11(11):1299.

© ARVO (1962-2015); The Authors (2016-present)

Imagine searching a new kitchen for spoons. Before you find them, your eyes dwell briefly on eggs. We would assume that looking AT eggs would tend to improve subsequent search FOR eggs. However, our experiments show that this intuition is often wrong. We recorded eye movements while observers searched the same, continuously visible scene for 15 different objects. Over approximately 30 seconds of search, performance did not improve despite increasing scene familiarity: finding the 15th object was no faster than finding the 1st. Incidental fixation on one object (e.g., eggs) while searching for another (e.g., spoons) did not benefit subsequent search for eggs. Moreover, a 30-second preview of the scene did not improve subsequent object search. Observers searched the scene as if they had never seen it before. Interestingly, when asked to search for the same object in a second block, after hundreds of intervening searches in other scenes, the second search was hundreds of milliseconds faster. Memory for targets can guide search.

Hollingworth (2006) has shown that observers develop memory for more than just search targets when viewing a scene. Why didn't observers use memory during our initial object searches? We hypothesize that “contextual guidance” (knowledge that in kitchens pots are often located on stoves) was fast and strong enough to render memory guidance useless. If that is so, memory might guide search when context is weakened. Observers performed 15 successive searches in artificial indoor scenes in which objects occupied contextually coherent (pot on stove) or incoherent (pot on floor) locations. Across 15 searches within a scene, RT decreased by 700 ms when context was incoherent but only 250 ms in coherent scenes. In contextually incoherent scenes, incidental object fixations predicted subsequent RTs for those objects. These findings show that in incoherent scenes, when context fails, memory guides search.

This work was supported by grants to MV (DFG: VO 1683/1-1) and JMW (NEI EY017001, ONR N000141010278). 
