September 2018
Volume 18, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2018
Two targets, held in memory, can guide search; four targets cannot.
Author Affiliations
  • Farahnaz Wick
    Harvard Medical School; Brigham and Women's Hospital
  • Gabriel Kreiman
    Harvard Medical School; Boston Children's Hospital
  • Jeremy Wolfe
    Harvard Medical School; Brigham and Women's Hospital
Journal of Vision September 2018, Vol.18, 288. doi:
Farahnaz Wick, Gabriel Kreiman, Jeremy Wolfe; Two targets, held in memory, can guide search; four targets cannot. Journal of Vision 2018;18(10):288.
© ARVO (1962-2015); The Authors (2016-present)

When you are looking for your car keys, umbrella, and that favorite coat, how do you decide where to look next with all of these items in your memory? In the experiments reported here, observers memorized grayscale pictures of 1, 2, or 4 targets and then searched for any one of these targets in a search array of 6 items. Observers responded with a keypress, indicating whether a target was present or absent. A single target appeared on 50% of trials. Eye movements and keypress reaction times were recorded. Accuracy was >95% when memory load was 1 or 2 and >90% when memory load was 4. Keypress reaction times increased with memory load, indicating that search became more difficult as more items were held in memory. If observers fixated items at random, they would require 3.5 fixations on average (assuming sampling without replacement). On target-present trials, when the memory load was 1, only 1.9 fixations were required, showing that fixations were "guided". With a memory load of 2, the average fixation count was 2.2, still significantly less than 3.5. However, with a memory load of 4, 3.2 fixations were required, not significantly different from random. These results suggest that visual search can be guided by features from two targets simultaneously, despite the distinct shape features present in those targets. As the number of targets increases, we infer that the number of distinct features becomes too large for effective guidance, introducing a strong constraint for computational models of visual search. We develop a biologically inspired model of visual search constrained by these experimental observations to investigate the top-down mechanisms that modulate the attentional map in hybrid search tasks, and we directly compare the model's predictions with the psychophysical measurements.
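The 3.5-fixation random baseline follows from the target occupying a uniformly random position in the fixation order: over 6 items fixated without replacement, the expected rank of the target is (6 + 1)/2 = 3.5. A minimal simulation sketch illustrating this baseline (the function name, trial count, and seed are illustrative, not from the abstract):

```python
import random

def mean_fixations_random_search(n_items=6, n_trials=100_000, seed=0):
    """Estimate the mean number of fixations needed to find one target
    among n_items when items are fixated in a random order without
    revisits (sampling without replacement)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        order = rng.sample(range(n_items), n_items)  # random fixation order
        target = rng.randrange(n_items)              # target's location
        total += order.index(target) + 1             # fixations until found
    return total / n_trials

# Closed form: expected rank of a uniformly placed item is (n + 1) / 2.
print(mean_fixations_random_search())  # close to 3.5 for n_items = 6
```

The closed form makes the comparison in the abstract concrete: 1.9 and 2.2 observed fixations sit well below the 3.5 chance level, while 3.2 does not.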

Meeting abstract presented at VSS 2018

