Abstract
When you are looking for your car keys, umbrella, and that favorite coat, how do you decide where to look next with all of these items in memory? In the experiments reported here, observers memorized grayscale pictures of 1, 2, or 4 targets and then searched for any one of these targets in a search array of 6 items. Observers responded with a keypress, indicating whether a target was present or absent. A single target appeared on 50% of the trials. Eye movements and keypress reaction times were recorded. Accuracy was >95% when the memory load was 1 or 2 and >90% when the memory load was 4. Keypress reaction times increased with memory load, indicating that search became more difficult as more items were held in memory. If observers fixated on items at random (sampling without replacement), they would require 3.5 fixations on average to find the target. On target-present trials, when the memory load was 1, only 1.9 fixations were required, showing that fixations were "guided". With a memory load of 2, the average fixation count was 2.2, still significantly fewer than 3.5. However, with a memory load of 4, 3.2 fixations were required, not significantly different from random search. These results suggest that visual search can be guided by features from two targets simultaneously, despite the distinct shape features present in those targets. As the number of targets increases, we infer that the number of different features becomes too large for effective guidance, which imposes a strong constraint on computational models of visual search. We develop a biologically inspired model of visual search, constrained by these experimental observations, to investigate the top-down mechanisms that modulate the attentional map in hybrid search tasks, and we directly compare the model with the psychophysical measurements.
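The 3.5-fixation baseline follows directly from the abstract's stated assumption: with 6 items and a single target, random fixation without replacement makes the target's fixation rank uniform on 1 through 6, so the expected count is (6 + 1) / 2 = 3.5. The following minimal simulation sketch (not part of the study's analysis code; item counts and names are illustrative) checks this baseline numerically:

    import random

    def fixations_to_find_target(n_items=6):
        """One simulated target-present trial: fixate items in a random
        order (sampling without replacement) and count fixations until
        the single target is reached."""
        order = random.sample(range(n_items), n_items)  # random fixation order
        target = random.randrange(n_items)              # target occupies one item
        return order.index(target) + 1                  # fixations needed

    # Monte Carlo estimate of the random-search baseline; the analytic
    # value is (n_items + 1) / 2 = 3.5 for a 6-item display.
    n_trials = 100_000
    mean_fix = sum(fixations_to_find_target() for _ in range(n_trials)) / n_trials
    print(f"mean fixations under random search: {mean_fix:.2f}")  # ~3.5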
Meeting abstract presented at VSS 2018