Abstract
To perform visual search, the primate visual system uses eye movements to direct the fovea at potential target locations in the environment. What are the rational eye movement strategies for a foveated visual system faced with the task of finding a target in a cluttered environment? Do humans employ rational eye movement strategies while searching? To answer these questions, we derived the Bayesian ideal visual searcher for tasks in which a known target is embedded at an unknown location within a background of 1/f noise. Next, we measured the detectability (d') of our target across the human retina and constrained the ideal searcher with the same d' map. We find that this ideal searcher displays many properties of human fixation patterns during search. For example, both the spatial distribution of human fixation locations and the distribution of human saccade lengths are similar to those of the ideal searcher. We also find that humans achieve nearly optimal performance in our task, even though they cannot integrate information perfectly across fixations. By analyzing the performance of the ideal searcher, we show that there is, in fact, only a small benefit from integrating information perfectly across fixations; much more important are efficient parallel processing of information on each fixation and efficient selection of fixation locations. To test the importance of fixation selection, we simulated searchers that do not select fixation locations optimally but are otherwise ideal. We find that humans substantially outperform searchers that select fixation locations at random (with or without replacement), allowing us to conclusively reject all possible random-search models. The searcher that always fixates the most likely target location achieves near-optimal performance, but distributes its fixations across the search area in a spatial pattern that differs from those of the human and ideal searchers (which are very similar).
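The core machinery described above can be illustrated with a toy simulation. The sketch below is a simplified, hypothetical construction, not the authors' implementation: a searcher on a small grid accumulates Gaussian template responses across fixations into a posterior over target locations (constrained by an assumed d' falloff with eccentricity), and two of the fixation-selection rules mentioned in the abstract are compared, the maximum-a-posteriori (MAP) rule and random selection with replacement. All parameters (grid size, d' falloff, stopping threshold) are illustrative assumptions.

```python
# Hedged sketch of a Bayesian searcher with a foveated d' map.
# Grid size, d' falloff, and stopping threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

GRID = 5
N = GRID * GRID
coords = np.array([(r, c) for r in range(GRID) for c in range(GRID)], float)

def dprime_map(fix):
    """Detectability d' at every location for a given fixation
    (assumed exponential falloff with retinal eccentricity)."""
    dist = np.linalg.norm(coords - coords[fix], axis=1)
    return 3.0 * np.exp(-dist / 1.5)

def run_search(target, pick_fixation, max_fix=100, threshold=0.99):
    """Accumulate log likelihoods across fixations.
    Returns (number of fixations, guessed target location)."""
    log_post = np.zeros(N)          # uniform prior over target locations
    fix = N // 2                    # start fixating the grid centre
    for t in range(1, max_fix + 1):
        d = dprime_map(fix)
        # Gaussian template responses: mean d' at the target, 0 elsewhere.
        x = rng.standard_normal(N)
        x[target] += d[target]
        # Log likelihood of "target at i" gains d_i * x_i - d_i^2 / 2.
        log_post += d * x - d ** 2 / 2
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post.max() >= threshold:
            return t, int(post.argmax())
        fix = pick_fixation(post)
    return max_fix, int(post.argmax())

map_rule = lambda post: int(post.argmax())      # fixate the MAP location
rand_rule = lambda post: int(rng.integers(N))   # random with replacement

def evaluate(rule, trials=200):
    nfix, correct = [], 0
    for _ in range(trials):
        target = int(rng.integers(N))
        n, guess = run_search(target, rule)
        nfix.append(n)
        correct += (guess == target)
    return float(np.mean(nfix)), correct / trials

map_fix, map_acc = evaluate(map_rule)
rand_fix, rand_acc = evaluate(rand_rule)
print(f"MAP searcher:    {map_fix:.1f} fixations, accuracy {map_acc:.2f}")
print(f"Random searcher: {rand_fix:.1f} fixations, accuracy {rand_acc:.2f}")
```

In this toy setting the MAP rule typically needs far fewer fixations than the random rule, echoing the abstract's finding that fixation selection matters more than perfect integration; the truly ideal searcher would instead choose each fixation to maximize the expected probability of correctly localizing the target, which is more expensive to compute and is not implemented here.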
Supported by NIH grant R01EY02688.