Abstract
A classic covert search paradigm is to measure search accuracy as a function of the number of potential target locations (the set size) at a fixed retinal eccentricity, which minimizes the differences in sensitivity across the potential locations. For well-separated targets there are many cases where the effect of set size is predicted by unlimited-capacity parallel processing (a Bayes-optimal decision process). Here we measured search accuracy for 19 well-separated potential target locations that tiled the central 16 deg of the visual field in a triangular array. The search display was presented for 250 ms (the duration of a typical fixation in overt search). Each location contained a 3.5 deg patch of white noise. On half the trials no target was present; on the other half, a small wavelet target was added to the center of one of the 19 locations. The task was to indicate the location of the target or that it was absent. To precisely characterize eccentricity effects, we measured in a separate experiment the detectability of the target at each location. Under the assumption of statistical independence across locations, we found that human search accuracy slightly exceeded that of the Bayes-optimal observer, and that the observers suffered a modest loss of sensitivity in the fovea (foveal neglect). Furthermore, the observers achieved this accuracy even though the Bayes-optimal decision process uses precise knowledge of the sensitivity (d’) at each potential location, which varied substantially across the search locations. These seemingly impossible results may be explained by two plausible factors. First, we show that a simple heuristic decision rule that assumes a fixed sensitivity at all potential locations is very close to optimal. Second, we show that intrinsic temporal variations in overall sensitivity could explain how search performance can slightly exceed the optimal performance predicted under the assumption of statistical independence.
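
As a rough illustration of the comparison described above, the following sketch simulates a signal-detection version of the task under an assumed equal-variance Gaussian response model with statistically independent locations. It compares the Bayes-optimal MAP rule, which uses the true d’ at each location, against the heuristic rule that assumes one fixed d’ everywhere. The d’ values, priors, and all identifiers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Illustrative Monte Carlo sketch (not the paper's code): compare the Bayes-optimal
# MAP rule, which uses the true d' at every location, against a heuristic rule
# that assumes a single fixed d' everywhere. The equal-variance Gaussian response
# model and the d' values below are assumptions made for illustration.

rng = np.random.default_rng(0)

M = 19                                   # potential target locations (from the abstract)
p_absent = 0.5                           # target absent on half the trials (from the abstract)
p_loc = (1.0 - p_absent) / M             # equal prior on each target location (assumed)
d_true = rng.uniform(1.0, 3.0, size=M)   # assumed location-specific sensitivities


def proportion_correct(n_trials, assumed_d):
    """Accuracy of a MAP rule that computes likelihoods with `assumed_d`.

    Responses are modeled as x_i ~ N(d'_i, 1) at the target location and
    N(0, 1) elsewhere, statistically independent across locations.
    """
    # -1 codes "target absent"; otherwise the target's location index.
    target = np.where(rng.random(n_trials) < p_absent,
                      -1, rng.integers(0, M, size=n_trials))
    x = rng.standard_normal((n_trials, M))
    present = target >= 0
    x[present, target[present]] += d_true[target[present]]

    # Log likelihood ratio (target at i vs. noise alone) under the assumed sensitivities.
    llr = assumed_d * x - 0.5 * assumed_d ** 2
    best = llr.argmax(axis=1)
    # MAP decision: report the best location only if its posterior beats "absent".
    say_present = llr[np.arange(n_trials), best] > np.log(p_absent / p_loc)
    response = np.where(say_present, best, -1)
    return np.mean(response == target)


n = 200_000
print("optimal (knows each d'):", proportion_correct(n, d_true))
print("heuristic (fixed d')   :", proportion_correct(n, np.full(M, d_true.mean())))
```

Under these assumptions the two rules differ only in the sensitivities plugged into the likelihoods, so running the sketch shows how little accuracy is lost by replacing the location-specific d’ values with a single representative value.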