Abstract
In laboratory visual search tasks, targets are typically presented on 50% of trials. However, in many important real-world search tasks (e.g., X-ray screening at airports, surveillance, routine screening in radiology), target-present trials are rare, and miss errors on these tasks can have serious consequences. We mimicked this situation in the laboratory by having Os search for targets (tools) among other objects (not tools) and varying the percentage of target-present trials. When targets were present on only 1% of 2000 trials, Os missed 42% of targets, far more than the 6% missed when the same targets were present on 50% of trials. To help real-world searchers avoid these catastrophic miss rates, we need to understand how the structure of the task influences error rates. Models of target-absent trials propose that Os set quitting criteria based on implicit and explicit feedback about their performance: they search longer after an error and terminate unsuccessful searches more quickly after accurate responses. These adaptive search termination rules become maladaptive when targets are rare. In our experiments, Os came to terminate target-absent trials with average RTs that were shorter than the average time needed to find targets on target-present trials. Can we ameliorate this situation? Again, we asked Os to search for tools among other objects. As in the first experiment, the critical target tool (for example, a drill) appeared on only 1% of the trials. Other tools were targets on 49% of the trials. Under these conditions, in which Os responded “yes” about as often as “no”, the error rate for critical rare targets dropped to 21%, a substantial improvement though far from ideal. Keeping Os' search termination criteria properly calibrated may lead to major increases in accuracy on tasks where accuracy really counts.