Abstract
In the typical visual search experiment, subjects search for targets that are present on half of the trials and absent on the other half. However, many real-world tasks involve targets that appear only occasionally. Given this, it is worth asking how subjects learn to perform such tasks effectively when they can only rarely assess the salience of targets. One possibility is that they develop an awareness of how thoroughly they have searched a complex scene, even when targets are absent. To test this hypothesis, we had subjects complete a relatively long (316-trial) search task in which only 5.3% of trials included targets. Stimuli were cluttered real-world scenes, mostly of messy desk drawers, and targets were defined by category (either some kind of medicine or a rubber band). To assess the degree to which each subject terminated search trials based on a correct assessment of scene complexity, the mean correct-trial RT for each of the 300 target-absent scenes was computed across all subjects. These group-mean item RTs were then correlated with each subject's individual RTs for the same stimuli to assess how consistent a given subject's search-termination times were with those of the group. These correlations were then used to predict each subject's target-detection success in multiple regressions that also included the subject's mean RT and the degree to which the subject slowed down following hits. Both the search-termination correlations and mean RT predicted hits, whereas only the search-termination correlations predicted false alarms. This suggests that an integral part of visual search is calibrating search behavior to scenes of varying complexity even when no targets are present.
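As a rough illustration only, the Python sketch below outlines the kind of analysis described above: per-scene group-mean RTs on correct target-absent trials, a per-subject correlation with those item means, and a multiple regression on subject-level predictors. The DataFrame layout, column names (e.g. `scene`, `rt`, `post_hit_slowing`), and the use of pandas/scipy/statsmodels are assumptions for the sketch, not the authors' actual pipeline.

```python
# Minimal sketch of the item-RT consistency analysis described in the abstract.
# Assumes a trial-level DataFrame `df` with columns: subject, scene, target_present,
# correct, rt. These names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

def termination_consistency(df):
    """Correlate each subject's target-absent RTs with the group-mean item RTs."""
    absent = df[(df.target_present == 0) & (df.correct == 1)]
    # Group-mean correct-trial RT for each target-absent scene, across all subjects
    item_means = absent.groupby("scene")["rt"].mean()
    rows = []
    for subj, sub in absent.groupby("subject"):
        subj_item_rt = sub.groupby("scene")["rt"].mean()
        common = subj_item_rt.index.intersection(item_means.index)
        r, _ = pearsonr(subj_item_rt[common], item_means[common])
        rows.append({"subject": subj,
                     "termination_r": r,            # search-termination consistency
                     "mean_rt": sub["rt"].mean()})  # overall target-absent speed
    return pd.DataFrame(rows).set_index("subject")

def predict_performance(subject_df, outcome):
    """Multiple regression of an outcome (e.g. hit rate or false-alarm rate) on the
    termination correlation, mean RT, and post-hit slowing (all subject-level)."""
    X = sm.add_constant(subject_df[["termination_r", "mean_rt", "post_hit_slowing"]])
    return sm.OLS(subject_df[outcome], X).fit()
```

Note that the abstract describes item means pooled across all subjects, which is what this sketch computes; a leave-one-subject-out version of the group means would avoid any self-correlation but is not what is described here.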
Supported by NSF grant SES-0214969.