Abstract
Duncan and Humphreys (Psychological Review, 96(3), 1989) predicted that visual search efficiency would vary as a function of both target-distractor and distractor-distractor similarity. However, applying such concepts to search for targets in images containing complex, naturalistic objects is difficult because it is hard to quantify the degree of similarity (or difference) between elements of the image. Given that we now have metrics which predict image differences reasonably well (Visual Difference Predictors, or VDPs; Parraga et al., Vision Research, 45, 2005), we aim to use the output of these metrics to predict search performance in scenes containing natural objects. We thus generate search images (consisting of a target and distractors in discrete locations on a uniform background, cf. traditional search experiments) in which increases in target-distractor similarity or in distractor-distractor heterogeneity should both result in decreased search efficiency. The current study examines observers' (n = 5) visual search efficiency for natural objects while manipulating these factors together with display size. Observers were shown a new target for each block of trials. Observer reaction times were modeled with neural networks, the inputs of which were the VDP's predictions of visual similarity. This resulted in reliable predictions of search efficiency. A post-hoc examination of the neural-net activation patterns enabled reconstruction of Duncan and Humphreys' original prediction of search efficiency as a function of target-distractor and distractor-distractor similarity. We have therefore demonstrated the possibility of using VDPs to predict search performance in natural images, showing the utility of the Duncan and Humphreys model for such scenes. Further work is needed to extend this method to predict search performance in scenes in which the background is continuous.
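To make the modeling step concrete, the sketch below fits a small neural network that maps similarity measures (standing in for VDP outputs) and display size to reaction time, then queries the fitted network over a grid of similarities to recover a Duncan-and-Humphreys-style efficiency surface. This is a minimal illustration only: the variable names, network architecture, and synthetic data are assumptions and do not correspond to the study's actual stimuli, VDP outputs, or trained networks.

```python
# Minimal sketch (not the study's implementation): model reaction times with a
# small neural network whose inputs are similarity values, then reconstruct a
# search-efficiency surface over target-distractor and distractor-distractor
# similarity. All inputs here are synthetic and purely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical per-trial predictors: target-distractor similarity (td_sim),
# mean distractor-distractor similarity (dd_sim), and display size.
n_trials = 2000
td_sim = rng.uniform(0.0, 1.0, n_trials)
dd_sim = rng.uniform(0.0, 1.0, n_trials)
display_size = rng.choice([4, 8, 16], n_trials)

# Synthetic reaction times following the qualitative Duncan & Humphreys (1989)
# pattern: search slows as target-distractor similarity rises and as distractor
# heterogeneity rises (i.e. as dd_sim falls), scaled by display size.
rt = (450
      + 40 * display_size * td_sim * (1.0 - dd_sim)
      + rng.normal(0, 30, n_trials))

# Fit a small multilayer perceptron to predict RT from the three inputs.
X = np.column_stack([td_sim, dd_sim, display_size])
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, rt)

# Query the fitted network over a similarity grid at a fixed display size to
# reconstruct predicted search cost as a function of the two similarity axes.
grid = np.linspace(0.0, 1.0, 11)
tt, dd = np.meshgrid(grid, grid)
queries = np.column_stack([tt.ravel(), dd.ravel(), np.full(tt.size, 8)])
surface = model.predict(queries).reshape(tt.shape)
print(surface.round(0))
```

In the study itself, the network inputs would be the VDP's predicted differences between target and distractor images (and among distractor images) rather than the synthetic values used here.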