Abstract
The COVER and FLNN algorithms were originally proposed for visual search in computer vision (Avraham and Lindenbaum, 2006). These models capture the dependency of search difficulty on distracter homogeneity and target-distracter similarity, as first suggested by Duncan and Humphreys (1989). In this study, we extended these models to account for internal noise and evaluated their ability to predict human search performance. In four experiments, observers searched for a tilted target presented among distracters of different orientations (orientation search) or for a gray target appearing among distracters of different colors (color search). Distracter homogeneity and target-distracter similarity were systematically manipulated, and the resulting search performance was used to test our models. We compared our models with several prominent models of visual search, including a signal-detection-theory (SDT) based model (e.g., Palmer, Ames, and Lindsey, 1993), the Temporal-Serial model (e.g., Bergen and Julesz, 1983; Eckstein, 1998), the saliency model (Rosenholtz, 1999), and the Best-Normal model (Rosenholtz, 2001). Of all these models, ours produced predictions closest to human search performance.