Abstract
The goal of this study is to develop a computational model of visual search that takes into account various effects of grouping. To that end, we present two models that predict the effects of element similarity (e.g., distractor homogeneity, target-distractor similarity) on visual search, and a third, extended model that also accounts for the effect of spatial proximity. The first model provides a measure of search difficulty, while the second is an algorithmic search mechanism. Both are based on the distribution of pairwise feature differences between display elements. In a first set of experiments (involving orientation and color search), distractor homogeneity and target-distractor similarity were systematically manipulated while the spatial locations of the elements were random. Comparing these models' predictions with those of several prominent models of visual search revealed that our models' predictions were the closest to human performance. In the third, extended model, the pairwise feature differences are replaced by a distance measure that is a weighted combination of feature difference and spatial distance: d = a·Df + (1 − a)·Ds, where Df is the feature difference and Ds is the spatial distance, both after normalization. This change enables the model to predict, for instance, that visual search is easier when stimuli with similar features are also spatially clustered than when the same stimuli are randomly located. In a second set of experiments we systematically manipulated both the elements' feature similarity and their spatial proximity. The findings suggest that the extended model can adequately predict human performance using the same value of a for all participants. This value (a ≈ 0.4) indicates that the spatial distance between elements has a slightly stronger effect on search performance than the feature differences do. This is consistent with previous findings regarding the combined effects of proximity and similarity on perceptual grouping.
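To make the combined distance measure concrete, the following is a minimal sketch of how the pairwise distances d = a·Df + (1 − a)·Ds might be computed for a display. The function name, the one-dimensional feature representation, and the max-based normalization are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the combined distance measure d = a*Df + (1 - a)*Ds.
# All names and the normalization scheme are illustrative assumptions.
import numpy as np

def combined_distances(features, positions, a=0.4):
    """Pairwise distances d = a*Df + (1 - a)*Ds, where Df (feature
    difference) and Ds (spatial distance) are each normalized to [0, 1].

    features  -- array of shape (n,): one scalar feature per element
                 (assumes a 1-D feature, e.g., orientation or a color axis)
    positions -- array of shape (n, 2): (x, y) location of each element
    a         -- relative weight of the feature term (0 <= a <= 1)
    """
    features = np.asarray(features, dtype=float)
    positions = np.asarray(positions, dtype=float)

    # Pairwise absolute feature differences, shape (n, n).
    Df = np.abs(features[:, None] - features[None, :])

    # Pairwise Euclidean spatial distances, shape (n, n).
    Ds = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)

    # Normalize each term by its maximum so the two ranges are comparable
    # (one plausible reading of "after normalization").
    if Df.max() > 0:
        Df = Df / Df.max()
    if Ds.max() > 0:
        Ds = Ds / Ds.max()

    return a * Df + (1 - a) * Ds
```

With a = 0.4, as estimated for the participants in the second set of experiments, the spatial term receives the larger weight (1 − a = 0.6), so spatially clustered similar elements yield smaller combined distances, consistent with the easier search the model predicts for clustered displays.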