September 2011
Volume 11, Issue 11
Vision Sciences Society Annual Meeting Abstract
Modeling Combined Proximity-Similarity Effects in Visual Search
Author Affiliations
  • Tamar Avraham
    Computer Science Department, Technion I.I.T., Haifa, Israel
  • Yaffa Yeshurun
    Psychology Department, University of Haifa, Haifa, Israel
  • Michael Lindenbaum
    Computer Science Department, Technion I.I.T., Haifa, Israel
Journal of Vision September 2011, Vol.11, 1295. doi:

Tamar Avraham, Yaffa Yeshurun, Michael Lindenbaum; Modeling Combined Proximity-Similarity Effects in Visual Search. Journal of Vision 2011;11(11):1295.

© ARVO (1962-2015); The Authors (2016-present)


The goal of this study is to develop a computational model of visual search that takes various grouping effects into account. To that end, we present two models that predict the effects of element similarity (e.g., distractor homogeneity, target-distractor similarity) on visual search, and a third, extended model that also accounts for the effect of spatial proximity. The first model provides a measure of search difficulty; the second is an algorithmic search mechanism. Both are based on the distribution of the pairwise feature differences between display elements. In a first set of experiments (involving orientation and color search), distractor homogeneity and target-distractor similarity were systematically manipulated while the spatial locations of the elements were random. Comparing these models' predictions with those of several prominent models of visual search revealed that our models' predictions were the closest to human performance. In the third, extended model, the pairwise feature differences are replaced by a distance measure that is a superposition of feature difference and spatial distance: d = a·Df + (1 − a)·Ds, where Df is the feature difference and Ds is the spatial distance, each after normalization. This change enables the model to predict, for instance, that visual search is easier when stimuli with similar features are also spatially clustered than when the same stimuli are randomly located. In a second set of experiments we systematically manipulated both the elements' feature similarity and their spatial proximity. The findings suggest that the extended model can adequately predict human performance using the same value of a for all participants. This value (a ≈ 0.4) suggests that the spatial distance between elements has a slightly stronger effect on search performance than the feature differences do. This is consistent with previous findings regarding the combined effects of proximity and similarity on perceptual grouping.
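The combined distance measure d = a·Df + (1 − a)·Ds can be sketched in code. The following is an illustrative NumPy implementation, not the authors' model: the one-dimensional feature representation, Euclidean spatial distance, and max-normalization of each term are assumptions made for the sketch.

```python
import numpy as np

def combined_distance(features, positions, a=0.4):
    """Pairwise combined proximity-similarity distances:
    d = a*Df + (1-a)*Ds, with Df (feature differences) and
    Ds (spatial distances) each normalized to [0, 1].

    features  : (n,) array of 1-D feature values (e.g., orientation)
    positions : (n, 2) array of element (x, y) locations
    a         : feature weight; a = 0.4 weights space slightly more
    """
    # Pairwise absolute feature differences, shape (n, n)
    Df = np.abs(features[:, None] - features[None, :])
    # Pairwise Euclidean spatial distances, shape (n, n)
    diff = positions[:, None, :] - positions[None, :, :]
    Ds = np.sqrt((diff ** 2).sum(axis=-1))
    # Normalize each matrix to [0, 1] by its maximum (assumed scheme)
    if Df.max() > 0:
        Df = Df / Df.max()
    if Ds.max() > 0:
        Ds = Ds / Ds.max()
    return a * Df + (1 - a) * Ds
```

Under this measure, two elements that are both similar in features and spatially close receive a small combined distance, so feature-similar elements that are also clustered group more readily, matching the model's prediction for clustered versus random displays.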

