Robert Alexander, Gregory Zelinsky; Visual Similarity Predicts Categorical Search Guidance. Journal of Vision 2010;10(7):1316. doi: https://doi.org/10.1167/10.7.1316.
How a target category is represented and used to guide search is largely unknown. Of particular interest is how categorical guidance is possible given the likely overlap in visual features between the target category representation and different-category real-world objects. In Experiment 1 we explored how the visual similarity relationships between a target category and random-category distractors affect search guidance. A web-based task was used to quantify the visual similarity between two target classes (teddy bears or butterflies) and random-object distractors. We created displays consisting of high-similarity distractors, low-similarity distractors, and "mixed" displays with high-, intermediate-, and low-similarity items. Subjects made faster manual responses and fixated fewer distractors on low-similarity displays than on high-similarity displays. On mixed trials, first fixations were more frequently on high-similarity distractors (bear = 49%; butterfly = 58%) than on low-similarity distractors (9%–12%). Experiment 2 used the same high/low/mixed similarity conditions, but these conditions were now created using similarity estimates from a computational model (Zhang, Samaras, & Zelinsky, 2008) that ranked objects in terms of color, texture, and shape similarity. The same data patterns were found, suggesting that categorical search is affected by visual similarity rather than conceptual similarity (which might have played some role in the web-based estimates). In Experiment 3 we pitted the human and model estimates against each other by populating displays with distractors rated as similar by: subjects (but not the model), the model (but not subjects), or both subjects and the model. Distractors ranked as highly similar by both the model and subjects attracted the most initial fixations (31%–41%). However, when the human and model estimates conflicted, more first fixations were on distractors ranked as highly similar by subjects (28%–30%) than on those ranked as highly similar by the model (14%–25%). This suggests that the two types of visual similarity rankings may capture different sources of variability in search guidance.