Vision Sciences Society Annual Meeting Abstract  |   August 2010
Visual Similarity Predicts Categorical Search Guidance
Author Affiliations
  • Robert Alexander
    Department of Psychology, Stony Brook University
  • Gregory Zelinsky
    Department of Psychology, Stony Brook University
Journal of Vision August 2010, Vol.10, 1316. doi:https://doi.org/10.1167/10.7.1316
Abstract

How a target category is represented and used to guide search is largely unknown. Of particular interest is how categorical guidance is possible given the likely overlap in visual features between the target category representation and different-category real-world objects. In Experiment 1 we explored how the visual similarity relationships between a target category and random-category distractors affect search guidance. A web-based task was used to quantify the visual similarity between two target classes (teddy bears or butterflies) and random-object distractors. We created displays consisting of high-similarity distractors, low-similarity distractors, and “mixed” displays with high-, intermediate-, and low-similarity items. Subjects made faster manual responses and fixated fewer distractors on low-similarity displays than on high-similarity displays. On mixed trials, first fixations were more frequently on high-similarity distractors (bear = 49%; butterfly = 58%) than on low-similarity distractors (9%–12%). Experiment 2 used the same high/low/mixed similarity conditions, but these conditions were now created using similarity estimates from a computational model (Zhang, Samaras, & Zelinsky, 2008) that ranked objects in terms of color, texture, and shape similarity. The same data patterns were found, suggesting that categorical search is affected by visual similarity rather than conceptual similarity (which might have played some role in the web-based estimates). In Experiment 3 we pitted the human and model estimates against each other by populating displays with distractors rated as similar by subjects (but not the model), by the model (but not subjects), or by both subjects and the model. Distractors ranked as highly similar by both the model and subjects attracted the most initial fixations (31%–41%). However, when the human and model estimates conflicted, more first fixations were on distractors ranked as highly similar by subjects (28%–30%) than on those ranked as highly similar by the model (14%–25%). This suggests that the two types of visual similarity rankings may capture different sources of variability in search guidance.

Alexander, R., & Zelinsky, G. (2010). Visual Similarity Predicts Categorical Search Guidance [Abstract]. Journal of Vision, 10(7):1316, 1316a, http://www.journalofvision.org/content/10/7/1316, doi:10.1167/10.7.1316.
Footnotes
 NIMH grant 2 RO1 MH063748.