December 2022, Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract
Visual search and quantitative stimulus similarity
Author Affiliations
  • Brett Bahle
    University of California - Davis
  • Steven J. Luck
    University of California - Davis
Journal of Vision, December 2022, Vol. 22, 4340. https://doi.org/10.1167/jov.22.14.4340
Abstract

Duncan and Humphreys (1989) provided evidence that visual search behavior can be described by target-distractor similarity and distractor-distractor similarity: as target-distractor similarity increases, search efficiency decreases, and as distractor-distractor similarity increases, search efficiency increases. On the basis of this finding, they proposed a theory of attentional selection in which the similarity between a target template and items in the visual world predicts search behavior. However, their similarity metric was qualitative, making it difficult to extrapolate their findings in a domain-general way. Recently, machine learning tools have been developed that make it possible to compute quantitative similarity scores between different stimuli. We used both previously computed representational spaces, as in Hebart et al. (2020), and novel representational spaces derived from multiple neural networks to obtain similarity scores among more than 1800 images of natural objects. These images then served as stimuli in a search task. On each trial, participants were cued to the target with a basic-level category label and, while their eye movements were tracked, performed a present/absent search for the cued target. Each display contained distractors ranging from high to low similarity to the target, and participants spent more time looking at an item as its quantitatively estimated similarity to the target increased. Our approach provides a new quantitative model for predicting attentional dwell times during visual search.
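The quantitative similarity scores described above can be derived from vector embeddings of the stimulus images. Below is a minimal Python sketch, assuming precomputed per-image embeddings such as the SPoSE dimensions released with the THINGS database (Hebart et al., 2020); the file name, array shape, and helper function are hypothetical illustrations, not the authors' actual pipeline.

import numpy as np

# Hypothetical embedding matrix: one row per object image, for example
# the SPoSE embeddings accompanying the THINGS database (Hebart et al., 2020).
embeddings = np.load("things_embeddings.npy")  # assumed shape: (n_images, n_dims)

# Normalize each row so that dot products become cosine similarities.
unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

# Pairwise similarity matrix: sim[i, j] is the quantitative similarity
# between image i and image j.
sim = unit @ unit.T

def target_distractor_similarity(target_idx, distractor_idxs):
    """Similarity of each distractor in a display to the cued target."""
    return sim[target_idx, distractor_idxs]

# Example: similarities of three distractors to target image 0.
print(target_distractor_similarity(0, [12, 47, 301]))

Per-item dwell times could then be regressed on these scores to test whether gaze duration increases with similarity to the target, as the abstract reports.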
