August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract
Explaining the guidance of search for real-world objects using quantitative similarity
Author Affiliations
  • Brett Bahle
    University of California, Davis
  • Steven J. Luck
    University of California, Davis
Journal of Vision August 2023, Vol. 23, 5519. https://doi.org/10.1167/jov.23.9.5519
Abstract

Visual attention is guided during search toward stimuli that match the features of the target. These features, often termed the "attentional set," are thought to be maintained in working memory, particularly when searching for a frequently changing target. But what features are represented in the attentional set, especially when the target is a complex, real-world object? Here, we used both computational approaches (such as ConceptNet) and crowd-sourced data (Hebart et al., 2020) to quantitatively model multiple levels of representational abstraction for search targets and, correspondingly, for search distractors. Specifically, we propose that the objects of search (both the known target and all possible distractor objects) can each be defined as a vector of feature values at different levels of abstraction, from low-level, image-based features to high-level, semantic features. Moreover, we propose that the extent to which an item in a search display attracts attention scales directly with its quantitative similarity to the target's features. Across different search tasks, we found evidence that the level of abstraction of a given representational space selectively explained variance in search behavior. Specifically, both pre-saccadic mechanisms (as indexed by the probability of fixating an item) and post-saccadic mechanisms (as indexed by item dwell times) were explained by the quantitative similarity between the search target and a given item in the display. Our approach provides a new quantitative model for predicting attentional allocation during visual search.
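
To make the modeling idea concrete, the sketch below shows one plausible way to score display items by their quantitative similarity to the target in feature spaces at different levels of abstraction. It is an illustrative assumption, not the authors' analysis code: the feature vectors are random placeholders standing in for the kinds of representations the abstract mentions (image-based features; semantic embeddings such as ConceptNet relations or the Hebart et al., 2020 object dimensions), and the cosine-similarity scoring and printed readout are choices made here purely for illustration.

# Illustrative sketch only: score each display item by its similarity to the
# search target in multiple representational spaces. All feature values below
# are random placeholders.
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical feature spaces at two levels of abstraction; in practice these
# would come from image-based models and semantic embeddings, not random draws.
rng = np.random.default_rng(0)
objects = ["target", "distractor_1", "distractor_2", "distractor_3"]
low_level = {name: rng.normal(size=128) for name in objects}  # image-based features
semantic = {name: rng.normal(size=49) for name in objects}    # semantic dimensions

# Score every display item by its similarity to the target in each space.
scores = {
    name: {
        "low_level": cosine_similarity(low_level["target"], low_level[name]),
        "semantic": cosine_similarity(semantic["target"], semantic[name]),
    }
    for name in objects
    if name != "target"
}

# The modeling claim is that fixation probability (pre-saccadic guidance) and
# dwell time (post-saccadic processing) should increase with these scores;
# relating them to observed eye-movement data would be a regression step
# omitted here.
for name, s in scores.items():
    print(f"{name}: low-level sim = {s['low_level']:+.3f}, semantic sim = {s['semantic']:+.3f}")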
