September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract | September 2021
Quantifying distraction in a visual search task
Author Affiliations
  • Daniela Teodorescu
    University of Alberta
  • Alona Fyshe
    University of Alberta
  • Dana Hayward
    University of Alberta
Journal of Vision September 2021, Vol. 21, 2747. https://doi.org/10.1167/jov.21.9.2747
      Daniela Teodorescu, Alona Fyshe, Dana Hayward; Quantifying distraction in a visual search task. Journal of Vision 2021;21(9):2747. https://doi.org/10.1167/jov.21.9.2747.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Variants of the visual search task have been used to provide key insights into how individuals search for relevant information amongst irrelevant content. Using this task, researchers have sought to answer questions about when attention is necessary for search, the role of inhibition of return during search, and whether social information is “more distracting” than other types of content. Despite this widespread use, the semantic meaning of the chosen distractors is rarely taken into account. Here we therefore sought to quantify distraction at the level of semantic similarity, rather than relying on the more common technique of controlling only for low-level luminance information. We quantified semantic similarity with word vectors: numerical representations of words produced by machine learning models trained on large collections of text. The more semantically and syntactically similar two words are, the closer their vectors lie, which allows word similarity to be measured directly. We chose 5 target categories with 2 targets each, and created 10 levels of varying similarity between the target and a main distractor. On each trial, participants were told which target item to search for amongst a display of 6 images, and were instructed to click on the target image with a cursor as soon as they found it. For half of the trials, the target and main distractor images were beside each other; for the other half, they were across from each other. Accuracy declined as semantic similarity increased, and participants were faster to find the target when the main distractor was located beside, rather than across from, the target. Together, our data suggest that both the similarity between a target and distractor and their relative locations affect attention.
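
The abstract does not name the embedding model or the distance measure used to construct the ten similarity levels. As a rough illustration only, the sketch below scores target-distractor word pairs with cosine similarity over pretrained GloVe vectors loaded through gensim; both the GloVe model and the cosine metric are assumptions for illustration, not details taken from the study.

```python
# Minimal sketch of scoring target-distractor semantic similarity with word vectors.
# Assumptions (not specified in the abstract): pretrained GloVe embeddings loaded
# via gensim, and cosine similarity as the closeness measure between vectors.
import gensim.downloader as api

# Load 50-dimensional GloVe vectors (an illustrative choice of model).
vectors = api.load("glove-wiki-gigaword-50")

def semantic_similarity(target: str, distractor: str) -> float:
    """Cosine similarity between the word vectors of two item labels."""
    return float(vectors.similarity(target, distractor))

# Example: rank hypothetical candidate distractors for a target word,
# one possible way to build graded similarity levels for a search display.
target = "dog"
candidates = ["wolf", "cat", "horse", "car", "lamp"]
ranked = sorted(candidates, key=lambda w: semantic_similarity(target, w), reverse=True)
for word in ranked:
    print(f"{target} ~ {word}: {semantic_similarity(target, word):.3f}")
```

Ranking candidates this way yields a graded scale from highly similar to dissimilar distractors, which is the kind of manipulation the abstract describes as its ten similarity levels.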
