Abstract
Variants of the visual search task have been used to provide key insights into how individuals search for relevant information amongst irrelevant content. Using this task, researchers have sought to answer questions about when attention is necessary for search, the role of inhibition of return during search, and whether social information is “more distracting” than other types of content. Despite this widespread use, the semantic meaning of the chosen distractors has generally not been taken into account. Here we therefore sought to quantify distraction at the level of semantic similarity, rather than relying on the more common technique of controlling only for low-level luminance information. We quantified semantic similarity using word vectors: numerical representations of words produced by machine learning models trained on large collections of text. The more semantically and syntactically similar two words are, the closer their word vectors lie, allowing word similarity to be measured. We chose 5 target categories with 2 targets each, and then created 10 levels of similarity between each target and a main distractor. On each trial, participants were told which target item to search for amongst a display of 6 images, and were instructed to click on the target image with a cursor as soon as they found it. For half of the trials, the target and main distractor images appeared beside each other; for the other half, they appeared across from each other. Accuracy declined as semantic similarity increased, and participants were faster to find the target when the main distractor was located beside, rather than across from, the target. Together, our data suggest that both the semantic similarity between a target and a distractor and their relative locations influence attention during visual search.
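As a rough illustration of the similarity measure described above, semantic similarity between a target word and a candidate distractor can be computed from their word vectors. The abstract does not name a specific embedding model or metric; the small hand-written vectors and the cosine similarity used below are assumptions for the sketch, and in practice the vectors would come from a pretrained model such as word2vec or GloVe.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two word vectors (closer to 1.0 = more similar)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings keyed by word; real vectors would be taken from a
# model trained on a large text corpus rather than written by hand.
embeddings = {
    "dog": np.array([0.21, -0.48, 0.33, 0.10]),
    "cat": np.array([0.19, -0.41, 0.30, 0.15]),
    "car": np.array([-0.52, 0.07, 0.44, -0.28]),
}

# Distractors that are more semantically similar to the target yield higher scores,
# which is the quantity manipulated across the 10 similarity levels.
print(cosine_similarity(embeddings["dog"], embeddings["cat"]))  # relatively high
print(cosine_similarity(embeddings["dog"], embeddings["car"]))  # relatively low
```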