Vision Sciences Society Annual Meeting Abstract  |  October 2020
Volume 20, Issue 11  |  Open Access
The Costly Influence of Task-Irrelevant Semantic Information on Attentional Allocation
Author Affiliations
  • Ellie Robbins
    The George Washington University
  • Joe Nah
    University of California, Davis
  • Dick Dubbelde
    The George Washington University
  • Sarah Shomstein
    The George Washington University
Journal of Vision, October 2020, Vol. 20, 1525. https://doi.org/10.1167/jov.20.11.1525
© ARVO (1962-2015); The Authors (2016-present)
Abstract

High-level object features, such as semantic information, have been shown to bias attention even when task-irrelevant. However, the exact mechanism by which this attentional guidance is instantiated remains unclear. We hypothesized that task-irrelevant semantic information organizes visual input through mechanisms of grouping. Analogous to grouping by similarity in low-level features, we predicted that semantic information organizes visual input by semantic relatedness. Specifically, when multiple task-irrelevant objects are presented, attention is guided, or prioritized, to the subset of objects that are semantically related, creating a grouping-like effect. In the present studies, participants were presented with an array of four or six objects. The objects were either colored squares (low-level information only) or grayscale real-world objects (high-level information only). On any given trial, half of the objects belonged to a single category (e.g., clothing, or blue squares) while the other half were chosen randomly from other semantic or color categories, respectively. A target was presented on a randomly chosen object, independent of relatedness, rendering color and semantics task-irrelevant in each experiment. For both colored squares and real-world objects, when the two groups had an equal number of members (grouped by color or by semantic relatedness), target identification was faster and more accurate when targets appeared on the color- or semantically related objects. Interestingly, when the size of the related group increased (e.g., three related objects and one unrelated object at set size four), performance was slower for targets presented on objects in the larger group than for targets on objects in the smaller group. Taken together, these results support the semantic grouping hypothesis: semantic information, just like color, organizes visual input to enable efficient attentional allocation. Importantly, given the task-irrelevant nature of the semantic information, our results suggest that semantic grouping is an automatic process.
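To make the design concrete, below is a minimal sketch of how one trial's display could be constructed. The category names, item lists, set sizes, and function names are illustrative assumptions; the abstract specifies only the "clothing" example, set sizes of four or six, and that the target location is independent of relatedness. This is not the authors' actual stimulus code.

```python
import random

# Hypothetical semantic categories; the actual stimulus sets are not
# specified in the abstract beyond the "clothing" example.
CATEGORIES = {
    "clothing": ["shirt", "hat", "shoe", "glove", "sock", "scarf"],
    "tools": ["hammer", "wrench", "saw", "drill", "pliers", "screwdriver"],
    "fruit": ["apple", "banana", "pear", "grape", "peach", "plum"],
}

def make_trial(set_size=4, related_count=None):
    """Build one display: `related_count` objects drawn from a single
    category, the rest drawn from other categories. The target is placed
    on a random object, so relatedness is task-irrelevant."""
    if related_count is None:
        related_count = set_size // 2  # the equal-groups condition
    related_cat = random.choice(list(CATEGORIES))
    related = random.sample(CATEGORIES[related_cat], related_count)
    # Draw the remaining objects, without replacement, from all other categories.
    pool = [item for c in CATEGORIES if c != related_cat for item in CATEGORIES[c]]
    unrelated = random.sample(pool, set_size - related_count)
    objects = related + unrelated
    random.shuffle(objects)
    target_index = random.randrange(set_size)  # independent of relatedness
    return {
        "objects": objects,
        "related_category": related_cat,
        "target_index": target_index,
        "target_on_related": objects[target_index] in related,
    }

# Example: an unequal-groups trial (three related, one unrelated) at set size four.
trial = make_trial(set_size=4, related_count=3)
print(trial)
```

Because the target lands on related and unrelated objects with probability proportional only to group size, any reaction-time advantage for targets on the related subset can be attributed to grouping by relatedness rather than to task strategy.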
