Vision Sciences Society Annual Meeting Abstract
August 2023, Volume 23, Issue 9 (Open Access)
Super-additive associative learning benefit for repeated task-relevant and task-irrelevant elements in visual search
Author Affiliations & Notes
  • Emma M. Siritzky
    The George Washington University, Department of Psychological and Brain Sciences
  • Samoni Nag
    The George Washington University, Department of Psychological and Brain Sciences
  • Chloe Callahan-Flintoft
    U.S. Army Research Laboratory
  • Stephen R. Mitroff
    The George Washington University, Department of Psychological and Brain Sciences
  • Dwight J. Kravitz
    The George Washington University, Department of Psychological and Brain Sciences
  • Footnotes
    Acknowledgements  This research was funded by US Army Research Office grant #W911NF-16-1-0274, US Army Research Laboratory Cooperative Agreements #W911NF-19-2-0260 & #W911NF-21-2-0179, and National Science Foundation grant #2022572.
Journal of Vision August 2023, Vol. 23, 5865. https://doi.org/10.1167/jov.23.9.5865
Abstract

Visual search—looking for targets among distractors—underlies many critical professions (e.g., aviation security, radiology, military operations), making it important to understand the mechanisms that govern performance. Feature repetition across trials benefits subsequent search performance; however, this has not been thoroughly studied through the lens of associative learning, wherein relationships between temporally or spatially co-occurring stimuli are repeated and learned across consecutive search trials. Complex visual search tasks provide a window into associative learning that can potentially inform a debate about whether the learning operates over task-irrelevant information (e.g., backgrounds, distractors). The "associative blocking" account suggests that only task-relevant, highly salient features bind with targets. Yet recent findings of trial sequence effects in search suggest that even task-irrelevant information impacts subsequent performance. Accordingly, the current study hypothesized that search performance is influenced by a mechanism of indiscriminate implicit learning wherein all information, regardless of task-relevance, is processed and available for learning. Performance was assessed for task-relevant and task-irrelevant features repeating both together and independently across consecutive trial pairs. Data were drawn from a massive (>3.8B trials, >15.5M participants) visual search dataset (Airport Scanner; Kedlin Co.). Contrary to the blocking account, the co-occurrence of both task-irrelevant and task-relevant information influenced performance. Specifically, the performance advantage for consecutive trials containing the same target and same irrelevant feature (e.g., bag-type) exceeded the summed benefit of each element repeating individually. Preliminary findings on the relative Euclidean distance between the repeated targets in the search arrays provide possible evidence for an allocentric representation relative to the bag.
The results suggest that learning may be a natural consequence of visual processing that is strengthened by, but not reliant on, task relevance, implying that attentional selection may be unnecessary for associative learning. In sum, the current study indicates that implicit learning, even of associations, can shape behavior without directed attention.
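The super-additivity comparison described in the abstract can be sketched as follows. This is an illustrative sketch only, not the authors' analysis pipeline; the condition labels and response times below are hypothetical values chosen to show the logic of the test (joint repetition benefit exceeding the sum of the individual benefits):

```python
# Hypothetical illustration of a "super-additive" repetition benefit.
# All RT values are made up; the real study used the Airport Scanner dataset.

def repetition_benefit(rt_baseline, rt_repeat):
    """Benefit (in ms) of a repetition condition relative to no repetition."""
    return rt_baseline - rt_repeat

# Hypothetical mean response times (ms) for consecutive trial pairs:
rt_none = 1000.0         # neither the target nor the bag-type repeats
rt_target_only = 950.0   # only the task-relevant target repeats
rt_bag_only = 980.0      # only the task-irrelevant bag-type repeats
rt_both = 900.0          # target and bag-type repeat together

benefit_target = repetition_benefit(rt_none, rt_target_only)  # 50 ms
benefit_bag = repetition_benefit(rt_none, rt_bag_only)        # 20 ms
benefit_both = repetition_benefit(rt_none, rt_both)           # 100 ms

# Super-additivity: the joint benefit exceeds the sum of the individual ones.
is_super_additive = benefit_both > (benefit_target + benefit_bag)
print(is_super_additive)
```

Under an additive (independent) account, the joint benefit would equal the sum of the two individual benefits; a reliably larger joint benefit is the signature of associative learning between the target and the task-irrelevant feature.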
