Open Access
Vision Sciences Society Annual Meeting Abstract | December 2022
Learned associations bias the contents of the attentional template during visual search
Author Affiliations
  • Zhiheng Zhou
    University of California Davis
  • Joy Geng
    University of California Davis
Journal of Vision December 2022, Vol.22, 4102. doi:https://doi.org/10.1167/jov.22.14.4102
Abstract

Theories of attention posit the existence of an attentional template that holds target information in working memory and is used to prioritize sensory processing. In natural environments, however, target features are not the only source of information useful for search. Here, we investigated the effect of a learned association on attentional template representations and guided search. Participants first learned the relationship between four specific faces and four scene categories through an associative learning task in which they were explicitly told about each pairing (e.g., Jenny is a ranger and thus she appears in the forest) and asked to indicate whether a pairing was “correct” until a criterion of 90% accuracy was reached. After training was complete, participants (N=660) engaged in an online cued-face search task. On each trial, a single face cue was followed by a search display containing two faces, each superimposed on a scene image. The target face could appear on its associated scene (valid trial) or an unassociated scene (invalid trial). Although the scene information was task-irrelevant, participants in Experiment 1 were faster and more accurate at locating the cued face on valid trials, when it appeared with its associated scene. These results provide evidence that learned associations modulate the allocation of attention and facilitate search for the target. In Experiment 2, we used fMRI to examine the underlying proactive neural representation of the attentional template in advance of visual search. We found evidence of sensory representations of both the target face and the target-associated scene following the cue but prior to the onset of the search display. Taken together, our findings suggest that the attentional template can include information that is associated with the target, presumably because it can be used to help find and identify the target more rapidly.

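The key behavioral measure in Experiment 1 is the cue-validity effect on response time (invalid-trial RT minus valid-trial RT). As a rough illustration only, the Python sketch below computes that effect from simulated trial-level data; the trial count, timing values, and the 40 ms advantage for valid trials are hypothetical assumptions for demonstration, not the authors' data or analysis code.

    import numpy as np

    # Hypothetical single-participant data for the Experiment 1 design:
    # each trial is valid (target face on its associated scene) or invalid
    # (target face on an unassociated scene), with a response time in seconds.
    rng = np.random.default_rng(0)
    n_trials = 120
    valid = rng.random(n_trials) < 0.5                    # ~half valid, half invalid
    # Assumed (illustrative) effect: valid trials ~40 ms faster on average.
    rt = rng.normal(loc=0.70, scale=0.10, size=n_trials) - 0.04 * valid

    # Cue-validity effect = mean invalid RT minus mean valid RT.
    validity_effect = rt[~valid].mean() - rt[valid].mean()
    print(f"Validity effect: {validity_effect * 1000:.1f} ms")

A positive validity effect, as reported in the abstract, would indicate that the task-irrelevant associated scene facilitated localization of the cued face.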