September 2019
Volume 19, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Selection and Enhancement: Modeling Attentional Capture during Visual Search
Author Affiliations & Notes
  • Andrew Lovett
    U.S. Naval Research Laboratory
  • Will Bridewell
    U.S. Naval Research Laboratory
  • Paul Bello
    U.S. Naval Research Laboratory
Journal of Vision September 2019, Vol. 19, 131b.
Andrew Lovett, Will Bridewell, Paul Bello; Selection and Enhancement: Modeling Attentional Capture during Visual Search. Journal of Vision 2019;19(10):131b.

© ARVO (1962-2015); The Authors (2016-present)

Our goal is to explore the concrete mechanisms underlying visual attention. Previously, we developed a computational model of multiple-object tracking that relies on two attentional mechanisms: selection, which enables an object in the visual field to receive further processing, resulting in representations stored in visual short-term memory (VSTM); and enhancement, which improves sensitivity to stimuli in the spatial regions around recently selected objects, increasing the likelihood that those stimuli will be selected. Here, we generalize selection and enhancement to visual search. Recent work suggests three factors govern attentional capture during search: bottom-up salience; top-down goals, possibly provided by verbal cues that describe the search target; and selection history, as in intertrial priming, where a target similar to the previous trial’s target is easier to find. Although there are important distinctions between these factors, verbal cueing and intertrial priming may both be supported by featural enhancement. Similar to spatial enhancement during object tracking, featural enhancement increases the likelihood that stimuli with visual properties similar to a recently selected item will be selected. We developed a computational model that relies on three key components: segmentation, which divides the visual input into candidate objects for selection; salience, which scores each candidate based on its contrast to local and global surroundings; and enhancement, which scores each candidate based on featural similarity to recently selected objects represented in VSTM. Scores from salience and enhancement are combined to determine which candidate is selected. To support top-down goals, previously selected objects are represented in long-term memory (LTM). A verbal cue (e.g., “orange”) causes the model to retrieve an LTM representation matching that cue and add it to VSTM, so that its features will be enhanced. 
Preliminary findings suggest the model accounts for the contributions of all three factors governing attentional capture during search.
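The selection step described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the scalar "color" feature, the particular similarity and contrast functions, and the equal weighting of salience and enhancement scores are all assumptions made for the sketch.

```python
# Sketch of the model's selection step: candidates are scored by bottom-up
# salience and by featural enhancement from VSTM, the scores are combined,
# and the highest-scoring candidate is selected. Objects are reduced to a
# single scalar "color" feature in [0, 1] for illustration.

def salience_score(candidate, others):
    """Feature contrast of a candidate against the other candidates
    (a stand-in for the model's local/global contrast computation)."""
    if not others:
        return 0.0
    return sum(abs(candidate["color"] - o["color"]) for o in others) / len(others)

def enhancement_score(candidate, vstm):
    """Featural similarity to recently selected objects held in VSTM."""
    if not vstm:
        return 0.0
    return max(1.0 - abs(candidate["color"] - m["color"]) for m in vstm)

def select(candidates, vstm, w_sal=0.5, w_enh=0.5):
    """Combine the two scores (equal weights are an assumption) and
    return the winning candidate."""
    def combined(c):
        others = [o for o in candidates if o is not c]
        return (w_sal * salience_score(c, others)
                + w_enh * enhancement_score(c, vstm))
    return max(candidates, key=combined)

def verbal_cue(cue, ltm, vstm):
    """Top-down goal: retrieve the LTM representation matching a verbal
    cue and add it to VSTM, so its features are enhanced thereafter."""
    match = next((m for m in ltm if m.get("label") == cue), None)
    if match is not None:
        vstm = vstm + [match]
    return vstm

# Cueing "orange" biases selection toward the featurally similar candidate.
candidates = [{"color": 0.1}, {"color": 0.9}, {"color": 0.5}]
ltm = [{"label": "orange", "color": 0.9}]
vstm = verbal_cue("orange", ltm, [])
winner = select(candidates, vstm)
```

With no items in VSTM, selection falls back to pure salience; the same `enhancement_score` pathway would also cover intertrial priming, since a selected target's representation enters VSTM and boosts similar candidates on the next trial.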

Acknowledgement: Office of Naval Research 
