Abstract
Our goal is to explore the concrete mechanisms underlying visual attention. Previously, we developed a computational model of multiple-object tracking that relies on two attentional mechanisms: selection, which enables an object in the visual field to receive further processing, resulting in representations stored in visual short-term memory (VSTM); and enhancement, which improves sensitivity to stimuli in the spatial regions around recently selected objects, increasing the likelihood that those stimuli will be selected. Here, we generalize selection and enhancement to visual search. Recent work suggests that three factors govern attentional capture during search: bottom-up salience; top-down goals, possibly provided by verbal cues that describe the search target; and selection history, as in intertrial priming, where a target similar to the previous trial’s target is easier to find. Although there are important distinctions among these factors, verbal cueing and intertrial priming may both be supported by featural enhancement. Like spatial enhancement during object tracking, featural enhancement increases the likelihood that stimuli with visual properties similar to a recently selected item will be selected. We developed a computational model that relies on three key components: segmentation, which divides the visual input into candidate objects for selection; salience, which scores each candidate based on its contrast with its local and global surroundings; and enhancement, which scores each candidate based on its featural similarity to recently selected objects represented in VSTM. Scores from salience and enhancement are combined to determine which candidate is selected. To support top-down goals, previously selected objects are also represented in long-term memory (LTM). A verbal cue (e.g., “orange”) causes the model to retrieve an LTM representation matching that cue and add it to VSTM, so that its features will be enhanced. Preliminary findings suggest that the model accounts for the contributions of all three factors governing attentional capture during search.
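To make the selection step concrete, the sketch below illustrates in Python how salience and enhancement scores might be combined, and how a verbal cue could move an LTM representation into VSTM. Everything specific here is an assumption rather than the model's actual implementation: the Gaussian similarity metric, the weighted-sum combination rule, the weight w, and all function and variable names are hypothetical, since the abstract does not specify them.

```python
import numpy as np

# Minimal sketch of selection as described in the abstract. The similarity
# metric, combination rule, and all names are hypothetical assumptions.

def featural_similarity(candidate, memory_item, sigma=1.0):
    """Gaussian similarity between two feature vectors (an assumed metric)."""
    return float(np.exp(-np.sum((candidate - memory_item) ** 2) / (2 * sigma ** 2)))

def enhancement_score(candidate, vstm):
    """Score a candidate by its best featural match to any item in VSTM."""
    if not vstm:
        return 0.0
    return max(featural_similarity(candidate, item) for item in vstm)

def select(candidates, salience_scores, vstm, w=0.5):
    """Combine salience and enhancement (a weighted sum -- an assumption)
    and return the index of the selected candidate."""
    combined = [
        (1 - w) * s + w * enhancement_score(c, vstm)
        for c, s in zip(candidates, salience_scores)
    ]
    return int(np.argmax(combined))

def verbal_cue(label, ltm, vstm):
    """On a verbal cue (e.g., 'orange'), retrieve the matching LTM
    representation and add it to VSTM so its features are enhanced."""
    if label in ltm:
        vstm.append(ltm[label])

# Toy usage: the cue loads an orange-like feature vector into VSTM, and
# enhancement tips selection toward the featurally similar candidate even
# though the other candidate has higher bottom-up salience.
ltm = {"orange": np.array([1.0, 0.5])}  # hypothetical stored feature vector
vstm = []
verbal_cue("orange", ltm, vstm)
candidates = [np.array([0.9, 0.6]), np.array([0.1, 0.2])]
salience = [0.3, 0.5]
assert select(candidates, salience, vstm) == 0
```

In this sketch, taking the maximum similarity over VSTM items means any one recently selected object, whether carried over from the previous trial or retrieved from LTM by a verbal cue, can enhance a matching candidate, which is one way a single enhancement mechanism could produce both intertrial priming and verbal-cueing effects.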
Acknowledgement: Office of Naval Research