Abstract
Selective attention prioritizes a subset of sensory input for further processing. What gets prioritized is partially experience-dependent: Locations that were important in the past are more likely to be attended in the future. However, when a person moves, their location relative to previously attended regions changes. How do moving observers acquire stable attentional biases in space? We tested participants in a visual search task performed on a display laid flat on a stand. Some locations of the display were more likely than others to contain the target. Unlike in most previous work, participants moved on each trial. In two experiments, high-probability locations were stable in the environment, but unstable relative to the viewer due to movement. When participants were told where the high-probability locations were, they showed an attentional bias toward those locations. However, in the absence of explicit instructions, participants failed to acquire an attentional bias toward the high-probability locations even after several hundred trials. In two other experiments, high-probability locations were variable in the environment, but stable relative to the viewer (e.g., always to the viewer’s lower left). Participants failed to develop an attentional bias toward the high-probability locations, even when they were told where those locations were. Additional experiments showed that incidentally learning where a target is likely to be depends on the stability of both the viewer and the environment. However, once acquired, incidentally learned attentional biases move with the viewer. We conclude that attentional biases toward target-rich locations are directed by two mechanisms: incidental learning and endogenous (goal-driven) attention. Incidental learning codes attended locations in a viewer-centered reference frame. It is useful when viewer perspectives are limited. The use of a truly environment-centered attentional map may depend on endogenous control, which may rely on qualitatively different spatial representations than does incidental learning.
Meeting abstract presented at VSS 2013