Abstract
We present several experiments demonstrating that observers can represent precise visual information outside the focus of attention, but concerning objects as a group rather than as individuals. We call these group representations “ensembles” because they represent accurate statistical characteristics of the group of elements as a whole while lacking an accurate representation of local object information.
First we investigated a simple ensemble feature: the center of mass of a group of objects. We used a modified multiple-object tracking task in which observers attentively tracked a subset of moving objects while ignoring others. On each trial, some targets or distractors randomly disappeared. Observers could accurately report the local position of a single missing target but performed at chance when judging the local position of a missing distractor. However, when estimating the center of mass, observers were well above chance for both targets and distractors, suggesting that they accurately represented the center of mass of the set of unattended items. A second series of experiments investigated the representation of more complex ensemble statistics by manipulating the spatial layout of a background texture. While tracking moving objects, observers were more likely to notice local orientation changes in the background texture when those changes altered its global structure than when they did not.
To account for these data, we propose three main claims: (1) the visual system measures ensemble statistics, which consist of compact, global featural representations that abstract away from the local details of an image; (2) focal attention creates an isolation field that provides access to the local feature values at a particular spatial location; (3) information outside the focus of attention remains consciously accessible in the form of an ensemble representation that lacks local detail but nevertheless carries a precise statistical summary of the visual scene.
Author GAA was supported by NIH/NEI Grant #F32 EY016982.