Abstract
Ensemble perception allows the limited-capacity visual system to efficiently consolidate noisy input from the environment, constructing reliable representations that guide attention and support the classification of features and objects (Whitney & Yamanashi Leib, 2018). Despite its importance, few computational models provide a formal, process-level account of ensemble perception across a range of experimental conditions and stimuli. That is the goal of the current work: we propose a novel ensemble model motivated by a recent signal detection model of memory (Schurgin, Wixted, & Brady, 2020). According to the proposed model, items evoke distributed patterns of familiarity over feature space (e.g., a color evokes a pattern of familiarity over all color channels), and ensemble representations reflect the global sum of these signals across all items: a location-independent distribution over which features are present. We further assume that individuals report on their memory of the ensemble by selecting the feature value that generates the maximum familiarity signal after the signals are corrupted by noise. We leverage this set of minimal assumptions to capture the entire distribution of individuals’ errors on an ensemble continuous-report task using solely those individuals’ estimates of performance on a separate visual working memory (VWM) task. That is, we account for the full distribution of errors on the ensemble task with zero free parameters. The ensemble model was assessed in three experiments probing memory for ensemble color, in which we varied the number of items, the color range, and the presence of an outlier. To assess the generalizability of our modeling results across stimulus spaces, we also evaluated the model on ensemble processing of shapes. We discuss our model and results in the context of current theories of ensemble perception and population-coding models of ensemble representation and memory.
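The model described in the abstract (familiarity patterns summed across items, corrupted by noise, with report by maximum familiarity) can be illustrated with a minimal simulation sketch. This is not the authors' implementation: the exponential similarity function, its decay constant `tau`, the `dprime` signal-strength scaling, and the 360-channel circular color space are all illustrative assumptions.

```python
import numpy as np

def circ_dist(a, b, n=360):
    """Shortest distance between points on a circular feature space of size n."""
    d = np.abs(a - b) % n
    return np.minimum(d, n - d)

def simulate_report(item_colors, dprime=2.0, tau=20.0, n_channels=360, rng=None):
    """Sketch of a max-familiarity ensemble report (hypothetical parameterization).

    Each item evokes a familiarity pattern over all color channels that falls
    off with similarity (here an assumed exponential with decay `tau`); the
    ensemble signal is the global sum of these patterns across items. Channel-
    wise Gaussian noise then corrupts the signal, and the reported color is the
    channel with maximum noisy familiarity.
    """
    rng = np.random.default_rng(rng)
    channels = np.arange(n_channels)
    # Familiarity pattern per item: exponential similarity on the circle.
    dists = circ_dist(channels[None, :], np.asarray(item_colors)[:, None])
    patterns = np.exp(-dists / tau)
    ensemble = dprime * patterns.sum(axis=0)            # global sum across items
    noisy = ensemble + rng.standard_normal(n_channels)  # memory noise
    return int(np.argmax(noisy))                        # max-familiarity report
```

Running `simulate_report` over many trials yields a full error distribution whose spread is controlled entirely by the noisy-signal parameters, consistent with the abstract's claim that ensemble errors can be predicted without fitting free parameters to the ensemble data.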