Abstract
Visual search is heavily affected by recent selection history, as is evident from the phenomenon of intertrial priming: repeating target and/or distractor features speeds selection of the target, relative to feature changes. Traditionally, this has been interpreted as resulting from a boosting of target features and a suppression of distractor features. However, several findings have cast doubt on the idea that target and distractor features contribute to priming independently. Instead, their relation in feature space may matter more than their absolute individual values. Such relational priming is observed for primitive features like color and luminance, but also for somewhat artificial features such as 'spikedness'. We modeled intertrial priming using an existing model of bottom-up visual processing (Attention based on Information Maximization, or AIM; Bruce & Tsotsos, 2009, Journal of Vision), which uses features derived from regularities in the visual environment. We extended this model with a mechanism that suppresses the feature gain of all stimuli present in a display, while specifically increasing the gain of target features. This mechanism allowed us to model a wide array of priming results, for visual search in color, luminance, size, 'spikedness', and 'shape'. Moreover, it naturally yielded relational priming in both primitive and more complex feature dimensions. We additionally report a series of experiments that investigate what in a display elicits suppression, using reaction time and the saccade global effect as dependent measures. The model has consequences well beyond the priming literature by detailing how attentional deployments leave a prolonged mark on the visual system.
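To make the proposed mechanism concrete, the sketch below illustrates one way such a gain update could be implemented. It is not the authors' code: the function name, parameter values, and vector representation of feature channels are all assumptions, intended only to show the two-step logic of display-wide suppression followed by target-specific boosting.

```python
import numpy as np

def update_gains(gains, display_features, target_features,
                 suppression=0.1, boost=0.3):
    """Hypothetical one-trial gain update for the priming mechanism.

    gains            : (n_features,) current gain of each feature channel
    display_features : (n_features,) activation of each channel by any
                       stimulus in the display (target or distractor)
    target_features  : (n_features,) activation of each channel by the target
    suppression, boost : assumed free parameters, not values from the study
    """
    # Suppress every feature that any stimulus in the display activates.
    gains = gains - suppression * display_features
    # Selectively re-boost the channels activated by the target.
    gains = gains + boost * target_features
    # Keep gains non-negative.
    return np.clip(gains, 0.0, None)
```

Under this scheme, distractor features receive only suppression, while target features receive suppression plus a larger boost; the net gain of any channel therefore depends on where target and distractors sit relative to each other in feature space, which is one way relational (rather than feature-specific) priming could emerge.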
Meeting abstract presented at VSS 2015