Abstract
Incidentally learned regularities about the world, such as the typical location or color of objects, guide real-world behavior and are associated with faster response times in visual search tasks, even when we are not explicitly aware of these regularities. What stage of visual processing is affected by this type of incidental learning, and how quickly do these changes emerge? To explore this, we recorded EEG while participants (N=20) viewed a circular visual search array and discriminated the gap direction of a Landolt C with a left or right gap among seven distractors with top or bottom gaps. Critically, the four items on the left side of the display appeared in a different color than the four items on the right side, and unbeknownst to the participants, we manipulated the color probabilities to induce learned feature-based attention. Three colors were used across the experiment, selected from an equiluminant color wheel, spaced 120° apart, and randomized across participants. On 60% of trials, the target appeared in the ‘rich’ color (frequent, valid trials; 864 total); on another 20% of trials, a ‘sparse’-colored item was the target, paired with rich-colored distractors (invalid trials; 288 total); on the remaining 20% of trials, the two sparse colors appeared together (neutral trials; 288 total). We observed a latency shift of the N2pc ERP component, a neural index of early attentional selection, such that it peaked approximately 15 ms earlier on frequent, valid trials relative to neutral trials, and approximately 30 ms earlier relative to invalid trials. This difference became apparent by the second block of 240 trials and persisted throughout the experiment, suggesting rapid and flexible tuning of attention to the frequent target color. These results provide neural evidence that learned feature-based attention can strongly influence early stages of attentional selection in support of efficient search behavior.
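
To make the trial-type proportions concrete, the following is a minimal Python sketch of the design described above. It assumes 6 blocks of 240 trials (1440 total), inferred from the reported counts (864 + 288 + 288) and the block size mentioned in the results; the condition labels and block structure are illustrative, not the authors' actual stimulus code.

    import random

    # Sketch of the 60/20/20 trial-type split (assumed 6 blocks of 240 trials).
    N_BLOCKS = 6
    TRIALS_PER_BLOCK = 240

    # Per-block counts: 144 valid (rich-color target), 48 invalid (sparse-color
    # target with rich-color distractors), 48 neutral (two sparse colors only).
    block_conditions = (
        ["valid"] * int(0.6 * TRIALS_PER_BLOCK)
        + ["invalid"] * int(0.2 * TRIALS_PER_BLOCK)
        + ["neutral"] * int(0.2 * TRIALS_PER_BLOCK)
    )

    trial_sequence = []
    for block in range(N_BLOCKS):
        shuffled = block_conditions.copy()
        random.shuffle(shuffled)  # randomize trial order within each block
        trial_sequence.extend(shuffled)

    # Sanity check against the totals reported in the abstract.
    assert trial_sequence.count("valid") == 864
    assert trial_sequence.count("invalid") == 288
    assert trial_sequence.count("neutral") == 288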