Abstract
It is well established that the visual system can extract statistical regularities in transitional probabilities between objects in a serial stream, making statistical learning paradigms well suited to studying the neural mechanisms of learning. For example, previous fMRI studies have revealed that anticipating a stimulus that is predicted by statistical regularities in a sequence can produce item-specific activity even when the anticipated stimulus itself is not shown (Kok, Failing, & de Lange, 2014; Puri, Wojciulik, & Ranganath, 2009). However, little is known about the time course of these anticipatory effects at temporal resolutions finer than that afforded by fMRI. Thus, the present study used EEG with a statistical learning paradigm in which auditory syllables were paired with either faces or objects in an initial acquisition phase, such that certain syllables were always paired with specific faces, certain syllables were always paired with specific objects, and a subset of syllables were not assigned to any picture. Following acquisition, participants took part in a second session that was identical to acquisition except that 25% of (anticipated) face and object pictures were randomly omitted. The critical questions are whether anticipated face/object pictures still produce category-specific EEG activity (e.g., the face-specific N170 ERP) when they are unexpectedly omitted, and at what point in the EEG time course such category-specific activity emerges. Critically, in the absence of the anticipated stimulus, the waveform elicited in response to the paired associate closely resembles the waveform observed when the anticipated stimulus is actually presented. The results shed new light on the neural mechanisms underlying visual statistical learning and provide insight regarding the extent to which category-specific anticipatory activity is intentional versus automatic.
Meeting abstract presented at VSS 2017