Abstract
Categorization refers to the process of mapping continuous sensory inputs onto discrete concepts. It is a cornerstone of flexible behavior that allows organisms to generalize existing knowledge to novel stimuli and to discriminate between physically similar yet conceptually different stimuli. Humans and other animals can readily learn arbitrary novel categories, and this learning “distorts” perceptual sensitivity such that discrimination performance for categorically distinct exemplars is increased (acquired distinctiveness) and discrimination performance for categorically identical exemplars is reduced (acquired similarity). A recent imaging study identified a possible basis for these distortions by demonstrating that category learning biases neural representations of to-be-categorized stimuli at the earliest stages of the visual system (V1–V3). However, the temporal dynamics of these biases are poorly understood. On the one hand, category biases in reconstructed representations could reflect changes in early sensory processing, in which case they should manifest shortly after the appearance of a to-be-categorized stimulus. On the other hand, these biases could reflect changes in post-sensory processing (e.g., decision making or response selection), in which case they should appear shortly before the onset of a behavioral response. Here, we report data from three experiments designed to evaluate these alternatives. In each experiment, we recorded high-density EEG while participants learned to categorize simple visual stimuli (orientations and locations) into discrete groups, then used an inverted encoding model to reconstruct time-resolved representations of these stimuli on a trial-by-trial basis. In all three experiments, robust category-selective biases emerged within 100–200 ms of stimulus onset and persisted until participants’ behavioral report (typically 800–1200 ms after stimulus onset). Our findings indicate that category learning alters relatively early stages of visual processing.
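For readers unfamiliar with the reconstruction approach mentioned above, the sketch below illustrates the general logic of an inverted encoding model (IEM): idealized channel tuning functions are used to estimate a channels-to-electrodes weight matrix from training trials, and that mapping is then inverted to recover a channel response profile for each held-out trial. This is a minimal, illustrative sketch under assumed settings (raised-cosine basis, 9 channels, 64 electrodes, simulated single-time-point data); the function names and parameters are hypothetical and do not reproduce the authors' actual analysis pipeline.

```python
import numpy as np

def make_basis(n_channels=9, power=8):
    """Raised-cosine channel tuning functions tiling 0-180 deg of orientation (assumed form)."""
    centers = np.arange(n_channels) * (180.0 / n_channels)       # preferred orientations (deg)

    def channel_responses(orientations_deg):
        ori = np.asarray(orientations_deg, dtype=float)
        # Wrap orientation differences into [-90, 90) so 0 and 180 deg are treated as identical.
        d = (ori[:, None] - centers[None, :] + 90.0) % 180.0 - 90.0
        return np.cos(np.deg2rad(d)) ** power                    # (n_trials, n_channels)
    return channel_responses

def fit_iem(train_eeg, train_ori, channel_responses):
    """Estimate a channels-to-electrodes weight matrix by least squares.
    train_eeg: (n_trials, n_electrodes); train_ori: (n_trials,) in degrees."""
    C = channel_responses(train_ori)                              # (n_trials, n_channels)
    W, *_ = np.linalg.lstsq(C, train_eeg, rcond=None)             # (n_channels, n_electrodes)
    return W

def reconstruct(test_eeg, W):
    """Invert the encoding model: recover a channel response profile for each test trial."""
    C_hat, *_ = np.linalg.lstsq(W.T, test_eeg.T, rcond=None)      # (n_channels, n_trials)
    return C_hat.T                                                # (n_trials, n_channels)

if __name__ == "__main__":
    # Hypothetical usage on simulated data for a single EEG time point.
    rng = np.random.default_rng(0)
    basis = make_basis()
    ori = rng.uniform(0, 180, size=200)                           # training orientations (deg)
    true_w = rng.normal(size=(9, 64))                             # assumed channel->electrode map
    eeg = basis(ori) @ true_w + 0.1 * rng.normal(size=(200, 64))  # simulated electrode data
    W = fit_iem(eeg, ori, basis)
    profiles = reconstruct(eeg[:5], W)                            # (5 trials, 9 channels)
    print(profiles.shape, profiles.argmax(axis=1))                # peak channel per trial
```

In a time-resolved analysis of the kind described above, this fit-and-invert step would be wrapped in a cross-validation loop and repeated at each EEG time point, yielding trial-by-trial reconstructions whose category-selective biases can be tracked from stimulus onset to the behavioral response.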