Abstract
Category learning warps perceptual space by enhancing the discriminability of physically similar exemplars from different categories and minimizing differences between equally similar exemplars from the same category, but the neural mechanisms responsible for these changes are unknown. One possibility is that categorization alters how visual information is represented by sensory neural populations. Here, we used a combination of fMRI, EEG, and computational modeling to test this possibility. In Experiment 1, we used fMRI and an inverted encoding model (IEM) to estimate population-level feature representations while participants classified a set of orientations into two discrete groups (Freedman & Assad, 2006). We reasoned that if category learning alters representations of sensory information, then orientation-selective responses in early visual areas should be biased according to category membership. Indeed, representations of orientation in visual areas V1-V3 were biased away from the actual stimulus orientation and towards the center of the appropriate category. These biases predicted participants' behavioral choices, and their magnitudes scaled inversely with the angular distance separating a specific orientation from the category boundary (i.e., larger biases were observed for orientations adjacent to the boundary relative to orientations farther from the boundary). In Experiment 2, we recorded EEG over occipitoparietal electrode sites while participants performed a similar categorization task. This allowed us to generate time-resolved representations of orientation and track the temporal dynamics of category biases. We observed biases as early as 50-100 ms after stimulus onset, suggesting that category learning alters how visual information is represented by sensory neural populations.
Meeting abstract presented at VSS 2017
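For readers unfamiliar with the approach, the following is a minimal sketch of the kind of inverted encoding model (IEM) analysis described above, assuming a half-cosine channel basis and the usual train-then-invert linear recipe. The function names, channel count, and exponent are illustrative assumptions for exposition, not the authors' actual analysis code.

```python
import numpy as np

def channel_basis(thetas_deg, n_channels=9, power=7):
    """Idealized channel responses to each orientation (0-180 deg space).

    Channels are half-cosine functions raised to `power`, evenly tiling
    orientation space -- one common basis choice, assumed here for
    illustration.
    """
    thetas = np.atleast_1d(np.asarray(thetas_deg, float))
    centers = np.linspace(0.0, 180.0, n_channels, endpoint=False)
    # Wrap angular differences into [-90, 90): orientation has a 180-deg period.
    d = (thetas[:, None] - centers[None, :] + 90.0) % 180.0 - 90.0
    return np.maximum(np.cos(np.deg2rad(d)), 0.0) ** power

def iem_reconstruct(B_train, theta_train, B_test, n_channels=9):
    """Train the encoding model on B_train, then invert it on B_test.

    B_* : (n_trials, n_measurements) response matrices (e.g., fMRI voxel
          betas or EEG electrode amplitudes); theta_train : trial
          orientations in degrees. Returns (n_test_trials, 180)
          reconstructions over orientation space.
    """
    C_train = channel_basis(theta_train, n_channels)   # trials x channels
    W = np.linalg.pinv(C_train) @ B_train              # channels x measurements
    C_test = B_test @ np.linalg.pinv(W)                # estimated channel responses
    # Project estimated channel responses back onto orientation space.
    return C_test @ channel_basis(np.arange(180), n_channels).T

def circular_bias(recon, theta_true):
    """Signed error (deg) between each reconstruction's circular mean and truth.

    A category bias would appear as a systematic shift of this error
    toward the center of the stimulus's category.
    """
    angles = np.deg2rad(np.arange(180) * 2.0)          # double-angle for orientation
    z = (recon * np.exp(1j * angles)).sum(axis=1)
    est = (np.rad2deg(np.angle(z)) / 2.0) % 180.0
    return (est - theta_true + 90.0) % 180.0 - 90.0
```

Under these assumptions, grouping the output of circular_bias by category membership and by angular distance from the boundary would expose the two signatures reported above: a shift toward the category center, and larger shifts for orientations nearer the boundary.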