Abstract
Models of visual cognition assume that the brain predicts specific features of the incoming input to facilitate its subsequent categorization. However, the prediction mechanisms have remained elusive, in part because top-down predictions of specific features have yet to be traced in neural signals. In our experiment, participants (N=10, inference performed within individual participants) were cued on each trial to one of two possible perceptions of Dali's ambiguous painting Slave Market, i.e. "Nuns" vs. "Voltaire." Specifically, each trial (T=3,150 per participant) comprised a Prediction stage with three auditory cues. Two cues were associated with different distributions of Nuns vs. Voltaire features; a third, control cue had no predictive value. Next, in the Categorization stage, a stimulus sampled from the cued distribution (uniform for the uninformative cue) was presented. We concurrently measured each participant's MEG, later reconstructed onto 8,196 sources. We trained separate classifiers to learn the multivariate representations of Nuns and Voltaire features (trained on uninformative-cue trials), as well as of the cue sounds themselves (trained on localiser trials with no prediction). We then applied these classifiers to cross-decode features and cues during the trials on which participants made a prediction about the upcoming stimulus. Decoding analyses of the Prediction stage revealed (1) that auditory cue representations do not propagate beyond the temporal lobe and (2) that predicted "Nuns" and "Voltaire" features propagate top-down in the ventral pathway to right or left occipital cortex, with increasing contra-lateralization to the expected location of the incoming feature just before stimulus onset. At the Categorization stage, when the stimulus was shown, cued trials sped up the bottom-up occipito-ventral representations of "Nuns" or "Voltaire" features that tune the perception. Our results therefore trace top-down predictions of specific visual features that speed up their bottom-up processing for visual categorization.
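To illustrate the cross-decoding logic summarized above, the sketch below shows a minimal time-resolved analysis in which a linear classifier is trained on source-reconstructed MEG data from one set of trials (e.g. feature labels on uninformative-cue trials) and tested on another (cued prediction-stage trials). This is a hypothetical sketch, not the authors' analysis code: the array names, data shapes, and the choice of a scikit-learn logistic-regression classifier are assumptions for illustration only.

```python
# Hedged sketch of time-resolved cross-decoding: train a classifier on one
# trial set, apply it to another, and score decoding per time point.
# All names, shapes, and the classifier choice are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score


def cross_decode(X_train, y_train, X_test, y_test):
    """Cross-decode binary labels at each time point.

    X_train : (n_train_trials, n_sources, n_times) training data
    y_train : (n_train_trials,) binary labels (e.g. Nuns=0, Voltaire=1)
    X_test  : (n_test_trials, n_sources, n_times) held-out data
    y_test  : (n_test_trials,) binary labels
    Returns an AUC time course on the test data.
    """
    n_times = X_train.shape[2]
    auc = np.zeros(n_times)
    for t in range(n_times):
        clf = make_pipeline(StandardScaler(),
                            LogisticRegression(max_iter=1000))
        clf.fit(X_train[:, :, t], y_train)            # train at time t
        scores = clf.decision_function(X_test[:, :, t])
        auc[t] = roc_auc_score(y_test, scores)        # cross-decoded performance
    return auc


# Illustrative call with random data; real data would have 8,196 sources
# rather than the 100 used here to keep the example lightweight.
rng = np.random.default_rng(0)
X_uninformative = rng.standard_normal((200, 100, 50))
y_features = rng.integers(0, 2, 200)
X_cued = rng.standard_normal((200, 100, 50))
y_cued = rng.integers(0, 2, 200)
auc_timecourse = cross_decode(X_uninformative, y_features, X_cued, y_cued)
```

In practice, the same routine would be run separately for the feature classifiers (trained on uninformative-cue trials) and the cue-sound classifiers (trained on localiser trials), with permutation-based statistics within each participant; those details are omitted here.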