August 2023, Volume 23, Issue 9 | Open Access
Vision Sciences Society Annual Meeting Abstract
Top-down predictions of specific visual features in the brain speed up their bottom-up categorizations for perceptual decision
Author Affiliations & Notes
  • Yuening Yan
    University of Glasgow
  • Robin A.A. Ince
    University of Glasgow
  • Jiayu Zhan
    University of Glasgow
  • Oliver Garrod
    University of Glasgow
  • Philippe Schyns
    University of Glasgow
  • Footnotes
    Acknowledgements  P.G.S. received support from the Wellcome Trust (Senior Investigator Award, UK; 107802) and the Multidisciplinary University Research Initiative/Engineering and Physical Sciences Research Council (USA, UK; 172046-01). R.A.A.I. was supported by the Wellcome Trust [214120/Z/18/Z].
Journal of Vision August 2023, Vol.23, 4869. doi:https://doi.org/10.1167/jov.23.9.4869
Abstract

Models of visual cognition assume that the brain predicts specific features of the incoming input to facilitate its subsequent categorization. However, the prediction mechanisms have remained elusive, in part because top-down predictions of specific features have not yet been traced in neural signals. In our experiment, participants (N = 10, inference performed within individual participants) were cued on each trial to one of two possible perceptions of Dali's ambiguous painting Slave Market, i.e. "Nuns" vs. "Voltaire." Specifically, each trial (T = 3,150 per participant) began with a Prediction stage involving one of three auditory cues: two cues were associated with different distributions of Nuns vs. Voltaire features, and a third, control cue had no predictive value. Next, in the Categorization stage, a stimulus sampled from the cued distribution (uniform for the uninformative cue) was presented. We concurrently measured each participant's MEG, later reconstructed onto 8,196 sources. We trained separate classifiers to learn the multivariate representations of Nuns and Voltaire features (trained on uninformative-cue trials), as well as of the cue sounds themselves (trained on localiser trials with no prediction). We then applied these classifiers to cross-decode features and cues during the trials on which participants made a prediction about the upcoming stimulus. Decoding analyses of the Prediction stage revealed (1) that auditory cues did not propagate beyond the temporal lobe and (2) that predicted "Nuns" and "Voltaire" features propagated top-down in the ventral pathway to right or left occipital cortex, with increasing contra-laterality to the expected location of the incoming feature just before stimulus onset. At the Categorization stage, once the stimulus was shown, predictive cues sped up the bottom-up occipito-ventral representations of the "Nuns" or "Voltaire" features that tuned the perception. Our results therefore trace top-down predictions of specific visual features that speed up their bottom-up processing for visual categorization.
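
To make the cross-decoding logic concrete, the sketch below illustrates the general approach in Python with scikit-learn: train a multivariate classifier on trials free of predictions, then apply it to other trials to test when the predicted feature becomes decodable. This is a minimal illustration under stated assumptions, not the authors' analysis pipeline; the data shapes, the choice of a logistic-regression classifier, and all variable and function names are hypothetical.

# Minimal sketch of the cross-decoding idea described in the abstract.
# Assumptions (not from the source): MEG source activity at a single time
# point is stored as a (n_trials, n_sources) array, and a simple linear
# classifier stands in for whatever multivariate model the authors used.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_feature_classifier(X_uninformative, y_feature):
    # Learn a multivariate "Nuns" vs. "Voltaire" feature representation from
    # uninformative-cue trials, where no prediction can contaminate the signal.
    # X_uninformative: (n_trials, n_sources); y_feature: 0 = Nuns, 1 = Voltaire.
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_uninformative, y_feature)
    return clf

def cross_decode(clf, X_other_trials):
    # Apply the trained classifier to a different set of trials (e.g., the
    # Prediction stage, before stimulus onset) to test whether the predicted
    # feature is already represented top-down in the MEG sources.
    return clf.predict_proba(X_other_trials)[:, 1]

# Hypothetical usage with simulated data (8,196 sources, as in the abstract).
rng = np.random.default_rng(0)
n_trials, n_sources = 200, 8196
X_train = rng.standard_normal((n_trials, n_sources))
y_train = rng.integers(0, 2, size=n_trials)
feature_clf = train_feature_classifier(X_train, y_train)

X_prediction_stage = rng.standard_normal((n_trials, n_sources))
p_voltaire = cross_decode(feature_clf, X_prediction_stage)  # per-trial evidence

In practice this decoding would be repeated at each time point (and separately for the cue-sound classifiers trained on localiser trials) to trace when and where predicted feature information emerges before and after stimulus onset.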
