December 2022
Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Top-down predictions of visual features dynamically reverse their bottom-up processing in the occipito-ventral pathway to facilitate stimulus disambiguation and behavior
Author Affiliations & Notes
  • Yuening Yan
    University of Glasgow
  • Jiayu Zhan
    University of Glasgow
  • Robin A.A. Ince
    University of Glasgow
  • Philippe G. Schyns
    University of Glasgow
  • Footnotes
    Acknowledgements  Philippe G. Schyns received support from the Wellcome Trust (Senior Investigator Award, UK; 107802) and the Multidisciplinary University Research Initiative/Engineering and Physical Sciences Research Council (USA, UK; 172046-01). Robin A.A. Ince was supported by the Wellcome Trust (214120/Z/18/Z).
Journal of Vision December 2022, Vol.22, 3218. doi:https://doi.org/10.1167/jov.22.14.3218
Abstract

The prevalent conception of vision-for-categorization suggests an interplay of two dynamic flows of information within the occipito-ventral pathway. The bottom-up flow progressively reduces the high-dimensional input into a lower-dimensional representation that is compared with memory to produce categorization behavior. The top-down flow predicts category information (i.e., features) from memory that propagates down the same hierarchy to facilitate input processing and behavior. However, the neural mechanisms that support such dynamic feature propagation up and down the visual hierarchy, and how they facilitate behavior, remain unclear. Here, we studied them using a prediction experiment in which, on each trial, participants (N = 11) were cued to the spatial location (left vs. right) and spatial frequency (SF; low, LSF, vs. high, HSF) contents of an upcoming Gabor patch, after which they categorized the SF of the shown Gabor patch. We also ran a pre-experiment localizer task to model the bottom-up representation of the Gabor patches. Using concurrent MEG recordings of each participant’s source-reconstructed neural activity on 12,773 voxels, we compared the top-down flow of representation of the predicted Gabor contents (i.e., left vs. right; LSF vs. HSF) to their bottom-up flow. We show that (1) top-down prediction improves categorization speed in all participants; (2) the top-down flow of prediction reverses the bottom-up representation of the Gabor stimuli, proceeding from deep right fusiform gyrus sources (~90–160 ms post-SF cue) down to occipital cortex sources contralateral to the expected Gabor location (~160–250 ms post-SF cue); and (3) when the stimulus is eventually shown, the predicted Gabors are better represented at occipital sources (~250 ms post-Gabor) and pre-motor cortex (~400 ms post-Gabor), leading to faster categorizations.
Our results therefore trace the dynamic top-down flow of predicted visual contents, which chronologically and hierarchically reverses bottom-up processing and further facilitates visual representations in early visual cortex and subsequent categorization behavior.
