December 2022 | Volume 22, Issue 14 | Open Access
Vision Sciences Society Annual Meeting Abstract
Contextual learning and inference in perceptual learning
Author Affiliations & Notes
  • Gabor Lengyel
    Central European University
    University of Rochester
  • Máté Lengyel
    Central European University
    University of Cambridge
  • József Fiser
    Central European University
  • Footnotes
Acknowledgements  This work was supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 726090 to M.L.) and the Wellcome Trust (Investigator Award 212262/Z/18/Z to M.L.).
Journal of Vision, December 2022, Vol. 22, 3999. doi: https://doi.org/10.1167/jov.22.14.3999
Gabor Lengyel, Máté Lengyel, József Fiser; Contextual learning and inference in perceptual learning. Journal of Vision 2022;22(14):3999. https://doi.org/10.1167/jov.22.14.3999.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Recent studies have established that perceptual learning (PL) is influenced by strong top-down effects and shows flexible, context-dependent generalization. However, current computational models of PL rely on feedforward architectures and fail to capture these context-dependent and generalization effects parsimoniously in more complex PL tasks. We propose a Bayesian framework that combines sensory bottom-up and experience-based top-down processes in a normative way. Our model uses contextual inference to represent multiple learning contexts, with their corresponding stimuli, simultaneously. It infers the extent to which each context might have contributed to a given trial, and gradually learns the transitions between contexts from experience. In turn, correctly inferring the current context supports the efficient allocation of neural resources to encoding the stimuli expected to occur in that context, thus maximizing discrimination performance and driving PL. In roving paradigms, in which multiple reference stimuli are intermixed across trials, our model explains a broad range of previously described learning effects: (a) disrupted PL when the references are interleaved trial-by-trial, (b) intact PL when the references are separated into blocks, and (c) intact PL when the references are interleaved across trials but follow a fixed temporal order. Our model also makes new predictions about learning and generalization in PL. First, the amount of PL should depend on the extent to which the temporal structure is learnt, predicting more PL in roving paradigms with more predictable temporal structure between reference stimuli. Second, rather than depending solely on the low-level perceptual similarity of the stimuli, generalization in PL should also depend on the extent to which higher-order structural knowledge about contexts (e.g., their transition probabilities) generalizes across tasks. These results demonstrate that higher-level structure learning is an integral part of any perceptual learning process, and that a joint treatment of high- and low-level information about stimuli is required to capture learning in vision.
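The abstract does not give the model's equations, but the computation it describes (a per-trial posterior over discrete contexts, online learning of context-to-context transitions, and belief-weighted allocation of encoding resources) maps naturally onto HMM-style forward filtering. The Python sketch below illustrates that mapping under illustrative assumptions only: one context per reference stimulus, a Gaussian observation model around each reference, Dirichlet-style pseudo-counts for the transition probabilities, and encoding precision allocated in proportion to the context belief. All class, method, and parameter names are hypothetical and not the authors' implementation.

```python
import numpy as np

class ContextualInferenceModel:
    """Minimal sketch of contextual inference for a roving PL paradigm.

    Assumptions (not from the abstract): one discrete context per
    reference stimulus, Gaussian observation noise, and soft Dirichlet
    pseudo-counts for learning the context transition matrix online.
    """

    def __init__(self, reference_stimuli, obs_noise=1.0, prior_count=1.0):
        self.refs = np.asarray(reference_stimuli, dtype=float)
        self.n_contexts = len(self.refs)
        self.obs_noise = obs_noise
        # Pseudo-counts for context transitions, updated from experience.
        self.trans_counts = np.full((self.n_contexts, self.n_contexts), prior_count)
        # Posterior belief over which context the current trial belongs to.
        self.belief = np.full(self.n_contexts, 1.0 / self.n_contexts)

    def step(self, stimulus):
        # Predict: propagate the context belief through the learned transitions.
        trans = self.trans_counts / self.trans_counts.sum(axis=1, keepdims=True)
        predicted = self.belief @ trans
        # Update: weight each context by how well its reference explains the stimulus.
        likelihood = np.exp(-0.5 * ((stimulus - self.refs) / self.obs_noise) ** 2)
        posterior = predicted * likelihood
        posterior /= posterior.sum()
        # Learn: credit the expected context-to-context move with soft counts.
        self.trans_counts += np.outer(self.belief, posterior)
        self.belief = posterior
        return posterior

    def encoding_precision(self, total_resources=1.0):
        # Allocate encoding resources in proportion to the context belief:
        # a confident belief concentrates resources on the expected
        # reference, improving discrimination (the proposed driver of PL).
        return total_resources * self.belief

# Example: two roving references presented in a fixed alternating order.
rng = np.random.default_rng(0)
model = ContextualInferenceModel(reference_stimuli=[30.0, 60.0], obs_noise=2.0)
for t in range(200):
    stimulus = (30.0, 60.0)[t % 2] + rng.normal(0.0, 2.0)
    model.step(stimulus)
# The learned transition matrix approaches [[0, 1], [1, 0]].
print(model.trans_counts / model.trans_counts.sum(axis=1, keepdims=True))
```

Under this sketch, a fixed alternating order of references yields a sharply peaked context belief after a few trials, whereas random trial-by-trial roving keeps both the belief and the allocated precision diffuse, mirroring the intact versus disrupted PL pattern described above.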
