Vision Sciences Society Annual Meeting Abstract | September 2015
A reward-driven reweighting model of perceptual learning
Author Affiliations
  • Grigorios Sotiropoulos
    School of Informatics, University of Edinburgh
  • Aaron Seitz
    Department of Psychology, University of California, Riverside
  • Peggy Seriès
    School of Informatics, University of Edinburgh
Journal of Vision September 2015, Vol. 15, 1143. https://doi.org/10.1167/15.12.1143
Abstract

Perceptual learning (PL), the improvement of perceptual skills through practice, is affected by external guidance to a degree that varies with the task. In many perceptual tasks, trial-by-trial feedback enhances learning, and in some it is necessary to induce learning at all. Yet PL can also proceed just as well with block feedback or no feedback (Herzog and Fahle, 1997), or even without explicitly practicing a task (Seitz and Watanabe, 2003). A model of PL based on reweighting (“readout”) of representations in early visual cortex (Liu et al., 2014) was recently proposed to explain psychophysical findings on the effect of different types of feedback (trial-by-trial, block, manipulated, uncorrelated, and no feedback) on performance in a Vernier acuity task. This augmented Hebbian reweighting model (AHRM) accounted for the effectiveness of trial-by-trial and block feedback, and the ineffectiveness of the other types, by means of a top-down bias-control system that balances response frequencies in the modelled binary decision task. An alternative mechanism that could explain the same data, however, is a variable learning rate that is adjusted by the difference between expected performance and the performance indicated by feedback. Here, we extended the AHRM with this mechanism, drawing on established reinforcement-learning concepts, and found that the model accounts for the aforementioned psychophysical data in a natural way and under a wide range of parameter values. Our results accord with research showing a role for reinforcement in perceptual learning and with other models that explore the role of the performance gradient in PL. Furthermore, the extended model makes testable predictions that may help optimize future PL approaches.
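The variable-learning-rate mechanism can be illustrated compactly. The following Python sketch is hypothetical and is not the authors' implementation: it shows a toy reweighting readout for a binary Vernier-like task in which a weight update is scaled by the mismatch between trial feedback and a running estimate of expected performance. The channel count, tuning function, simplified supervised-Hebbian update, and all rate constants are assumptions for illustration only.

```python
import numpy as np

# Minimal sketch of a reweighting model with a reward-modulated learning rate.
# All specifics below are illustrative assumptions, not the published AHRM.

rng = np.random.default_rng(0)

n_channels = 20                              # tuned input channels (assumed)
prefs = np.linspace(-1.0, 1.0, n_channels)   # preferred offsets of the channels
w = rng.normal(0.0, 0.01, n_channels)        # readout (decision) weights
eta_base = 0.05                              # base learning rate (assumed)
p_expected = 0.5                             # running estimate of expected performance
alpha = 0.05                                 # update rate for the performance estimate

def channel_responses(offset):
    """Toy population response to a Vernier-like offset, with additive noise."""
    return np.exp(-(prefs - offset) ** 2 / 0.1) + rng.normal(0.0, 0.1, n_channels)

for trial in range(2000):
    offset = rng.choice([-0.2, 0.2])         # stimulus offset: left or right
    r = channel_responses(offset)
    choice = 1.0 if w @ r > 0 else -1.0      # binary decision from the weighted readout
    correct = 1.0 if choice == np.sign(offset) else 0.0

    # Mismatch between feedback and expected performance (a prediction error).
    delta = correct - p_expected
    p_expected += alpha * delta              # track expected performance over trials

    # Core idea: the learning rate grows with the size of the mismatch,
    # so surprising feedback drives larger weight changes.
    eta = eta_base * abs(delta)

    # Simplified supervised update toward the correct response,
    # followed by weight normalization to keep the readout bounded.
    w += eta * np.sign(offset) * r
    w /= np.linalg.norm(w)
```

On this reading, feedback that matches expectation yields a small mismatch and slow learning, which is one way such a mechanism could capture the ineffectiveness of uncorrelated or manipulated feedback described above.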

Meeting abstract presented at VSS 2015
