June 2004
Volume 4, Issue 8
Vision Sciences Society Annual Meeting Abstract  |   August 2004
Comparable perceptual learning with and without feedback in non-stationary context: Data and model
Author Affiliations
  • Alexander A. Petrov
    University of California, Irvine, USA
  • Barbara A. Dosher
    University of California, Irvine, USA
  • Zhong-Lin Lu
    University of Southern California, USA
Journal of Vision August 2004, Vol.4, 306. doi:https://doi.org/10.1167/4.8.306
Learning was evaluated for orientation discrimination of peripheral Gabor targets (±10 deg) in two filtered-noise “contexts” with predominant orientations at either ±15 deg. The training schedule alternated two-day blocks of each context, and three target contrast levels were tested. Eighteen observers received no feedback, yet improved both discriminability and speed within and across blocks. The initial and asymptotic d′ levels and the learning dynamics were comparable to those obtained for observers with feedback (1). For both groups, performance dropped at each context switch, with an approximately constant cost (about 0.3 d′) over 5 switches (10,800 trials). In this situation, self-generated feedback seems sufficient for learning. A self-supervised model can account for these results via incremental channel reweighting with and without explicit feedback. Visual stimuli are first processed by standard orientation- and frequency-tuned units with contrast gain control via divisive normalization. Learning occurs only in the “read-out” connections to decision units; the stimulus representation never changes. An incremental Hebbian rule tracks the external feedback when available, or else reinforces the model's own response. An a priori bias to equalize the response frequencies stabilizes the model across switches. Because accuracy is above 50%, self-generated feedback drives the weights in the right direction on average, though less efficiently than external feedback. Weights of task-correlated units gain strength while weights on irrelevant frequencies and orientations are reduced, producing a gradual learning curve. When the context shifts abruptly, the system lags behind, working with suboptimal weights until it readapts; this creates switch costs of approximately equal magnitude across successive context changes. Hebbian channel reweighting with no change in early visual representations can thus explain perceptual learning.

1. Petrov, Dosher & Lu, JOV 2003.
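The reweighting scheme described above can be sketched as a toy simulation. Everything concrete here — the eight-channel Gaussian front end, the tanh decision unit, the learning rates, the weak partly-correct initial weights, and the `simulate` function itself — is an illustrative assumption, not the published model's implementation. The sketch captures only the abstract's core idea: Hebbian updates of the read-out weights, driven by external feedback when available and by the model's own response otherwise, plus an adaptive bias that equalizes response frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical front end: 8 orientation-tuned channels (tuning centers and
# widths are illustrative choices, not the published model's parameters).
centers = np.linspace(-40.0, 40.0, 8)
mu_pos = np.exp(-0.5 * ((centers - 10.0) / 15.0) ** 2)  # mean response, +10 deg target
mu_neg = np.exp(-0.5 * ((centers + 10.0) / 15.0) ** 2)  # mean response, -10 deg target

def simulate(n_trials=3000, feedback=False, lr=0.002, bias_lr=0.01, noise=0.6):
    """Incremental Hebbian reweighting of the read-out connections.

    When feedback is False, the model's own response stands in for the
    teaching signal (self-supervised learning), as in the abstract.
    """
    d = mu_pos - mu_neg
    w = 0.05 * d + 0.05          # weak, partly correct initial read-out weights
    b = 0.0                      # adaptive bias equalizing response frequencies
    correct = []
    for _ in range(n_trials):
        label = rng.choice([-1, 1])                    # -10 or +10 deg target
        a = (mu_pos if label == 1 else mu_neg) + rng.normal(0.0, noise, centers.size)
        o = np.tanh(w @ a - b)                         # graded decision variable
        resp = 1 if o > 0 else -1
        teacher = label if feedback else resp          # self-generated if no feedback
        w = w + lr * a * (teacher - o)                 # Hebbian-style weight update
        b = b + bias_lr * resp                         # damp response-frequency bias
        correct.append(resp == label)
    return np.array(correct), w
```

Because the initial weights already put accuracy above chance, reinforcing the model's own response pushes the weights toward the discriminating direction on average, so accuracy in late trials should exceed accuracy in early trials even with `feedback=False` — the qualitative pattern the abstract reports.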

Petrov, A. A., Dosher, B. A., & Lu, Z.-L. (2004). Comparable perceptual learning with and without feedback in non-stationary context: Data and model [Abstract]. Journal of Vision, 4(8):306, 306a, http://journalofvision.org/4/8/306/, doi:10.1167/4.8.306.
 Supported by NIMH & NSF.
