Alexander A. Petrov, Barbara A. Dosher, Zhong-Lin Lu; Comparable perceptual learning with and without feedback in non-stationary context: Data and model. Journal of Vision 2004;4(8):306. doi: https://doi.org/10.1167/4.8.306.
Learning was evaluated for orientation discrimination of peripheral Gabor targets (±10 deg) in two filtered-noise "contexts" with predominant orientations at either +15 or −15 deg. The training schedule alternated two-day blocks of each context, and three target contrast levels were tested. Eighteen observers received no feedback, yet improved both discriminability and speed within and across blocks. Their initial and asymptotic d′ levels and learning dynamics were comparable to those obtained for observers with feedback (1). For both groups, performance dropped at each context switch, with an approximately constant cost (about 0.3 d′) over 5 switches (10,800 trials). In this situation, self-generated feedback appears sufficient for learning.

A self-supervised model can account for these results via incremental channel reweighting with and without explicit feedback. Visual stimuli are first processed by standard orientation- and frequency-tuned units with contrast gain control via divisive normalization. Learning occurs only in the "read-out" connections to decision units; the stimulus representation never changes. An incremental Hebbian rule tracks the external feedback when available, or else reinforces the model's own response. An a priori bias to equalize the response frequencies stabilizes the model across switches. Because accuracy is above 50%, self-generated feedback drives the weights in the right direction on average, though less efficiently than external feedback. Weights of task-correlated units gain strength while weights on irrelevant frequencies and orientations are reduced, producing a gradual learning curve. When the context shifts abruptly, the system lags behind, working with suboptimal weights until it readapts; this creates switch costs of approximately equal magnitude across successive context changes. Hebbian channel reweighting with no change of early visual representations can explain perceptual learning.

1. Petrov, Dosher & Lu, JOV 2003.
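The self-supervised reweighting idea can be illustrated with a minimal numerical sketch. Everything here is an assumption for illustration: the channel count, noise level, learning rate, and the fixed "tuning" vector standing in for orientation- and frequency-tuned units are all hypothetical, and the divisive-normalization stage and the response-frequency bias described in the abstract are omitted for brevity. The sketch keeps only the core mechanism: a Hebbian update of read-out weights driven by the true label when external feedback is available, or by the model's own response otherwise.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 20       # hypothetical number of orientation/frequency-tuned channels
LR = 0.05    # hypothetical learning rate
SIGMA = 3.0  # hypothetical channel noise (std)

# Fixed "tuning" vector: how strongly each channel correlates with the task.
tuning = rng.normal(0.0, 1.0, N)

def run(w, n_trials, external_feedback):
    """Incremental Hebbian reweighting of the read-out weights w.

    The teacher signal is the true label when external feedback is given,
    and the model's own response otherwise (self-generated feedback).
    Only the read-out weights change; the channel representation does not.
    """
    w = w.copy()
    n_correct = 0
    for _ in range(n_trials):
        label = rng.choice([-1.0, 1.0])
        # Noisy channel activations for this stimulus class.
        a = tuning * label + rng.normal(0.0, SIGMA, N)
        decision = 1.0 if a @ w >= 0.0 else -1.0
        n_correct += decision == label
        teacher = label if external_feedback else decision
        w += LR * teacher * a  # Hebbian update of read-out weights only
    return w, n_correct / n_trials

# Start from weakly informative weights: as in the abstract, self-generated
# feedback only helps because accuracy is above 50% to begin with.
w0 = 0.3 * tuning
w_fb, acc_fb = run(w0, 2000, external_feedback=True)
w_self, acc_self = run(w0, 2000, external_feedback=False)
```

With above-chance starting accuracy, both regimes drift the weights toward the task-correlated tuning direction on average; the self-supervised updates are simply noisier, since error trials push the weights the wrong way.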