Abstract
Growing evidence suggests that selective reweighting of the read-out connections from sensory representations plays a major role in perceptual learning. Here we instantiate this idea in a computational model that takes grayscale images as input and learns on a trial-by-trial basis. The model builds on the multi-channel perceptual template model (PTM; Dosher & Lu, 1998, PNAS) and extends it with a biologically plausible learning rule. Stimuli are processed by standard orientation- and frequency-tuned representational units whose outputs are divisively normalized. Learning occurs only in the read-out connections to a decision unit; the stimulus representations themselves never change. An incremental Hebbian rule tracks the task-dependent predictive value of each unit, thereby improving the signal-to-noise ratio of their weighted combination. Each abrupt change in the environmental statistics induces a switch cost in the learning curves as the system temporarily operates with suboptimal weights. Under these conditions, self-generated feedback appears sufficient to drive learning. The model accounts for a complex pattern of context-induced switch costs in a non-stationary training environment.
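The reweighting mechanism can be sketched as follows. This is a toy illustration only, not the published model: the number of units, learning rate, normalization constant, soft weight bounds, and the `encode` stand-in for the orientation- and frequency-tuned front end are all assumed for the demo.

```python
import numpy as np

rng = np.random.default_rng(7)

N_UNITS = 8    # assumed number of representational units
ETA = 0.02     # assumed learning rate
W_MAX = 1.0    # assumed soft weight bound

def divisive_normalize(a, k=0.1):
    """Divisive normalization of nonnegative activations (assumed constant k)."""
    return a / (k + a.sum())

def encode(category):
    """Toy stand-in for the sensory representation: units 0-1 prefer
    category +1, units 2-3 prefer category -1, units 4-7 carry only noise."""
    a = np.abs(rng.normal(0.2, 0.1, N_UNITS))
    if category > 0:
        a[0:2] += 0.6
    else:
        a[2:4] += 0.6
    return divisive_normalize(a)

def run_trial(w, category, feedback=True):
    """One trial: decide from the current read-out weights, then apply an
    incremental Hebbian update with soft weight bounds."""
    a = encode(category)
    decision = w @ a
    # Post-synaptic teaching signal: external feedback when available,
    # otherwise the model's own (self-generated) decision.
    post = float(category) if feedback else (np.sign(decision) or 1.0)
    dw = ETA * a * post
    dw = np.where(dw > 0, dw * (W_MAX - w), dw * (w + W_MAX))  # soft bounds
    return w + dw, np.sign(decision) == np.sign(category)

w = np.zeros(N_UNITS)
correct = []
for _ in range(400):
    category = rng.choice([-1, 1])
    w, ok = run_trial(w, category)
    correct.append(ok)

print(f"early accuracy: {np.mean(correct[:100]):.2f}, "
      f"late accuracy: {np.mean(correct[-100:]):.2f}")
```

Because the representation is fixed, all improvement comes from the read-out weights: task-predictive units accumulate large (positive or negative) weights, uninformative units hover near zero, and accuracy rises over trials. Changing the encoding mid-run would leave the system with temporarily suboptimal weights, producing a switch cost.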
A recent study (Petrov & Hayes, under review) found a strongly asymmetric pattern of transfer of learning between first- and second-order motion. Second-order training transferred fully to a first-order test, whereas first-order training did not transfer significantly to a second-order test. This strong asymmetry challenges the simple reweighting model but is compatible with an augmented version in which the Fourier and non-Fourier processing channels are integrated by taking the maximum of the carrier-specific signals within each direction of motion.
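The proposed max-rule integration can be sketched as follows; the channel labels, number of directions, and response values are illustrative, not fitted quantities.

```python
import numpy as np

def integrate_channels(fourier, non_fourier):
    """Max-rule integration: for each direction of motion, the integrated
    signal is the stronger of the two carrier-specific responses."""
    return np.maximum(fourier, non_fourier)

# Toy responses for 4 motion directions (illustrative values).
fourier     = np.array([0.9, 0.1, 0.3, 0.2])   # first-order (Fourier) channel
non_fourier = np.array([0.2, 0.8, 0.1, 0.1])   # second-order channel
integrated = integrate_channels(fourier, non_fourier)
print(integrated)   # → [0.9 0.8 0.3 0.2]
```

On this scheme, the max passes through whichever carrier-specific signal is stronger within a direction, so read-out weights learned on the integrated units can be driven by stimuli of either carrier type.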