Abstract
Backus and colleagues (Haijiang et al., PNAS 2006) demonstrated that the visual appearance of a stimulus could be made contingent, trial by trial, on the arbitrarily chosen value of an unrelated signal (a new cue) added to the display. To demonstrate the learning, the new cue was paired with long-trusted cues that perceptually disambiguated an otherwise ambiguous rotating Necker cube stimulus. Recruitment of stimulus position as a cue was particularly robust. This simple form of associative perceptual learning occurred rapidly and was long-lasting, but many questions about it remained unanswered. Subsequent experiments using this model system have shown learning at a variety of time scales, generalization to similar cues, a strong reduction in learning rates when observers are first exposed to the experimental stimuli, and strong modulation of learning by attention to spatial location. Different new cues (stimulus translation, hand motions, sound cues, etc.) were learned to varying extents. Some of these effects can be explained by changes to known receptive field properties of neurons in area MT, or by changes in the readout of MT neurons, while others cannot. The importance of cue recruitment to normal vision remains uncertain.