Abstract
Performance gains obtained through perceptual learning are, surprisingly, specific to the trained condition. Recent research shows that specificity increases with the amount of training and with task precision (Jeter et al., 2009, 2010), whereas learning generalizes across tasks and features trained in temporal proximity (Yu and colleagues). Such results are expected if perceptual learning involves fitting a statistical model to the task at hand, with variations in brain anatomy (Mollon & Danilova, 1996) or in neuronal response limiting the reliability of the fitted data. When training is carried out with a limited set of stimuli (e.g. a single contrast), overfitting may gradually arise, predicting failures when new conditions are presented. In the contrast domain, learning is specific to the trained contrast and is much reduced when different contrasts are mixed during training (Adini et al., 2004; Yu et al., 2004), consistent with an account of learning as overfitting. Overfitting may arise when learning involves the readout of sensory neurons (Lu & Dosher), reweighting their responses according to the peculiarities of the trained condition. To test the generality of this theoretical approach, we re-examined the specificity of learning to retinal location. Using the texture discrimination task (Censor & Sagi, 2009), we had observers practice a target positioned either at a single fixed location (the traditional method) or at one of two locations. Against the overfitting prediction of reduced learning with mixed training, we found equal learning in the two conditions. Most surprisingly, however, and in agreement with overfitting, while the 1-location training was location specific, as expected, the 2-location training transferred completely to locations that were neither trained nor previously tested. Theoretical implications will be presented.
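To make the overfitting account concrete, the sketch below (not part of the study; the unit count, tuning width, gain spread, noise level, and learning rate are all illustrative assumptions) simulates a Lu & Dosher style delta-rule reweighting of a linear readout over orientation-tuned units. Each training condition, standing in for a retinal location or contrast, carries its own idiosyncratic gain pattern, a stand-in for local variations in anatomy or neuronal response:

```python
# Minimal sketch: a reweighted readout that overfits a single training
# condition. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_UNITS = 60
PREFS = np.linspace(-30.0, 30.0, N_UNITS)   # preferred orientations (deg)
SIGMA = 10.0                                # tuning width (deg)
TILT = 2.0                                  # +/- tilt to discriminate (deg)
NOISE = 0.5                                 # response noise (sd)

def condition_gains(n_conditions):
    # One fixed idiosyncratic (positive) gain per unit and condition:
    # the "peculiarity" a condition-specific readout can exploit.
    return np.exp(0.8 * rng.standard_normal((n_conditions, N_UNITS)))

def responses(theta, gains):
    # Noisy population response to orientation theta under one condition.
    r = gains * np.exp(-0.5 * ((theta - PREFS) / SIGMA) ** 2)
    return r + NOISE * rng.standard_normal(N_UNITS)

def train_readout(train_conditions, gains, n_trials=4000, lr=0.01):
    # Delta-rule reweighting of a linear readout (cf. Lu & Dosher).
    w, b = np.zeros(N_UNITS), 0.0
    for _ in range(n_trials):
        c = rng.choice(train_conditions)      # sample a training condition
        label = rng.choice([-1.0, 1.0])       # tilt left vs. tilt right
        r = responses(label * TILT, gains[c])
        err = label - np.tanh(w @ r + b)
        w += lr * err * r
        b += lr * err
    return w, b

def accuracy(w, b, condition, gains, n_trials=2000):
    correct = 0
    for _ in range(n_trials):
        label = rng.choice([-1.0, 1.0])
        r = responses(label * TILT, gains[condition])
        correct += int(np.sign(w @ r + b) == label)
    return correct / n_trials

gains = condition_gains(3)            # conditions 0 and 1 trained; 2 is novel
w1, b1 = train_readout([0], gains)    # "1-location" training
w2, b2 = train_readout([0, 1], gains) # mixed "2-location" training

for name, (w, b) in [("1-cond", (w1, b1)), ("2-cond", (w2, b2))]:
    print(f"{name} readout: trained {accuracy(w, b, 0, gains):.2f}, "
          f"novel {accuracy(w, b, 2, gains):.2f}")
```

With these toy numbers, the readout trained on a single condition performs worse at the novel condition than at the trained one (the specificity signature), while mixed training averages out condition-specific idiosyncrasies in the learned weights and so transfers somewhat better. Only this qualitative pattern, not the particular accuracies, is meaningful.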