Where does perceptual learning occur? At which level of visual processing does learning operate? The last decade has seen a shift from early visual areas towards later processing stages. Recently, Pourtois, Rauss, Vuilleumier, and Schwartz (2008) and Bao, Yang, Rios, He, and Engel (2011) claimed that perceptual learning modulates the C1 component in the EEG, which was taken as evidence that perceptual learning occurs in early visual areas. In this issue, Zhang, Li, Song, and Yu (2015) show that the C1 component can be modulated by top-down processing and thus may not be indicative of early visual learning, reopening the debate. In the time-frequency domain, Bays, Visscher, Dantec, and Seitz (2015) show that perceptual learning increases alpha activity in the EEG during the prestimulus period, which may indicate that the task becomes more automatic. Interestingly, prestimulus alpha power gradually increased as perceptual bias decreased (Nikolaev, Gepshtein, & van Leeuwen, 2016), indicating a complex role of alpha band activity in perceptual learning.
Modeling in the last decade has provided powerful architectures in which learning occurs mainly in the mapping from sensory evidence to decision making (Petrov, Dosher, & Lu, 2005). Improvements in performance can be due both to increases in sensitivity and to adjustments of response bias (Herzog, Ewald, Hermens, & Fahle, 2006). Disentangling the two is a computational challenge. Here, Liu, Dosher, and Lu (2015) show that the augmented Hebbian reweighting model can flexibly cope with various feedback conditions, such as trial-by-trial and block feedback, and that it disentangles bias and sensitivity learning through computations located at the decision stage.
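The distinction between sensitivity and bias can be illustrated with a standard signal detection theory calculation (a textbook illustration, not the authors' model): from hit and false-alarm rates one derives a sensitivity measure d' and a criterion c, which can change independently over the course of learning.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute sensitivity (d') and response bias (criterion c)
    from raw response counts, per standard signal detection theory."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts: the hit rate rises while the false-alarm rate
# falls symmetrically, so sensitivity grows while bias stays at zero.
before = sdt_measures(60, 40, 40, 60)   # H = .60, F = .40
after = sdt_measures(80, 20, 20, 80)    # H = .80, F = .20
```

A learner who simply shifts the criterion would raise both hit and false-alarm rates together, leaving d' unchanged; only the joint pattern of the two rates reveals which quantity learning has altered.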
A very general challenge for modeling perceptual learning is presented by Grzeczkowski, Tartaglia, Mast, and Herzog (2015). In 4,160 trials, the very same bisection stimulus was presented without an offset (i.e., the central line was always presented in the middle). Reminiscent of previous findings in the auditory system (Amitay, Irwin, & Moore, 2006), training with such identical stimuli improved the ability to discriminate left versus right offsets. This result contradicts the predictions of all neural network models, which hold that learning occurs only when stimuli vary from trial to trial.
To better understand the mechanisms of perceptual learning, Yashar, Chen, and Carrasco (2015) investigated learning when the target is crowded by flanking elements. Performance improved mainly through learning to ignore the flankers. Hence, target perception can improve in ways other than the fine-tuning of target-related mechanisms.