Barbara Anne Dosher, Zhong-Lin Lu; Threshold power laws of perceptual learning decouple improvements in noisy and noiseless conditions. Journal of Vision 2002;2(7):556. doi: 10.1167/2.7.556.
Previously, we reported improvements in perceptual task performance across a range of external noise contrasts [1–2] and claimed a mixture of approximately equal improvements from stimulus enhancement in low-noise conditions and from external noise exclusion in high-noise conditions. In this paper we describe a detailed analysis of improvements over blocks of training on a peripheral orientation discrimination task that clearly documents performance improvements of different magnitudes in a zero external noise and a high external noise condition. Observers identified a target S or 5 in a rapid letter string at fixation and also discriminated the orientation of a peripheral Gabor patch. Adaptive staircases measured contrast thresholds at 79.3% and 70.7% correct performance. After an initial pre-test of performance in the zero- and high-noise conditions, observers were trained under different schedules of zero-noise and high-noise trials (high then low, or vice versa, etc.). (A) Training in either noise condition improved performance in both noise conditions. (B) Contrast thresholds followed power laws of improvement (i.e., log contrast threshold decreased linearly as a function of log total practice blocks in all conditions) in both zero and high noise, but (C) the slopes of the functions differed sharply (larger threshold reductions in high noise, smaller in low noise). This strong divergence of the power-function slopes in zero and high noise with practice block is consistent with our original claims for decoupled stimulus enhancement and external noise exclusion. A simple linear amplifier model (LAM), in which improvements reflect improved ‘efficiency’ with practice, requires identical slopes in zero and high noise and is falsified. In contrast, an elaborated perceptual template model (PTM) is supported. The results are consistent with perceptual learning through channel re-weighting [1–2].
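The slope comparison at the heart of the argument can be sketched numerically. The fragment below (not the authors' analysis code; all thresholds and slope values are illustrative assumptions) fits the power law log(threshold) = log(c0) − k·log(block) separately to a zero-noise and a high-noise learning curve and compares the fitted exponents k; the LAM predicts equal exponents, whereas unequal fitted exponents are the signature of decoupled mechanisms.

```python
# Illustrative sketch: fit power-law learning curves in two noise
# conditions and compare slopes. Synthetic data only, not the paper's.
import numpy as np

def fit_power_law(blocks, thresholds):
    """Least-squares fit of log threshold vs. log practice block.
    Returns (initial threshold c0, learning exponent k)."""
    slope, intercept = np.polyfit(np.log(blocks), np.log(thresholds), 1)
    return np.exp(intercept), -slope  # k > 0: thresholds decrease

blocks = np.arange(1, 11)  # practice blocks 1..10
# Hypothetical contrast thresholds, with a steeper improvement in
# high noise than in zero noise (exponents chosen for illustration).
high_noise = 0.40 * blocks ** -0.30
zero_noise = 0.10 * blocks ** -0.10

c0_hi, k_hi = fit_power_law(blocks, high_noise)
c0_lo, k_lo = fit_power_law(blocks, zero_noise)
print(f"high noise: k = {k_hi:.2f}; zero noise: k = {k_lo:.2f}")
# A linear amplifier model requires k_hi == k_lo; clearly unequal
# fitted slopes instead favor the perceptual template model.
```

On log-log axes each curve is a straight line, so an ordinary linear fit recovers the exponent directly; with real staircase data one would fit noisy thresholds rather than exact power laws.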