Abstract
The mathematical functions underlying learning have implications both for the empirical understanding of learning phenomena and for the underlying processes of change that give rise to learning. Most previous studies of the functional form of learning have aggregated data across learners, learning events (trials), or both, reducing the precision of parameter estimates and potentially biasing both the parameter estimates and the estimates of error. Recently, visual perceptual learning has been used as a model domain to demonstrate some of the detrimental consequences of such aggregation. However, trial-by-trial, subject-level analyses have yet to be used systematically to compare specific learning functions. Here we report two perceptual learning experiments in which participants completed at least 1200 trials of training, followed by at least 400 trials of generalization, on either an oriented-line oddball-texture-detection task (n = 32) or a dot-motion delayed nonmatch-to-sample task (n = 40). The tests of generalization allowed for a unified analysis of the functional form both of initial learning and of its generalization. To determine the most appropriate functional form of learning, we fit five nonlinear learning functions to participant- and trial-level data, drawn from two families: exponential (3-parameter exponential, 4-parameter “double” exponential, and 4-parameter Weibull) and power (3-parameter power and 4-parameter power). Information criteria were calculated for each functional form and compared to determine the relative evidence supporting each function. Texture-detection learning was best fit by the 3-parameter exponential function in 29 participants; the remaining 3 participants were best fit by either the 3-parameter power function or the Weibull function. Dot-motion learning was best fit by the 4-parameter Weibull function (30 participants) or the 3-parameter exponential function (10 participants).
These results collectively repudiate the “power law of learning” while implicating, respectively, a single mechanism of change during visual perceptual learning in texture detection and dual mechanisms in motion discrimination.
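As a rough illustration of the kind of model comparison described in the abstract (a minimal sketch, not the authors' analysis code), one could fit a 3-parameter exponential and a 3-parameter power function to simulated trial-level learning data and compare them with AIC. The data-generating parameters, noise level, and starting values below are all hypothetical:

```python
# Illustrative sketch: fit a 3-parameter exponential and a 3-parameter power
# learning function to simulated per-trial data, then compare them via AIC.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def exp3(t, asym, amp, rate):
    # 3-parameter exponential: performance approaches `asym` at rate `rate`.
    return asym + amp * np.exp(-rate * t)

def pow3(t, asym, amp, rate):
    # 3-parameter power: performance approaches `asym` as a power of trial number.
    return asym + amp * t ** (-rate)

trials = np.arange(1, 1201)             # 1200 training trials, as in the study
truth = exp3(trials, 0.9, -0.4, 0.005)  # hypothetical exponential learner
y = truth + rng.normal(0.0, 0.02, trials.size)

def aic(model, p0):
    # Gaussian log-likelihood AIC, up to an additive constant.
    params, _ = curve_fit(model, trials, y, p0=p0, maxfev=10000)
    resid = y - model(trials, *params)
    n, k = trials.size, len(params)
    return n * np.log(np.mean(resid ** 2)) + 2 * k

aic_exp = aic(exp3, (0.9, -0.4, 0.01))
aic_pow = aic(pow3, (0.9, -0.4, 0.5))
print(aic_exp < aic_pow)  # exponential should win on exponentially generated data
```

In the study itself this comparison was run per participant on trial-level data, so each participant receives their own best-fitting function rather than one fit to a group average.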