**The learning curve in perceptual learning is typically sampled in blocks of trials, which can result in imprecise and possibly biased estimates, especially when learning is rapid. Recently, Zhao, Lesmes, and Lu (2017, 2019) developed a Bayesian adaptive quick Change Detection (qCD) method to accurately, precisely, and efficiently assess the time course of perceptual sensitivity change. In this study, we implemented and tested the qCD method in assessing the learning curve in a four-alternative forced-choice global motion direction identification task in both simulations and a psychophysical experiment. The stimulus intensity in each trial was determined by the qCD, staircase, or random stimulus selection (RSS) method. Simulations showed that the accuracy (bias) and precision (standard deviation or confidence bounds) of the estimated learning curves from the qCD method were much better than those obtained by the staircase and RSS methods; this was true for both trial-by-trial and post hoc segment-by-segment qCD analyses. In the psychophysical experiment, the average half widths of the 68.2% credible interval of the estimated thresholds from the trial-by-trial and post hoc segment-by-segment qCD analyses were both quite small. Additionally, the overall estimates from the qCD and staircase methods matched extremely well in this task, in which the behavioral rate of learning is relatively slow. Our results suggest that the qCD method can precisely and accurately assess the trial-by-trial time course of perceptual learning.**

*d*-primes), contrast thresholds, and difference thresholds (Ball, Sekuler, & Machamer, 1983; C.-B. Huang, Zhou, & Lu, 2008; Karni & Sagi, 1991; Leek, 2001; Levi & Polat, 1996; Z. Liu, 1999; Pelli & Bex, 2013; Pelli & Farell, 1995; Petrov et al., 2005)—are often used to construct learning curves. Existing methods for assessing all three performance measures are based on blocks of measurements with relatively large numbers of trials. The basic assumption of many of these methods is that performance does not change within each measurement block and that some form of averaging can be used to gauge the performance in the block. However, because performance may change continuously during perceptual learning (Lu, Hua, Huang, Zhou, & Dosher, 2011; Mazur & Hastie, 1978; Petrov et al., 2005), even within each measurement block, especially in the early phase of learning, the resulting learning curves can be imprecise and the measurements may be biased; this in turn may, in some circumstances, lead to incorrect inferences about properties of perceptual learning.

log_{10} units). The parameter space included 50 log-linearly spaced *λ* values (from 0.05 to 0.7), 50 log-linearly spaced *γ* values (from 20 to 600), and 50 log-linearly spaced *α* values (from 0.1 to 0.4). (The *λ* and *α* values are in the units of the threshold measurement—here, the proportion of coherent dots—while the *γ* values are in units of trials. Also, in the behavioral experiment, the discrete values of the proportion of coherent dots that can be programmed depend on the total number of dots.) For *λ*, 0 was also included to account for no learning. (*λ*_{mode}, *γ*_{mode}, *α*_{mode}) = (0.36, 326, 0.26) are the modes of the respective secant functions; (*λ*_{confidence}, *γ*_{confidence}, *α*_{confidence}) = (5.81, 3.82, 12.67) are the spreads of the respective secant functions. A one-dimensional stimulus space was used.

In each simulated trial, the *true* threshold, T(*n*), was calculated using Equation 1. Then, the expected probability of making a correct response was calculated for the stimulus level on trial *n*. To determine whether the observer's response was correct on that trial, we first drew a random number *r* from a uniform distribution over the interval from 0 to 1, and then labeled the response as correct if *r* was less than the expected probability of a correct response.

The accuracy (bias) and precision (*SD*) of the estimates were computed (see details in Supplementary Appendix C: Evaluation Methods). We tested the qCD and staircase methods with the same starting levels: +25%, 0%, and −25% from the true threshold (i.e., proportion of coherent dots) in the first trial. Because the pattern of results with the three starting levels exhibited a similar trend, we present only the results with the 0% starting level in the main text; the results with the other two starting levels are presented in Supplementary Appendix D. We did not vary the initial stimulus intensity level in the RSS method because that method selects the stimulus randomly.
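The simulated-observer procedure above can be sketched in a few lines. This is a minimal illustration, not the study's implementation: it assumes the three-parameter exponential learning curve T(*n*) = *λ*·exp(−*n*/*γ*) + *α* (Equation 1) and a Weibull psychometric function with a 4AFC guessing rate of 0.25; the slope and lapse values below are placeholders, not the values used in the study.

```python
import math
import random

def true_threshold(n, lam, gamma, alpha):
    """Equation 1: exponential learning curve, threshold on trial n."""
    return lam * math.exp(-n / gamma) + alpha

def p_correct(stimulus, threshold, slope=3.5, guess=0.25, lapse=0.04):
    """Weibull psychometric function for a 4AFC task.
    slope and lapse are illustrative placeholders."""
    p_max = 1.0 - lapse
    return guess + (p_max - guess) * (1.0 - math.exp(-(stimulus / threshold) ** slope))

def simulate_trial(stimulus, n, lam, gamma, alpha, rng=random):
    """Draw r ~ U(0, 1); the response is correct if r < p_correct."""
    t = true_threshold(n, lam, gamma, alpha)
    r = rng.random()
    return r < p_correct(stimulus, t)
```

Running `simulate_trial` once per trial, with the stimulus chosen by qCD, staircase, or RSS, generates one simulated learning session.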

*b* was used to predict the threshold of trial *n* (in log_{10} units). As the block number increased, the biases from the staircase method decreased significantly and finally reached around 0.007 log_{10} units. The estimated thresholds from the staircase method with 80 trials/block (SC80) were more accurate than those from SC160, but only in the early phase of learning, where learned changes are most rapid; the two block sizes were similar later, during the saturated phase of learning.

log_{10} units). In comparison, the RMSE of the estimated block thresholds (i.e., at the points of the empirical thresholds) from SC80 and SC160 was 0.025 and 0.033 for Observer 1, 0.026 and 0.033 for Observer 2, and 0.024 and 0.039 for Observer 3 (all in log_{10} units). The RMSE of the estimated thresholds from the RSS method was 0.084, 0.084, and 0.084 log_{10} units for Observers 1, 2, and 3, respectively.

The *SD*s of the estimated thresholds from the qCD and staircase methods (simulations with the 0% starting level) and the RSS method are shown in Figure 4B. The estimated thresholds from the staircase method with the larger block size (e.g., 160 trials) had smaller *SD*s than those with the smaller block size because there are more reversals in larger blocks. In the RSS method, the *SD*s started large and then decreased with training trials. In the qCD method, the *SD*s of the estimated thresholds in the trial-by-trial and post hoc segment-by-segment analyses also decreased significantly as the trial number increased, but the post hoc segment-by-segment analysis provided smaller *SD*s. Furthermore, the *SD*s of the estimated post hoc segment-by-segment thresholds from the qCD method were always considerably smaller than those from SC80, SC160, and the RSS method for the three simulated observers. Averaged across the whole learning curve, the *SD*s of the estimated post hoc segment-by-segment thresholds from the qCD method were 0.009, 0.010, and 0.011 for the three simulated observers; the *SD*s of the estimated block thresholds in SC80 and SC160 were 0.029 and 0.020 for Observer 1, 0.029 and 0.020 for Observer 2, and 0.028 and 0.020 for Observer 3; and the *SD*s of the estimated thresholds from the RSS method were 0.024, 0.025, and 0.026 for the three simulated observers (all in log_{10} units; see Table 3). In summary, based on these simulations, the precision of the thresholds estimated from the qCD method was much higher than that of the thresholds estimated from the staircase and RSS methods. The *SD*s of the estimated thresholds with starting levels ±25% above or below the true initial thresholds are shown in Supplementary Figure S3 of Supplementary Appendix D.

log_{10} units after 160, 640, and 1,920 trials, respectively. Similarly, there was also a monotonic decrease in the 68.2% HWCI from the post hoc segment-by-segment analysis of the estimated learning curves (see the blue line in Figure 4C): The 68.2% HWCI averaged across the three simulated observers was 0.019, 0.008, and 0.008 log_{10} units after 160, 640, and 1,920 trials, respectively. Detailed results of the 68.2% HWCIs with the three starting levels are summarized in Table 4 and Supplementary Figure S4 of Supplementary Appendix D.

The biases and *SD*s of the estimated parameters of the learning curves from the qCD and staircase methods (with the 0% starting level) and the RSS method are shown in Figure 5. For the qCD method, the bias was computed from the post hoc segment-by-segment analysis. For the staircase method, we calculated the bias and *SD* of the optimized parameters of the exponential model fit to the threshold estimates. For the RSS method, trial-by-trial data were fit with an exponential function using a maximum-likelihood method to obtain the estimated parameters. The biases of the estimated parameters from the qCD method were much smaller than those from the staircase and RSS methods, especially for simulated Observer 1, who had a faster learning parameter. For example, when the time constant was 80 trials (Observer 2), the biases of the estimated *λ* from the qCD, SC80, SC160, and RSS methods were −0.016, 0.210, 0.490, and −0.096 log_{10} units, respectively. Similarly, the *SD*s of the parameter estimates from the post hoc segment-by-segment qCD analysis were much lower than those derived from fitting the exponential model to the staircase and RSS thresholds. For example, for Observer 2 the *SD*s of the *λ* estimates from the qCD, SC80, SC160, and RSS methods were 0.081, 0.648, 0.884, and 0.259 log_{10} units, respectively. Note that 0.1, 0.5, and 1 log_{10} units denote about 25%, 300%, and 1,000% (ratio) deviations from the truth, respectively. Based on the simulations, the qCD method yielded higher accuracy and precision for the estimated parameters than the staircase and RSS methods. Furthermore, both the staircase and RSS methods were less effective in estimating the parameters when learning was rapid (Observer 1), while the qCD method yielded good parameter estimates in all cases (see Tables 5 and 6 for details). In addition, the starting level was more likely to affect the accuracy of the parameters estimated with the staircase method, while the biases of the parameters estimated with the qCD method did not vary much with the starting level. For example, the biases of the estimated *λ* from the qCD method with the +25%, 0%, and −25% starting levels were −0.057, −0.051, and −0.054 log_{10} units, respectively, but were 0.221, 0.321, and 0.336 log_{10} units with the SC80 method (see Tables 5 and 6, and Supplementary Figures S5 and S6 of Supplementary Appendix D for details).
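Fitting the exponential model to a sequence of threshold estimates, as was done for the staircase and RSS data, can be sketched with a coarse grid search minimizing squared error in log_{10} units. The study used maximum-likelihood fitting; the grid-search objective and candidate grids here are illustrative simplifications.

```python
import math

def exp_curve(n, lam, gamma, alpha):
    """Exponential learning curve (Equation 1 form)."""
    return lam * math.exp(-n / gamma) + alpha

def fit_exponential(trials, thresholds, lams, gammas, alphas):
    """Grid-search least squares in log10 units over candidate parameters."""
    best, best_err = None, float("inf")
    for lam in lams:
        for gamma in gammas:
            for alpha in alphas:
                err = sum(
                    (math.log10(exp_curve(n, lam, gamma, alpha)) - math.log10(t)) ** 2
                    for n, t in zip(trials, thresholds)
                )
                if err < best_err:
                    best, best_err = (lam, gamma, alpha), err
    return best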

The 68.2% HWCI of the estimated *λ* was 0.114, 0.097, and 0.096 log_{10} units after 1, 640, and 1,920 trials, respectively. The 68.2% HWCI of the estimated *γ* was 0.237, 0.156, and 0.115 log_{10} units after 1, 640, and 1,920 trials, respectively. The 68.2% HWCI of the estimated *α* was 0.088, 0.027, and 0.009 log_{10} units after 1, 640, and 1,920 trials, respectively (see Table 7 and Supplementary Figure S7 of Supplementary Appendix D for details). These results indicate that the qCD method can estimate the parameters of the learning curve with relatively high precision.

The qCD method estimated the initial threshold (IT = *λ* + *α*) and percent of improvement (PI = *λ*/*α*) with small bias and *SD*. For example, when the time constant was 80 trials (simulated Observer 2), the estimated IT was 0.652 ± 0.069 (*M* ± *SD*) from the qCD method, 1.871 ± 2.792 from SC80, 3.818 ± 3.691 from SC160, and 0.674 ± 0.749 from RSS. The distributions of the estimated PI (Figure 7B) from the qCD method were also much narrower and closer to the true PI (240%) than those from SC80, SC160, and RSS. For example, when the time constant was 80 trials (Observer 2), the estimated PI was 237% ± 25% (*M* ± *SD*) from the qCD method, 665% ± 986% from SC80, 1,367% ± 1,327% from SC160, and 335% ± 1,424% from RSS. The estimated PIs from the staircase and RSS methods had large *SD*s and deviated from the true PI. These results demonstrate that the accuracy and precision of the estimated IT and PI from the qCD method were much higher than those from the staircase and RSS methods. The means and *SD*s of the estimated IT and PI for the three observers with the three starting levels are summarized in Table 8 (see Supplementary Figures S8 and S9 of Supplementary Appendix D for the distributions with the +25% and −25% starting levels).
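Given fitted parameters, the two derived measures follow directly. A minimal sketch; note that IT = *λ* + *α* appears in the text, while expressing PI as the ratio *λ*/*α* in percent is an inference from the reported true PI of 240%, so treat that definition as an assumption:

```python
def initial_threshold(lam, alpha):
    # IT: threshold on the first trial (dynamic range + asymptote)
    return lam + alpha

def percent_improvement(lam, alpha):
    # PI: improvement relative to the asymptotic threshold, in percent
    # (assumed definition: 100 * lambda / alpha)
    return 100.0 * lam / alpha
```

Because PI is a ratio of two estimated parameters, its sampling distribution inherits (and amplifies) the variability of both, which is why imprecise *λ* and *α* estimates from the staircase and RSS methods produce the very wide PI distributions reported above.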

cd/m^{2}. Observers placed their heads on a chin rest and viewed the displays binocularly. The display subtended 27.8° × 21.6° at a viewing distance of 0.69 m.

The dynamic range (*λ*) from the qCD method was computed from both the trial-by-trial and post hoc segment-by-segment analyses. In the trial-by-trial analysis of the qCD data, the estimated thresholds across all the trials were first fit by an exponential function; the estimated dynamic range was then obtained from the parameters of the best-fitting model. Averaged across the five observers, the dynamic range was 0.272 ± 0.033 (*M* ± *SE*). Similarly, the average dynamic range across the five observers in the post hoc segment-by-segment analysis was 0.311 ± 0.051 (*M* ± *SE*). There was no significant difference between the estimated dynamic ranges from the trial-by-trial and post hoc segment-by-segment analyses of the qCD data, *t*(4) = −1.823, *p* = 0.142.

(*M* ± *SE*); (b) based on the estimated thresholds of the first and last trials, calculated from the best-fitting exponential model of the block-by-block thresholds, the average dynamic range was 0.393 ± 0.072 (*M* ± *SE*). The PIs estimated in these two different ways were significantly different, *t*(4) = −3.28, *p* = 0.031.

(*M* ± *SE*) log_{10} units after 160, 640, and 1,760 trials. Similarly, the 68.2% HWCI of the estimated thresholds from the post hoc segment-by-segment analysis also decreased monotonically with trial number: The average 68.2% HWCI was 0.017 ± 0.001, 0.012 ± 0.001, and 0.013 ± 0.001 log_{10} units (*M* ± *SE*) after 160, 640, and 1,760 trials, respectively. These results indicate that the qCD method can precisely estimate the learning curve in the global motion direction identification task.

The average 68.2% HWCI of *λ* started at 0.197 in the first trial and decreased to 0.130 after 320 trials, 0.083 after 640 trials, and 0.052 after 1,280 trials; the average 68.2% HWCI of *γ* started at 0.277 in the first trial and decreased to 0.171 after 320 trials, 0.121 after 640 trials, and 0.087 after 1,280 trials; and the average 68.2% HWCI of *α* started at 0.101 in the first trial and decreased to 0.074 after 320 trials, 0.063 after 640 trials, and 0.033 after 1,280 trials, all in log_{10} units. These results indicate that the qCD method can precisely estimate the parameters of perceptual learning even when learning is slow.

(*M* ± *SD*). We also computed the RMSE between the estimated thresholds from the two methods: 0.086 ± 0.024 log_{10} units. Results from both the correlation analysis and the RMSE calculation suggest that the estimated thresholds from the two methods matched quite well.
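The agreement analysis described above can be sketched as a Pearson correlation plus an RMSE computed in log_{10} units; the sample data in the test are illustrative, not the study's thresholds.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def rmse_log10(x, y):
    """RMSE between two threshold sequences, in log10 units."""
    d = [math.log10(a) - math.log10(b) for a, b in zip(x, y)]
    return math.sqrt(sum(e * e for e in d) / len(d))
```

Computing the error in log_{10} units makes the RMSE a relative (ratio) measure, which is appropriate for thresholds that span a wide range during learning.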

*λ*) and time constant (*γ*) of learning.

*SD*s and, if used, might yield imprecise estimates of the transfer index. The qCD method, on the other hand, might yield accurate and precise estimates of the initial threshold, percent of improvement, and time constant, and therefore much improved estimates of the transfer index.

*a priori* knowledge about the time course of perceptual sensitivity change for the studied population. The more informative the prior is—that is, the more we know about the properties of the population under study—the faster the posterior converges (Gu et al., 2016; Kim, Pitt, Lu, Steyvers, & Myung, 2014).

*Vision Research, 36*(21), 3487–3500.
*Nature, 387*(6631), 401–406.
*Memory & Cognition, 25*(5), 724–730.
*IEEE Transactions on Engineering Management, 39*(2), 176–188.
*Science, 218*(4573), 697–698.
*Vision Research, 27*(6), 953–965.
*Vision Research, 23*(3), 229–238.
*Spatial Vision, 10*, 433–436.
*Nature Neuroscience, 4*(5), 519–525.
*Journal of Vision, 15*(10): 11, 1–16, https://doi.org/10.1167/15.10.11. [PubMed] [Article]
*Proceedings of the National Academy of Sciences, 95*(23), 13988–13993.
*Vision Research, 39*(19), 3197–3221.
*Proceedings of the National Academy of Sciences, USA, 102*(14), 5286–5290.
*Psychological Science, 18*(6), 531–539.
*Annual Review of Vision Science, 3*(1), 343–363.
*Journal of Vision, 4*(10): 4, 879–890, https://doi.org/10.1167/4.10.4. [PubMed] [Article]
*Vision Research, 33*(3), 397–412.
*Vision Research, 35*(21), 3003–3013.
*Current Biology, 6*(3), 292–297.
*Nature, 287*(5777), 43–44.
*Journal of Neurophysiology, 87*(4), 1867–1888.
*Nature, 402*(6758), 176–178.
*Annual Review of Psychology, 49*(1), 585–612.
*Stevens' handbook of experimental psychology and cognitive neuroscience* (pp. 1–47). New York, NY: John Wiley & Sons, Inc.
*Journal of Vision, 16*(6): 15, 1–17, https://doi.org/10.1167/16.6.15. [PubMed] [Article]
*Nature Neuroscience, 7*, 1055.
*Psychonomic Bulletin & Review, 7*(2), 185–207.
*Vision Research, 37*(15), 2133–2141.
*Journal of Vision, 15*(9): 2, 1–18, https://doi.org/10.1167/15.9.2. [PubMed] [Article]
*Vision Research, 61*, 25–32.
*Proceedings of the National Academy of Sciences, 105*(10), 4068–4073.
*Journal of Vision, 17*(6): 7, 1–10, https://doi.org/10.1167/17.6.7. [PubMed] [Article]
*The Journal of Neuroscience, 34*(25), 8423–8431.
*Vision Research, 50*(19), 1928–1940.
*Journal of Vision, 9*(3): 1, 1–13, https://doi.org/10.1167/9.3.1. [PubMed] [Article]
*Proceedings of the National Academy of Sciences, 88*(11), 4966–4970.
*Current Biology, 27*(6), 840–846.
*Journal of Vision, 17*(11): 3, 1–16, https://doi.org/10.1167/17.11.3. [PubMed] [Article]
*Neural Computation, 26*(11), 2465–2492.
*Vision Research, 39*(16), 2729–2737.
*Vision Research, 33*(16), 2287–2300.
*Perception & Psychophysics, 63*(8), 1279–1292.
*Vision Research, 46*(19), 3160–3176.
*Journal of Vision, 10*(3): 17, 1–21, https://doi.org/10.1167/10.3.17. [PubMed] [Article]
*Frontiers in Psychology, 6*, 1070.
*Proceedings of the National Academy of Sciences, 93*(13), 6830–6834.
*Journal of the Acoustical Society of America, 49*(2), 467–477.
*Journal of Vision, 15*(10): 1, 1–9, https://doi.org/10.1167/15.10.1. [PubMed] [Article]
*Scientific Reports, 7*(1): 7421.
*Journal of Vision, 10*(10): 29, 1–14, https://doi.org/10.1167/10.10.29. [PubMed] [Article]
*Vision Research, 61*, 15–24.
*Stimulus specificity in perceptual learning: Is it a consequence of experiments that are also stimulus specific?* Princeton, NJ: NEC Research Institute.
*Proceedings of the National Academy of Sciences, USA, 96*(24), 14085–14087.
*Vision Research, 40*(1), 97–109.
*Vision Research, 46*(15), 2315–2327.
*Visual psychophysics: From laboratory to theory*. Cambridge, MA: The MIT Press.
*Neurobiology of Learning and Memory, 95*(2), 145–151.
*Trends in Cognitive Sciences, 20*(8), 561–563.
*Vision Research, 50*(4), 375–390.
*Journal of Vision, 18*(10): 256, https://doi.org/10.1167/18.10.256. [Abstract]
*Perceptual and Motor Skills, 97*(3f), 1137–1149.
*Psychological Bulletin, 85*(6), 1256–1274.
*Journal of Neuroscience, 27*(42), 11401–11411, https://doi.org/10.1523/JNEUROSCI.3002-07.2007.
*Spatial Vision, 10*(4), 437–442.
*Vision Research, 90*, 10–14.
*Handbook of Optics* (2nd ed., Vol. I, pp. 29.21–29.13). New York, NY: McGraw-Hill.
*Psychological Review, 112*(4), 715–743.
*Science, 256*(5059), 1018–1021.
*Proceedings of the National Academy of Sciences, USA, 101*(17), 6692–6697.
*Current Biology, 7*(7), 461–467.
*Vision Research, 35*(4), 519–527.
*Vision Research, 43*(12), 1365–1374.
*Vision Research, 51*(13), 1552–1566.
*Nature Reviews Neuroscience, 11*(1), 53–60, https://doi.org/10.1038/nrn2737.
*Nature, 412*, 549.
*Current Biology, 27*(13), R631–R636.
*Neuron, 61*(5), 700–707.
*Vision Research, 49*(21), 2574–2585.
*Psychological Science, 26*(12), 1854–1862, https://doi.org/10.1177/0956797615598976.
*Journal of Vision, 13*(9): 249, https://doi.org/10.1167/13.9.249. [Abstract]
*Journal of the Acoustical Society of America, 41*, 782–787.
*Vision Research, 35*(17), 2503–2522.
*Perception & Psychophysics, 33*(2), 113–120.
*Current Biology, 18*(24), 1922–1926.
*Journal of Vision, 18*(8): 11, 1–21, https://doi.org/10.1167/18.8.11. [PubMed] [Article]
*Journal of Vision, 18*(10): 1068, https://doi.org/10.1167/18.10.1068. [Abstract]
*Investigative Ophthalmology & Visual Science, 58*(8), 5633–5633.
*Vision Research, 154*, 21–43.
*Vision Research, 46*(5), 739–750.