Abstract
Perceptual learning improves perceptual sensitivity through training. Because many trials are required for each threshold estimate, the learning curve is typically sampled in blocks of trials, which yields imprecise and biased estimates of learning. Recently, Zhao et al. (2017) developed a Bayesian adaptive quick Change Detection (qCD) method, based on the framework of Lesmes et al. (2009), to assess the time course of perceptual sensitivity change accurately, precisely, and efficiently. The method selects the optimal stimulus and updates, trial by trial, a joint probability distribution over the parameters quantifying the change in perceptual sensitivity. Here, we implemented and tested the qCD method in a 4-alternative forced-choice (4AFC) global motion direction identification task. Five subjects performed 960 trials of the qCD method interleaved with 960 trials of a 3-down/1-up staircase, with feedback. In each trial, a random-dot kinematogram (RDK) moved in one of four directions (45, 135, 225, or 315 degrees), with the coherence of the next trial determined by the qCD or the staircase. On average, training reduced coherence thresholds by 57.3% ± 2.1% and 59.9% ± 3.0%, estimated with the qCD and staircase, respectively. The qCD method can estimate the learning curve either trial by trial or as a single exponential learning curve. In the trial-by-trial analysis, the average 68.2% half-width of the credible interval (HWCI) of the estimated threshold was 0.031 ± 0.001, 0.024 ± 0.001, and 0.015 ± 0.001 log units after 80, 320, and 880 trials, respectively. The average HWCI of the threshold estimated from the entire exponential learning curve was 0.013 ± 0.000 log units, with HWCIs of 0.055 ± 0.003, 0.074 ± 0.003, and 0.018 ± 0.001 log units for the magnitude of learning, time constant, and asymptotic level, respectively. Additionally, the overall estimates from the two methods matched extremely well (average r of 0.903 ± 0.022, all p < 0.05).
The qCD method assesses trial-by-trial threshold changes precisely and accurately, showing great promise for characterizing perceptual learning in finer detail.
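As a minimal sketch of the parametric model described above: the abstract characterizes learning as a single exponential curve with three parameters (magnitude of learning, time constant, and asymptotic level). The function and parameter values below are illustrative assumptions, not the authors' fitted values; they merely show how a coherence threshold decays from an initial level toward an asymptote over trials.

```python
import math

def exponential_learning_curve(t, c0, c_inf, tau):
    """Coherence threshold at trial t, assuming exponential learning.

    Illustrative parameterization (not the authors' exact notation):
      c0    -- initial threshold (threshold at trial 0)
      c_inf -- asymptotic threshold after learning
      tau   -- time constant, in trials
    The magnitude of learning corresponds to c0 - c_inf.
    """
    return c_inf + (c0 - c_inf) * math.exp(-t / tau)

# Hypothetical example values: start at 0.8 coherence, asymptote at 0.3,
# with a time constant of 200 trials.
c0, c_inf, tau = 0.8, 0.3, 200.0
final = exponential_learning_curve(960, c0, c_inf, tau)
reduction = 100.0 * (c0 - final) / c0  # percent threshold reduction
```

In the qCD framework, a joint posterior over (c0, c_inf, tau) would be updated after every trial, and the HWCI values reported above quantify the posterior uncertainty of such parameters.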
Meeting abstract presented at VSS 2018