Zhong-Lin Lu, Yukai Zhao, Jiajuan Liu, Barbara Dosher; Hierarchical Bayesian modeling of mixed training accuracy effects in perceptual learning. Journal of Vision 2021;21(9):2219. doi: https://doi.org/10.1167/jov.21.9.2219.
Liu et al. (2012) found that mixing high- and low-accuracy training led to significant perceptual learning without feedback in a Gabor orientation identification task, based on block-by-block learning curves. In this study, we developed and fit a hierarchical Bayesian model (HBM) to the trial-by-trial data from all six groups in Liu et al. (2012) (mixtures of high-high, high-low, and low-low training accuracies, each with and without feedback) to estimate the posterior distributions of the parameters and hyperparameters of the learning curves, as well as their covariances, at both the subject and group levels. The learning curves were modeled as exponential functions with three parameters: time constant (TC), initial threshold, and asymptotic threshold. We computed the distributions of the means (M) of the learning-curve parameters, as well as the learned threshold reduction (d' = M/SD), for each group. Based on the 95% credible interval of the d' distributions, we found significant learning in the high-high groups with feedback (0.17±0.04 log10 units; d': 6.00±2.59; TC: 377±73 trials) and without feedback (0.25±0.07 log10 units; d': 3.70±1.60; TC: 427±67 trials), the high-low groups with feedback (0.17±0.05 log10 units; d': 4.30±1.81; TC: 418±69 trials) and without feedback (0.22±0.06 log10 units; d': 4.80±1.90; TC: 419±69 trials), and the low-low group with feedback (0.18±0.05 log10 units; d': 4.80±2.11; TC: 368±74 trials), but no significant learning in the low-low group without feedback (0.08±0.07 log10 units; d': 1.70±1.43). In addition, the magnitudes of learning and the time constants did not differ significantly among the five groups with significant learning. Although the results were qualitatively consistent with Liu et al. (2012), the new trial-by-trial analysis yields joint posterior distributions, specifies both group- and subject-level performance and variability, and characterizes individual and group learning in one unified model that accounts for the full data set.
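As a minimal sketch of the quantities described above, the snippet below implements the three-parameter exponential learning curve and computes a d' = M/SD summary from posterior samples. The function name, parameterization, and the synthetic Gaussian "posterior" samples are illustrative assumptions for exposition, not the authors' actual model code; the sample moments are chosen only to be on the scale of the values reported in the abstract.

```python
import numpy as np

def learning_curve(t, initial, asymptote, tc):
    """Exponential learning curve (assumed parameterization):
    log threshold decays from `initial` toward `asymptote`
    with time constant `tc`, in trials."""
    return asymptote + (initial - asymptote) * np.exp(-t / tc)

# Synthetic stand-in for posterior samples of the group-level learned
# threshold reduction (log10 units); purely illustrative, not real data.
rng = np.random.default_rng(0)
reduction_samples = rng.normal(0.17, 0.04, size=5000)

# Learned threshold reduction summarized as d' = M / SD over the posterior.
m = reduction_samples.mean()
sd = reduction_samples.std(ddof=1)
d_prime = m / sd
print(f"reduction M = {m:.3f} log10 units, d' = {d_prime:.2f}")
```

A d' computed this way measures how far the posterior mass of the threshold reduction sits from zero in units of its spread, which is why a group with a small mean reduction and wide posterior (the low-low, no-feedback group) can fail to show significant learning.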