Abstract
Contrast sensitivity tests represent an important diagnostic tool for evaluating retinal and central visual function. Current algorithms tend to identify contrast sensitivity thresholds either with classic staircase designs or with adaptive parametric function fits. Both methods have distinct advantages but are relatively inflexible in their ability to trade off accuracy against efficiency. Recently, nonparametric Bayesian estimators have been developed for visual and auditory psychometric function estimation that allow fine-tuning between accuracy and efficiency without compromising either in absolute terms. A Gaussian process (GP) estimator for contrast sensitivity was developed and evaluated using simulated and retrospective human data. Generative models of Gabor pattern detection as a function of spatial frequency and contrast simulated human responses for normal vision as well as canonical examples of amblyopia, cataract, and multiple sclerosis. The GP estimator used simulated data acquired either by random sampling or by active machine learning to estimate a contrast sensitivity function (CSF) for each phenotype. The resulting CSFs were compared against the ground-truth generative models to quantify accuracy (bias + variance) of the GP estimator as a function of observation count. Between 10 and 50 observations led to near convergence of CSF estimates in these test cases with no evidence of bias, confirming that nonparametric GP estimators are capable of delivering appropriate test results under these test conditions. These results were confirmed by sampling, without replacement, one observation at a time from retrospective human contrast sensitivity test data. Prior beliefs affect the trajectory of convergence, as would be expected for a Bayesian method, but do not appear to affect the ultimate estimate accuracy.
Together, these results imply that a modern machine learning classification algorithm could play a useful role in accurate, efficient diagnostic procedures for estimating contrast sensitivity functions.
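The core idea above can be sketched in a few lines: simulate binary (detected / not detected) responses from a toy generative model over spatial frequency and contrast, fit a nonparametric GP classifier to the responses, and read the CSF off as the 50% detection contour. This is an illustrative sketch only; the generative model, kernel, length scales, and sampling scheme here are hypothetical stand-ins, not the estimator evaluated in the study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Toy generative model (hypothetical): detection threshold in log contrast
# rises with log spatial frequency; response probability is a sigmoid in
# the distance of the stimulus contrast from that threshold.
def p_detect(log_sf, log_contrast):
    threshold = 0.5 * log_sf - 1.5  # toy CSF threshold surface
    return 1.0 / (1.0 + np.exp(-(log_contrast - threshold) / 0.2))

# Random sampling of stimulus locations (log spatial frequency in [0, 1.5],
# log contrast in [-2, 0]) with simulated Bernoulli responses.
X = rng.uniform([0.0, -2.0], [1.5, 0.0], size=(50, 2))
y = rng.random(50) < p_detect(X[:, 0], X[:, 1])

# Nonparametric GP estimate of the detection-probability surface; the CSF
# is the contrast at which predicted detection probability crosses 0.5.
gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=[0.5, 0.5]))
gp.fit(X, y)

# Query two probe stimuli at the same spatial frequency: one well above
# the toy threshold, one well below it.
probes = np.array([[0.5, -0.5], [0.5, -1.8]])
p_hat = gp.predict_proba(probes)[:, 1]
```

An active-learning variant would replace the random `X` with points chosen to maximize information about the threshold contour, which is how the study trades efficiency against accuracy.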