Abstract
Contrast Sensitivity Functions (CSFs) are useful diagnostic adjuncts for assessing both retinal and central visual function. Gaussian Process (GP) classifiers have been shown to estimate individual CSF models efficiently by leveraging active machine learning for optimal stimulus selection, with model convergence achievable using between 10 and 50 actively selected stimuli. Because it assumes model independence, this disjoint process requires sequential estimation to obtain CSF models for multiple eyes or stimulus conditions (e.g., luminance, eccentricity). Conjoint estimators, on the other hand, have now been developed to estimate multiple CSFs simultaneously using an active multitask implementation. In the current study, conjoint CSF estimator performance was compared to disjoint performance on simulated eyes using generative models created from human data. The high degree of expected similarity between CSFs originating from different eyes or conditions allows conjoint learning between the related models, a procedure designed to enable faster convergence than sequential disjoint model learning. Indeed, conjoint CSF estimation did speed model convergence over disjoint estimation under commonly encountered scenarios. These findings confirm that incorporating information beyond immediate behavioral responses into new machine learning models of visual function may improve visual system assessment.
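The conjoint idea described above can be illustrated with a minimal sketch. This is not the study's implementation: it assumes a toy psychometric ground truth (`simulated_response`), passive rather than active stimulus selection, and a simple multitask construction in which a task index (eye/condition) is appended to the stimulus features so that a single GP classifier can share information across the related CSFs.

```python
# Hypothetical sketch of conjoint CSF-style estimation: one GP classifier
# over (frequency, contrast, task) so related tasks inform each other.
# Stimulus selection here is random, not active, for brevity.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def simulated_response(freq, contrast, task):
    # Toy ground truth (an assumption, not the paper's generative model):
    # detection probability rises as contrast exceeds a frequency-dependent
    # threshold; task 1 is shifted slightly, mimicking a related eye.
    threshold = 0.1 + 0.2 * freq + 0.05 * task
    p = 1.0 / (1.0 + np.exp(-(contrast - threshold) * 20.0))
    return rng.random() < p

# Collect simulated trials for two tasks (e.g., two eyes).
X, y = [], []
for task in (0, 1):
    for _ in range(40):
        f, c = rng.random(), rng.random()
        X.append([f, c, task])  # task index appended as an input feature
        y.append(simulated_response(f, c, task))
X = np.array(X)
y = np.array(y, dtype=int)

# A shared anisotropic RBF kernel; the long length scale on the task
# dimension lets the two CSF surfaces borrow strength from each other.
gpc = GaussianProcessClassifier(kernel=RBF(length_scale=[0.3, 0.3, 1.0]))
gpc.fit(X, y)

# Predicted detection probability for the same stimulus in each task.
probe = np.array([[0.5, 0.4, 0], [0.5, 0.4, 1]])
proba = gpc.predict_proba(probe)[:, 1]
print(proba)
```

In a disjoint version of this sketch, one classifier would be fit per task on the first two features only; the conjoint variant instead pools all trials, which is what permits faster convergence when the underlying CSFs are similar.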