Andrew B Watson; Modeling Visual Acuity. Journal of Vision 2016;16(4):35. doi: https://doi.org/10.1167/16.4.35.
Acuity is the most widely used measure of visual function in both research and clinical settings. It estimates the minimal size at which a particular set of symbols (optotypes) can be identified reliably. To understand the roles of the optical and neural contributions, we have developed a computational model of visual acuity.
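For concreteness, acuity thresholds are conventionally reported in logMAR units. The short Python sketch below (illustrative only, not drawn from the model itself) shows the standard conversion from a threshold optotype size to logMAR, assuming five-stroke optotypes such as Sloan letters.

```python
import math

def logmar_from_threshold(letter_size_arcmin: float) -> float:
    """Convert a threshold optotype size (arcmin) to logMAR acuity.

    Standard optotypes (e.g., Sloan letters) are five stroke widths tall,
    so the minimum angle of resolution (MAR) is one fifth of the letter size.
    """
    mar = letter_size_arcmin / 5.0   # stroke width in arcmin
    return math.log10(mar)           # logMAR = log10(MAR)

# A 5 arcmin letter (MAR = 1 arcmin) corresponds to logMAR 0, i.e. 20/20.
print(logmar_from_threshold(5.0))  # 0.0
```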
Our model includes rendering of the retinal image by an optical point-spread function, anisoplanatic filtering of the retinal image by an array of midget retinal ganglion cells, perturbation by ganglion cell noise, and classification using an optimal template-matching procedure. We call this the Neural Image Classifier (Watson & Ahumada, 2015).
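The following Python sketch illustrates the four stages of the classifier in simplified form. The function and variable names are ours, and the space-variant (anisoplanatic) ganglion-cell filtering of the published model is replaced here by a simple gain map for brevity; the sketch is a conceptual outline, not the model's implementation.

```python
import numpy as np

def classify_stimulus(stimulus, psf, rgc_gains, templates, noise_sd, rng):
    """One pass through a simplified Neural Image Classifier pipeline.

    stimulus, psf : 2-D arrays (optotype image and optical point spread)
    rgc_gains     : 2-D array standing in for the midget RGC sampling array
                    (the published model uses anisoplanatic filtering, i.e.
                    receptive fields that grow with eccentricity)
    templates     : dict mapping optotype label -> expected neural image
    """
    # 1. Optics: render the retinal image by convolution with the PSF.
    retinal = np.real(np.fft.ifft2(
        np.fft.fft2(stimulus) * np.fft.fft2(psf, stimulus.shape)))
    # 2. Neural sampling: weight by the ganglion-cell array (crude stand-in).
    neural = retinal * rgc_gains
    # 3. Noise: perturb each ganglion-cell response.
    noisy = neural + rng.normal(0.0, noise_sd, neural.shape)
    # 4. Template matching: pick the optotype whose template correlates best.
    return max(templates, key=lambda label: float(np.vdot(templates[label], noisy)))
```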
This model builds on ideas from optical simulation (Artal et al., 1989), ideal observer models (Geisler, 1989), and letter identification (Beckmann & Legge, 2002; Chung et al., 2002; Dalimier & Dainty, 2008; Gold et al., 1999; Nestares et al., 2003; Parish & Sperling, 1991; Watson & Fitzhugh, 1989).
For a given optical and neural configuration, acuity values can be estimated by conducting psychophysical trials using Monte-Carlo simulation. The model relies on other models we have developed of pupil diameter (Watson & Yellott, 2012), optical point-spread (Watson, 2013), and distribution of retinal ganglion cells (Watson, 2014).
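As an illustration of the Monte-Carlo procedure, the sketch below scans optotype sizes and returns the smallest size identified above a criterion proportion correct. Here make_trial is a hypothetical callback that would render an optotype at the given size, pass it through the model (for instance classify_stimulus above), and report whether the response was correct; a full implementation would instead fit a psychometric function to proportion correct versus size.

```python
def estimate_acuity(sizes_arcmin, n_trials, make_trial, rng, criterion=0.55):
    """Estimate acuity as the smallest size identified above criterion.

    make_trial(size, rng) runs one simulated trial and returns True on a
    correct response. For ten Sloan letters chance is 0.1, so a criterion
    of 0.55 is roughly midway between chance and perfect performance.
    """
    for size in sorted(sizes_arcmin):
        correct = sum(make_trial(size, rng) for _ in range(n_trials))
        if correct / n_trials >= criterion:
            return size          # threshold size in arcmin
    return None                  # acuity worse than the largest size tested
```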
The model has been used to predict the effects of particular wavefront aberrations on acuity (Watson & Ahumada, 2008), to predict acuity for optotypes varying in complexity (Watson & Ahumada, 2012), and to predict the effect of size on contrast thresholds for letter identification (Watson & Ahumada, 2015).
Here we describe elements of the model and illustrate how it is used to compute predictions of acuity.