Open Access
Article  |   April 2024
Defining metrics of visual acuity from theoretical models of observers
Charles-Edouard Leroux, Conor Leahy, Justine Dupuis, Christophe Fontvieille, Fabrice Bardin
Journal of Vision, April 2024, Vol. 24(4), 14. https://doi.org/10.1167/jov.24.4.14
Abstract

Many experimental studies show that metrics of visual image quality can predict changes in visual acuity due to optical aberrations. Here we use statistical decision theory and Fourier optics formalism to demonstrate that two metrics known in the field of vision sciences are approximations of two different theoretical models of linear observers. The theory defines metrics of visual acuity that can potentially predict changes in visual acuity due to optical aberrations without needing a posteriori scale or offset. We illustrate our approach with experiments, using combinations of defocus and spherical aberration, and pure coma.

Introduction
Relating wavefront measurements to visual performance is a basic question in the field of spatial vision, with major implications for clinical applications of aberrometry. Among all kinds of proposed approaches, using a metric of visual image quality (Cheng, Thibos, & Bradley, 2003; Chen, Singer, Guirao, Porter, & Williams, 2005), that is, a single number directly computed from the wavefront, has emerged as the most practical approach to account for experimental measurements of visual performance (Marsack, Thibos, & Applegate, 2004; Cheng, Bradley, & Thibos, 2004) and to estimate an objective refraction based on aberration measurements (Guirao & Williams, 2003; Martin, Vasudevan, Himebaugh, Bradley, & Thibos, 2011; Kilintari, Pallikaris, Tsiklis, & Ginis, 2010). Several metrics relate to the Strehl ratio, with different approaches to take account of the neural processing of retinal images (Thibos, Hong, Bradley, & Applegate, 2004). The visual Strehl computed in the spatial domain (VSX) quantifies the fraction of the eye's point spread function (PSF) that overlaps with a neural weighting function. The VSX metric has been shown to be reliable for objective refraction (Hastings, Marsack, Nguyen, Cheng, & Applegate, 2017) and can quantify optical quality after refraction (Hastings, Marsack, Thibos, & Applegate, 2018). The visual Strehl based on the optical transfer function (VSOTF) quantifies the peak of the PSF after taking account of the neural contrast loss that is modeled by the neural transfer function (NTF). The VSOTF metric accounts for measurements of the eye's depth of focus (Zheleznyak, Sabesan, Oh, MacRae, & Yoon, 2013; Zheleznyak, Jung, & Yoon, 2014; Yi, Iskander, & Collins, 2011) and the effect of optical aberrations on the accommodative response (Buehren & Collins, 2006). The visual Strehl ratio can also be computed from the modulation transfer function (VSMTF metric). The VSMTF metric predicts the accommodative response of the eye in the presence of aberrations (Tarrant, Roorda, & Wildsoet, 2010; López-Gil et al., 2013). 
For experiments in which the subject serves as their own control, metrics of visual Strehl (VSX, VSOTF, and VSMTF) correlate remarkably well with changes in visual acuity due to optical aberrations (Marsack et al., 2004). These metrics are also robust to the amplitude of aberration and pupil size (Ravikumar, Sarver, & Applegate, 2012), and can predict the effect of normal or keratoconic aberrations (Ravikumar, Marsack, Bedell, Shi, & Applegate, 2013). The correlation between metrics of visual image quality and absolute visual performance depends on experimental conditions such as the overall visual performance of the population (Villegas, Alcón, & Artal, 2008), light level, and optotype contrast (Applegate, Marsack, & Thibos, 2006). Light scattering and subject-dependent neural sensitivity are also fundamental aspects of the visual system that may lower the predictive ability of metrics of visual image quality in a clinical study (Bühren et al., 2009). Monte Carlo simulations of the subject performing the visual test have been used to model visual performance, as they make it possible to analyze the effect of the test protocol, the subject’s strategy to identify optotypes, and the neural and optical properties of their visual system (Nestares, Navarro, & Antona, 2003; Dalimier, Pailos, Rivera, & Navarro, 2009; Watson & Ahumada, 2005; Watson & Ahumada, 2008). 
In an attempt to bypass Monte Carlo simulations but still model the measurement protocol in great detail, Dalimier and Dainty (2008) used statistical decision theory to predict ratios (with/without aberrations) of contrast sensitivity measurements. They computed ratios of data separability using simulated visual images of the actual test optotypes. Similarly, Watson and Ahumada (2008) introduced an acuity metric to bypass Monte Carlo simulations. This acuity metric slightly differs from the concept of metrics of visual image quality, as commonly defined by the community, because it is computed using the set of simulated visual images and the corresponding templates that are used by the observer for letter identification. The main advantage is to predict ratios (with/without aberrations) of visual performance measurements without needing a posteriori scale or offset. In our previous work, we showed that the Dalimier and Dainty model could be further simplified using the "small letter approximation" to define a model-based metric of contrast sensitivity, M, which we computed directly from optical aberrations without actually simulating the visual images (Leroux, Fontvieille, Leahy, Marc, & Bardin, 2022). We described this metric as model based, because it is inherited from the Dalimier and Dainty model. In this work, our starting point is to adapt the Dalimier and Dainty model to visual acuity, using the small letter approximation. The Dalimier and Dainty model is based on a model of an ideal observer, and we also introduce a more realistic model of a "real observer" to define a second model-based metric of visual acuity. The two model-based metrics are compared to experimental measurements of letters lost with combinations of defocus and spherical aberrations, as well as coma. We demonstrate in this work that metrics of visual image quality can be defined from rigorous models and can be customized to the experimental conditions to predict visual acuity accurately. 
Theory of model-based metrics of visual acuity
The classification task of theoretical observers
In the framework of statistical decision theory, we model measurements of visual acuity as a classification task, for which the subject is asked to classify optotypes into K well-known classes, for example, a known set of Sloan letters. We model the observed data i(x, y) as visual images Ik, a(x, y) (k = 1 to K indexes the letter and a is the angular extent of the letter gap), with added independent and identically distributed Gaussian noise n(x, y), of variance σ², of physiological (mostly neural) origin.  
\begin{equation} i(x,y)=I_{k,a}(x,y)+n(x,y) \end{equation}
(1)
 
An observer is a theoretical model of the subject’s own “algorithm” that processes the observed data and classifies it into one of the K classes. Among the many possible observers, those that process the data linearly are attractive models because of their mathematical simplicity. To simplify the classification problem a bit further, we first limit our analysis to the binary classification problem (K = 2). 
Performance analysis of two linear observers in a binary classification problem
In a binary classification problem, a linear observer computes a scalar \(t_{{\rm lin}}\), known as the test statistic, as the scalar product of the data and a model template known as the discriminant T(x, y) (Barrett & Myers, 2003, p. 811):  
\begin{equation} t_{\rm lin}=\int\!\!\!\int i(x,y)T(x,y)dxdy \end{equation}
(2)
The observer chooses between the two classes by comparing \(t_{{\rm lin}}\) to a threshold. 
Among all kinds of linear observers, the ideal linear observer (known as the Hotelling observer) has full knowledge of the expected visual images Ik, a(x, y) and uses their difference as the discriminant. The ideal observer computes the following test statistic:  
\begin{equation} t^*=\int\!\!\!\int i(x,y)\big (I_{1,a}(x,y)-I_{2,a}(x,y)\big )dxdy \end{equation}
(3)
t* is the test statistic of the ideal observer. It is given by Equation 3 because of our hypothesis of independent and identically distributed Gaussian noise (Barrett & Myers, 2003, p. 836). The ideal observer makes the best use of the available information in the observed data to perform the classification efficiently: The performance of the ideal observer is only limited by noise. In the presence of optical aberrations, the ideal observer uses the aberrated visual images as templates to compute the discriminant. What makes the observer ideal is its ability to use templates that perfectly match the noise-free data I1, a(x, y) and I2, a(x, y). The ideal observer was found to predict the effect of optical aberrations on contrast sensitivity measurements with relatively large optotypes: Landolt C with a 3-arcminute gap (Dalimier, Dainty, & Barbur, 2008; Dalimier & Dainty, 2008) and Sloan letters with 2-arcminute gaps (Leroux et al., 2023). In comparison to other theoretical observers, the ideal observer will predict better visual acuity, especially when visual images are strongly altered by optical aberrations. In these conditions, which correspond to combinations of small letters and high amplitudes of aberration, the ideal observer also becomes less realistic, as most subjects may not be able to use the aberrated alphabet as image templates. 
Watson and Ahumada (2008) considered the model of an ideal observer too, but also considered models of visual acuity for which aberration-free images are used as templates by the observer. Similarly, we introduce a second theoretical observer, which compares the observed data i(x, y) to the unaberrated letters Ok(x, y) by computing the following test statistic:  
\begin{equation} t=\int\!\!\!\int i(x,y)\big (O_{1,a}(x,y)-O_{2,a}(x,y)\big )dxdy \end{equation}
(4)
t is the test statistic of this second observer. We will refer to this theoretical observer as the real observer, which uses as the discriminant T(x, y) = O1, a(x, y) − O2, a(x, y). The real observer uses unmatched templates to choose between the two classes. We will refer to this observer as real, because using aberration-free images of letters as templates is a plausible model when the experiment is performed with a known alphabet. Unlike the ideal observer, the real observer is not optimal because it does not know the optical aberrations. 
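As an illustration of Equations 3 and 4, a minimal NumPy sketch (not the code used in this study; the letter images, the blur step, and the noise level are hypothetical placeholders) computes the two test statistics as follows:

import numpy as np

rng = np.random.default_rng(0)

def test_statistic(data, template):
    # Discrete version of the scalar product of Equation 2: sum over pixels of i(x,y)*T(x,y)
    return np.sum(data * template)

# Hypothetical 64 x 64 "letters" O1, O2 and their (here trivially) blurred images I1, I2
O1, O2 = rng.random((64, 64)), rng.random((64, 64))
blur = lambda img: img  # placeholder for the optical and neural filtering of Equation 10
I1, I2 = blur(O1), blur(O2)

sigma = 0.1                                        # assumed neural noise level
data = I1 + sigma * rng.standard_normal(I1.shape)  # a noisy observation of letter 1

t_ideal = test_statistic(data, I1 - I2)  # Equation 3: aberrated (matched) templates
t_real = test_statistic(data, O1 - O2)   # Equation 4: unaberrated templates
# Each observer classifies the observation by comparing its test statistic to a threshold.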
Equations 3 and 4 define the test statistic that is computed by each theoretical model. To identify a letter, each model compares its test statistic to a threshold value that we need not specify in this work, as we seek to compute the theoretical performance of each observer without implementing them with Monte Carlo simulations. In statistical decision theory, t and t* are random variables because of noise. The theory quantifies the noise performance of a theoretical observer as the fraction f of correct response when the observer performs the binary classification task with fixed model parameters (aberrations, noise level σ, letter size, and contrast). For a binary classification task, the signal to noise ratio (SNR) associated with a test statistic is defined (Barrett & Myers, 2003, p. 819) as the difference between the means of the test statistic under the hypotheses that the stimulus is of Class 1 or of Class 2, divided by the standard deviation of t (which is approximately equal for each class). Statistical decision theory relates the SNR to the theoretical fraction f of correct classification achieved by the corresponding observer: \(f=(1+{\rm erf}(SNR/2))/2\) (Barrett & Myers, 2003, pp. 819–823). For a linear observer, the SNR is computed with standard “error propagation,” noting that i(x, y) is a Gaussian random variable of variance σ² in Equation 1. The SNR of linear observers takes the generic form (Barrett & Myers, 2003, p. 852):  
\begin{equation} SNR_{\rm lin}(a)=\frac{\left| \int\!\!\!\int T(x,y)\big (I_{1,a}(x,y)-I_{2,a}(x,y)\big )dxdy\right|}{\sigma \sqrt{ \int\!\!\!\int T^2(x,y)dxdy}} \end{equation}
(5)
 
We obtain, using T(x, y) = I1, a(x, y) − I2, a(x, y) in Equation 5, for the ideal observer:  
\begin{equation} SNR_{t^*}(a)=\frac{1}{\sigma }\sqrt{\int\!\!\!\int \big (I_{1,a}(x,y)-I_{2,a}(x,y)\big )^2dxdy} \end{equation}
(6)
and we obtain, using T(x, y) = O1, a(x, y) − O2, a(x, y) in Equation 5, for the real observer:  
\begin{equation} SNR_{t}(a)=\frac{\left| \int\!\!\!\int \big (I_{1,a}(x,y)-I_{2,a}(x,y)\big )\big (O_{1,a}(x,y)-O_{2,a}(x,y)\big )dxdy\right|}{\sigma \sqrt{ \int\!\!\!\int \big (O_{1,a}(x,y)-O_{2,a}(x,y)\big )^2dxdy}} \end{equation}
(7)
 
For the binary classification, Equation 6 (ideal observer) or Equation 7 (real observer) directly gives the fraction f of correct response and can be used to compute the gap (or acuity) threshold if the noise variance σ² is known. In practice, we do not know σ² and only assume that it remains unchanged in all our experiments (for a given subject). For the binary classification problem, we could use Equation 6 or Equation 7 to model the acuity changes due to aberrations for a given subject performing measurements at a fixed fraction f of correct response at threshold. To do so, we would compute the letter gap a that maintains equal SNR when aberrations degrade the visual images of the two classes (I1, a and I2, a). 
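For concreteness, the SNRs of Equations 6 and 7 and the corresponding fraction of correct response can be evaluated with a short sketch (assumptions as in the previous listing; SciPy is used only for the error function):

import numpy as np
from scipy.special import erf

def snr_ideal(I1, I2, sigma):
    # Equation 6: ideal observer with matched (aberrated) templates
    d = I1 - I2
    return np.sqrt(np.sum(d**2)) / sigma

def snr_real(I1, I2, O1, O2, sigma):
    # Equation 7: real observer projecting the aberrated images on unaberrated templates
    dI, dO = I1 - I2, O1 - O2
    return np.abs(np.sum(dI * dO)) / (sigma * np.sqrt(np.sum(dO**2)))

def fraction_correct(snr):
    # f = (1 + erf(SNR/2)) / 2
    return 0.5 * (1.0 + erf(snr / 2.0))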
Performance of the ideal observer in a K-class problem
\(SNR_{t^*}\) and \(SNR_t\) can be numerically computed for pairs of letters. They both increase when letters are more different, and it is in principle important to take account of all the letters of the alphabet that are used for the visual test. To do so, statistical decision theory introduces data separability (Barrett & Myers, 2003, p. 852), which quantifies the optimal SNR achievable by the ideal observer in a so-called K-class problem (a visual acuity test with K letters). Dalimier and Dainty (2008) built their model of contrast sensitivity measurements using data separability. Data separability S*(a) is a metric of performance for the ideal observer and can be written as  
\begin{equation} S^*(a)=\frac{2}{\sigma \sqrt{K}}\sqrt{\sum _{k=1}^{K} \int\!\!\!\int \big (I_{k,a}(x,y)-\overline{I}_a(x,y)\big )^2dxdy} \end{equation}
(8)
where \(\overline{I}_a(x,y)=\frac{1}{K}\sum _{k=1}^{K} I_{k,a}(x,y)\) is the average visual image across the K letters. Following Dalimier and Dainty, we have scaled the definition of S* in order to have \(SNR_{t^*}(a)=S^*(a)\) for K = 2. We note that S* corresponds to the root mean square value of K values of \(SNR_{t^*}\), which correspond to K hypothetical binary classification problems of choosing between Ik, a and \(\overline{I}_a\). Data separability quantifies the overall difference between the visual images of each letter used for the test, as a single number that is normalized by the level of noise σ. 
Performance of the real observer in a K-class problem
By analogy, we formulate the S(a) metric of performance for the real observer in the K-class problem. We compute the root mean square value of K values of SNRt (Equation 7) that correspond to hypothetical binary classification problems of choosing between Ik, a and \(\overline{I}_a\):  
\begin{equation} S(a)=\frac{2}{\sigma \sqrt{K}}\sqrt{\sum _{k=1}^{K} \frac{\big ( \int\!\!\!\int \big (I_{k,a}(x,y)-\overline{I}_a(x,y)\big )\big (O_{k,a}(x,y)-\overline{O}_a(x,y)\big )dxdy\big )^2}{ \int\!\!\!\int \big (O_{k,a}(x,y)-\overline{O}_a(x,y)\big )^2dxdy} } \end{equation}
(9)
 
We also have S(a) = SNRt(a) for K = 2, and we will refer to S(a) as the data separability of the real observer. 
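A direct transcription of Equations 8 and 9 (a sketch under the same assumptions as the earlier listings; I and O stand for stacks of aberrated and unaberrated letter images) could read:

import numpy as np

def separability_ideal(I, sigma):
    # Equation 8: I has shape (K, Ny, Nx); the sum runs over letters and pixels
    K = I.shape[0]
    dI = I - I.mean(axis=0)                 # I_k minus the mean image across the K letters
    return (2.0 / (sigma * np.sqrt(K))) * np.sqrt(np.sum(dI**2))

def separability_real(I, O, sigma):
    # Equation 9: cross terms between aberrated differences and unaberrated templates
    K = I.shape[0]
    dI = I - I.mean(axis=0)
    dO = O - O.mean(axis=0)
    num = np.sum(dI * dO, axis=(1, 2))**2   # one squared scalar product per letter
    den = np.sum(dO**2, axis=(1, 2))
    return (2.0 / (sigma * np.sqrt(K))) * np.sqrt(np.sum(num / den))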
As long as the gap threshold corresponds to a fixed percentage of correct response (Dalimier & Dainty, 2008), two measurements of visual acuity (with and without aberrations, corresponding to the inverse of the letter gap thresholds aB and a0, respectively) are related by equal data separability (\(S_0^*(a_0)=S_B^*(a_B)\) or S0(a0) = SB(aB), depending on which model of observer we rely on). The 0 index corresponds to the reference (aberration-free) condition, and the B index corresponds to the condition with aberration. Dalimier and Dainty (2008) noted that data separability is proportional to stimulus contrast and predicted the ratio (with/without aberration) of contrast sensitivity measurements as the ratio of data separability with unitary stimulus contrast. Our goal is to use data separability to predict the ratio of letter gap thresholds, the letter gap threshold being the inverse of decimal visual acuity. Because S* and S are not proportional to letter gap a, we introduce below the numerical method that we implemented for each of the two observers. 
Model of visual images
In this work, we model visual images using standard Fourier optics calculations. The visual images take the form of two-dimensional functions of spatial coordinates, and it is important to emphasize at this point that other approaches exist. One can, for instance, take account of the finite bandwidth of independent visual channels (Sachs, Nachmias & Robson, 1971) that are tuned to the spatial spectra of letters (Majaj, Pelli, Kurshan, & Palomares, 2002), and with this approach, model equations take a more algebraic form (Myers & Barrett, 1987; Dalimier, 2007). Here, the visual image that corresponds to the \(k{{\rm th}}\) letter choice can be written as  
\begin{eqnarray} I_{k,a}(x,y) &\,=&\mathcal {F}^{-1}\left\lbrace NTF(f_x,f_y)OTF(f_x,f_y)\right.\nonumber\\ &&\times \left.\mathcal {F}\left\lbrace O_{k,a}(x,y)\right\rbrace \right\rbrace \end{eqnarray}
(10)
Ok, a(x, y) models the luminance distribution of the stimulus for the \(k{{\rm th}}\) letter choice and the letter gap a. OTF is the optical transfer function, which we define in Equation 10 as either OTF0 (condition 0, without aberration) or OTFB (condition B, with aberration). NTF is the neural transfer function of the eye, which we defined using a generic model that combines different studies from the literature (Hastings, Marsack, Thibos, & Applegate, 2020). \(\mathcal {F}\) denotes the Fourier transform operator. We considered monochromatic (λ = 530 nm) and monocular vision in this work. 
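As a sketch of Equation 10 (not the implementation used here: the NTF below is a generic low-pass placeholder rather than the Hastings et al. (2020) model, and the grid parameters are illustrative), the visual image of a letter can be computed with FFTs:

import numpy as np

def visual_image(stimulus, otf, ntf):
    # Equation 10: filter the stimulus spectrum by OTF x NTF and return to the spatial domain.
    # stimulus, otf, and ntf are 2-D arrays on the same grid, with otf and ntf in FFT layout
    # (zero frequency at index [0, 0]).
    spectrum = np.fft.fft2(stimulus)
    return np.real(np.fft.ifft2(ntf * otf * spectrum))

# Illustrative frequency grid (cycles/degree) and placeholder transfer functions
N, df = 256, 0.5                      # samples per side, frequency step
f = np.fft.fftfreq(N, d=1.0 / (N * df))
FX, FY = np.meshgrid(f, f)
F = np.hypot(FX, FY)
ntf = np.exp(-(F / 30.0)**2)          # placeholder neural transfer function
otf0 = np.ones_like(F)                # placeholder aberration-free OTF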
Numerical method to predict letters lost for theoretical observers
Following pioneering studies of the effect of aberrations on visual acuity (Applegate, Marsack, Ramos, & Sarver, 2003; Applegate, Ballentine, Gross, Sarver, & Sarver, 2003), we aimed at predicting the number of letters lost, due to aberration, on the logMAR chart. As one line of the logMAR chart has five letters and corresponds to a 0.1 variation of the logarithm of letter gap, the number of letters lost equals −50 times the difference (with/without aberration) in logMAR visual acuity. The model of an ideal observer predicts a number L* of letters lost as  
\begin{equation} L^{*}=-50\log _{10}\bigg (\frac{a_B^*}{a_0}\bigg )\hbox{ with } S^*_B(a_B^*)=S^*_0(a_0) \end{equation}
(11)
and the model of a real observer predicts a number L of letters lost as  
\begin{equation} L=-50\log _{10}\bigg (\frac{a_B}{a_0}\bigg )\hbox{ with } S_B(a_B)=S_0(a_0) \end{equation}
(12)
 
We numerically compute data separability as a function of letter gap a by combining Equations 8 and 10 for the ideal observer, as well as Equations 9 and 10 for the real observer. This calculation is performed without aberration (functions \(S^*_0(a)\) and S0(a) for the ideal and real observers, respectively) and with aberration (functions \(S^*_B(a)\) and SB(a) for the ideal and real observers, respectively). Our numerical method requires us to first set, arbitrarily, the gap threshold a0 in the aberration-free condition. We set a0 = 1 arcminute for both observers, and we numerically find the letter gap \(a_B^*\) for which \(S_B^*(a_B^*)=S_0^*(a_0)\) (ideal observer) and aB for which SB(aB) = S0(a0) (real observer). To solve these two equations, we use linear fits of the \(S_B^*\), \(S_0^*\), SB, and S0 functions on a logarithmic scale. 
Figure 1 shows our numerical method in detail. Condition B here corresponds to +0.55 diopters of defocus for a 5-mm pupil diameter, and condition 0 corresponds to a diffraction-limited eye of pupil diameter 5 mm. We set a0 = 1 arcminute and compute \(\log _{10}S_0^*(a_0)=1.24\) using the linear fit of \(\log _{10}S_0^*\) (dashed black line). We use the linear fit of \(\log _{10}S_B^*\) (dashed green line) to solve \(\log _{10}S_B^*(a_B^*)=1.24\) and obtain \(a_B^*=1.89\) arcminutes, which corresponds to L* = −50 × log10(1.89) = −13.8 letters for the ideal observer. We use the same method for the real observer (linear fits on the logarithmic scale are the solid black line (S0) and the solid green line (SB)) and obtain L = −21.9 letters. For each observer, the linear fits are approximately parallel on the logarithmic scale, so that the predicted number of letters lost barely depends on the arbitrary reference value of a0 = 1 arcminute. 
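The procedure of Figure 1 amounts to solving a one-dimensional equation on log-log linear fits. A minimal sketch (assuming the separability values have already been computed at a few letter gaps, for example with the functions in the earlier listings) is:

import numpy as np

def letters_lost(gaps, S0_values, SB_values, a0=1.0):
    # gaps: letter gaps (arcmin) at which data separability was evaluated
    # S0_values, SB_values: separability without / with aberration at those gaps
    log_a = np.log10(gaps)
    p0 = np.polyfit(log_a, np.log10(S0_values), 1)   # linear fit of log10(S_0) vs log10(a)
    pB = np.polyfit(log_a, np.log10(SB_values), 1)   # linear fit of log10(S_B) vs log10(a)
    target = np.polyval(p0, np.log10(a0))            # log10 S_0(a_0)
    log_aB = (target - pB[1]) / pB[0]                # solve log10 S_B(a_B) = log10 S_0(a_0)
    return -50.0 * (log_aB - np.log10(a0))           # L = -50 log10(a_B / a_0)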
Figure 1.
 
Logarithm of data separability of the ideal observer as a function of logarithm of letter gap a, with aberration (\(S_B^*\), open green circle) and without aberration (\(S_0^*\), open black circle). The linear fits of \(S_B^*\) and \(S_0^*\) (dashed green and dashed black lines, respectively), on the logarithmic scale, allow us to find \(a_B^*\) such that \(S^*_B(a_B^*)=S^*_0(a_0)\). We arbitrarily set log10a0 = 0. The same approach is implemented for the real observer (S0: filled black circle and SB: filled green circle), in order to solve SB(aB) = S0(a0). For the model of the ideal observer, letters lost L* = −13.8 (= 50 × the amplitude of the dashed arrow). For the model of the real observer, letters lost L = −21.9 (= 50 × the amplitude of the solid arrow).
Approximations of letters lost, using model-based metrics
Computing data separability for different letter gaps is relatively expensive computationally. It may not be practical for clinical applications of aberrometry, as it requires cumbersome numerical computations of visual images. Here we find the metric of visual image quality that approximates the letters lost for each theoretical model of observer. 
Metric based on the model of an ideal observer
We first detail the computational aspect of S* using Fourier optics formalism. We introduce \(\tilde{\Delta }_{k,a}(f_x,f_y)\), which is the Fourier transform of the difference between the stimulus letter Ok, a and the average (across letters) \(\overline{O}_a\):  
\begin{equation} \tilde{\Delta }_{k,a}(f_x,f_y)=\mathcal {F}\left\lbrace O_{k,a}(x,y)-\overline{O}_a(x,y)\right\rbrace \end{equation}
(13)
S* can be written as  
\begin{equation} S^*(a)=\frac{2}{\sigma \sqrt{K}} \sqrt{\sum_{k=1}^{K} \int\!\!\!\int \big | \mathcal {F}^{-1}\left\lbrace NTF(f_x,f_y)OTF(f_x,f_y)\tilde{\Delta }_{k,a}(f_x,f_y)\right\rbrace \big |^2dxdy} \end{equation}
(14)
Using Parseval’s identity, we obtain the following formulation of S*:  
\begin{equation} S^*(a)=\frac{2}{\sigma \sqrt{K}}\sqrt{\sum_{k=1}^{K} \int\!\!\!\int \big |NTF(f_x,f_y)OTF(f_x,f_y)\tilde{\Delta }_{k,a}(f_x,f_y)\big |^2df_xdf_y} \end{equation}
(15)
 
The scaling property of Fourier transforms gives  
\begin{equation} \tilde{\Delta }_{k,a}(f_x,f_y)=a^2\tilde{\Delta }_{k,1}(af_x,af_y) \end{equation}
(16)
 
In the limit of small Sloan letters, the \(\tilde{\Delta }_{k,1}(af_x,af_y)\) spectrum can be approximated by a constant function \(\tilde{\Delta }_{k}\) that extends over the full domain of spatial frequency (and therefore depends neither on (fx, fy) nor on a). We used this approximation to define a model-based metric, M, which predicts contrast sensitivity changes with optical aberrations (Leroux et al., 2022). We obtain the approximated form of S*, which is a quadratic function of letter gap a:  
\begin{eqnarray} && S^*(a)\approx a^2\frac{2}{\sigma \sqrt{K}}\sqrt{\sum _{k=1}^{K}\left|\tilde{\Delta }_{k}\right|^2}\nonumber\\ &&\times\;\sqrt{ \int\!\!\!\int \left(NTF(f_x,f_y)MTF(f_x,f_y)\right)^2df_xdf_y} \qquad \end{eqnarray}
(17)
 
Using this approximation for \(S^*_0(a_0)\) and \(S^*_B(a_B^*)\), the \(S^*_0(a_0)=S^*_B(a_B^*)\) equality of Equation 11 can be written as  
\begin{eqnarray}&& a_0^2\sqrt{ \int\!\!\!\int \left(NTF(f_x,f_y)MTF_0(f_x,f_y)\right)^2df_xdf_y}\nonumber\\ &&\approx a_B^{*2}\sqrt{ \int\!\!\!\int \left(NTF(f_x,f_y)MTF_B(f_x,f_y)\right)^2df_xdf_y} \quad \end{eqnarray}
(18)
 
The letters lost \(L^*=-50\log _{10}(a_B^*/a_0)\) (Equation 11) can therefore be approximated as  
\begin{eqnarray} && L^*\approx \frac{50}{4}\log _{10}\nonumber\\ &&\left( \frac{ \int\!\!\!\int \left(NTF(f_x,f_y)MTF_B(f_x,f_y)\right)^2df_xdf_y}{ \int\!\!\!\int \left(NTF(f_x,f_y)MTF_0(f_x,f_y)\right)^2df_xdf_y}\right) \qquad \end{eqnarray}
(19)
The argument of the logarithm in Equation 19 is the square of the M metric, which we defined to predict ratios (with/without aberration) of contrast sensitivity measurements from the model of the ideal observer (Leroux et al., 2022; Leroux et al., 2023):  
\begin{eqnarray} M=\left( \frac{ \int\!\!\!\int \left(NTF(f_x,f_y)MTF_B(f_x,f_y)\right)^2df_xdf_y}{ \int\!\!\!\int \left(NTF(f_x,f_y)MTF_0(f_x,f_y)\right)^2df_xdf_y}\right)^{1/2}\; \end{eqnarray}
(20)
 
Hence, we obtain the approximation of letters lost for the ideal observer as a metric of visual image quality:  
\begin{equation} L^*\approx 25\log _{10}(M) \end{equation}
(21)
M is comparable to the VSMTF metric (Thibos et al., 2004), in the sense that it is computed as an integral that combines the MTF and the NTF. However, the power of 2 in Equation 20 is specific to M and does not appear in the definition of VSMTF. The consequence of this power is to give more weight to the spatial frequencies for which the MTF is high (i.e., the lower spatial frequencies). 
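In practice, the metric of Equations 20 and 21 reduces to two weighted sums over a discrete spatial-frequency grid. A short sketch (the grid and the transfer functions are assumed to be supplied, for example as in the earlier listing; the frequency cell area cancels in the ratio) is:

import numpy as np

def metric_M(ntf, mtf0, mtfB):
    # Equation 20: ratio of the integrals of (NTF x MTF)^2 with and without aberration
    num = np.sum((ntf * mtfB)**2)
    den = np.sum((ntf * mtf0)**2)
    return np.sqrt(num / den)

def letters_lost_ideal(ntf, mtf0, mtfB):
    # Equation 21: approximate letters lost for the ideal observer
    return 25.0 * np.log10(metric_M(ntf, mtf0, mtfB))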
Metric based on the model of the real observer
For the model of a real observer, we insert \(\tilde{\Delta }_{k,a}(f_x,f_y)\) in Equation 9 to write data separability S as  
\begin{equation} S(a)=\frac{2}{\sigma \sqrt{K}}\sqrt{\sum _{k=1}^{K} \frac{\big ( \int\!\!\!\int \mathcal {F}^{-1}\left\lbrace NTF(f_x,f_y)OTF(f_x,f_y)\tilde{\Delta }_{k,a}(f_x,f_y)\right\rbrace \, \mathcal {F}^{-1}\left\lbrace \tilde{\Delta }_{k,a}(f_x,f_y) \right\rbrace dxdy\big )^2}{ \int\!\!\!\int \big ( \mathcal {F}^{-1}\left\lbrace \tilde{\Delta }_{k,a}(f_x,f_y)\right\rbrace \big )^2 dxdy} } \end{equation}
(22)
 
We use Parseval’s theorem for both the numerator and denominator in Equation 22. Noting that \(\mathcal {F}^{-1}\left\lbrace \tilde{\Delta }_{k,a}(f_x,f_y)\right\rbrace =O_{k,a}(x,y)-\overline{O}_a(x,y)\) is real-valued, we obtain  
\begin{equation} S(a)=\frac{2}{\sigma \sqrt{K}}\sqrt{\sum _{k=1}^{K} \frac{\big ( \int\!\!\!\int NTF(f_x,f_y)OTF(f_x,f_y)\left|\tilde{\Delta }_{k,a}(f_x,f_y)\right|^2 df_xdf_y\big )^2}{ \int\!\!\!\int \left|\tilde{\Delta }_{k,a}(f_x,f_y)\right|^2 df_xdf_y} } \end{equation}
(23)
 
For the numerator of Equation 23, we use the small letter approximation that we used to approximate the ideal observer with the 25log10(M) metric (Equations 15 to 17). We obtain  
\begin{eqnarray} && S(a)\approx a^4\frac{2}{\sigma \sqrt{K}}\sqrt{\sum _{k=1}^{K} \frac{|\tilde{\Delta }_k|^4} {{ \int\!\!\!\int \left|\tilde{\Delta }_{k,a}(f_x,f_y)\right|^2 df_xdf_y} }}\nonumber\\ &&\quad\times \left| \int\!\!\!\int NTF(f_x,f_y)OTF(f_x,f_y) df_xdf_y \right| \qquad \end{eqnarray}
(24)
 
The denominator in Equation 24 still depends on a, so it needs to be rearranged. It is not well approximated by the small letter approximation, which would here diverge because there is no weighting function (other than \(\left|\tilde{\Delta }_{k,a}(f_x,f_y)\right|^2\)) in the integral (over \(\mathbb {R}^2\)). We make use of Equation 16, and with a change of variables, we find 
\( \int\!\!\!\int \left|\tilde{\Delta }_{k,a}(f_x,f_y)\right|^2 df_xdf_y=a^2\int\!\!\!\int \left|\tilde{\Delta }_{k,1}(f_x,f_y)\right|^2 df_xdf_y\)
We obtain the approximated form of S(a), which is a cubic function of letter gap a:  
\begin{eqnarray} && S(a)\approx a^3\frac{2}{\sigma \sqrt{K}}\sqrt{\sum _{k=1}^{K} \frac{|\tilde{\Delta }_k|^4}{{ \int\!\!\!\int \left| \tilde{\Delta }_{k,1}(f_x,f_y)\right|^2 df_xdf_y} }}\nonumber\\ &&\quad \times \left| \int\!\!\!\int NTF(f_x,f_y)OTF(f_x,f_y) df_xdf_y \right| \qquad \end{eqnarray}
(25)
 
Using this approximation for S0(a0) and SB(aB), the S0(a0) = SB(aB) equality (Equation 12) can be written as:  
\begin{eqnarray}&& a_0^3 \left| \int\!\!\!\int NTF(f_x,f_y)OTF_0(f_x,f_y) df_xdf_y \right|\approx\nonumber\\ &&\quad a_B^3 \left| \int\!\!\!\int NTF(f_x,f_y)OTF_B(f_x,f_y) df_xdf_y \right|\qquad \end{eqnarray}
(26)
 
The letters lost L = −50log10(aB/a0) (Equation 12) can therefore be approximated as  
\begin{equation} L\approx \frac{50}{3}\log _{10}\left( \frac{ \left| \int\!\!\!\int NTF(f_x,f_y)OTF_B(f_x,f_y)df_xdf_y \right|}{ \left| \int\!\!\!\int NTF(f_x,f_y)OTF_0(f_x,f_y)df_xdf_y \right|}\right) \end{equation}
(27)
The argument of the logarithm in Equation 27 is the modulus of the visual Strehl computed with the optical transfer function (VSOTF) metric, as originally introduced by Thibos et al. (2004):  
\begin{equation} VSOTF=\frac{ \int\!\!\!\int NTF(f_x,f_y)OTF_B(f_x,f_y)df_xdf_y }{ \int\!\!\!\int NTF(f_x,f_y)OTF_0(f_x,f_y)df_xdf_y } \end{equation}
(28)
 
Hence, we obtain the approximation of letters lost for the real observer as a metric of visual image quality:  
\begin{equation} L\approx \frac{50}{3}\log _{10}|VSOTF| \end{equation}
(29)
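The corresponding computation for the real observer (Equations 28 and 29) is just as short; in this sketch the OTF arrays may be complex, and the same discrete frequency grid as above is assumed:

import numpy as np

def metric_vsotf(ntf, otf0, otfB):
    # Equation 28: aberrated over aberration-free integrals of NTF x OTF
    return np.sum(ntf * otfB) / np.sum(ntf * otf0)

def letters_lost_real(ntf, otf0, otfB):
    # Equation 29: approximate letters lost for the real observer
    return (50.0 / 3.0) * np.log10(np.abs(metric_vsotf(ntf, otf0, otfB)))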
 
Numerical examples
We show in Figure 2 the accuracy of approximating the model of letters lost with a metric of visual image quality, for the ideal observer (Equation 21: open circle for L* and dashed line for 25log10(M)) and for the real observer (Equation 29: filled circle for L and solid line for 50/3log10|VSOTF|). Black and green curves correspond to through-focus calculations with an additional fixed amount of Zernike spherical aberration \(z_4^0=0.1\) μm and \(z_4^0=0.2\) μm, respectively. We have used a 5-mm pupil size for the conditions with aberration (index B in the model equations) and without aberration (index 0 in the model equations). The overall root mean square error between the complete model and its metric is 0.52 letters for the ideal observer (25log10(M) − L*) and 1.02 letters for the real observer (50/3log10|VSOTF| − L). 
Figure 2.
 
Through-focus calculations of letters lost, for two fixed amplitudes of spherical aberration. Black curves: \(z_4^0=0.1\) μm. Green curves: \(z_4^0=0.2\) μm. L* (open circles) and L (filled circles) are the predictions of the complete model of the ideal and real observers, respectively. The corresponding approximations, as metrics of visual image quality, are 25log10(M) (dashed lines) and 50/3log10|VSOTF| (solid lines), respectively.
Methods
Stimulus display
We have used the computational approach to measure the effect of optical aberrations on visual acuity (Burton & Haig, 1984; Applegate, Marsack et al., 2003). The displayed letters were convolved with a numerical point spread function, which we defined with Zernike aberrations for a 5-mm pupil diameter at the 530-nm wavelength. Four experiments consisted of varying \(z_2^0\) Zernike defocus, with an additional fixed amount of Zernike spherical aberration (\(z_4^0=0,0.1,0.2,0.3\) μm). A fifth experiment consisted of varying \(z_3^{-1}\) Zernike coma alone. Each experiment consisted of seven charts with varying amplitude of defocus or coma, plus one control chart (without aberration). The eight charts appeared in a randomized order to limit the effect of blur adaptation on the measurements (Artal et al., 2004; Sawides et al., 2010; de Gracia, Dorronsoro, Marin, Hernandez, & Marcos, 2011; Ohlendorf, Tabernero, & Schaeffel, 2011; Sawides, de Gracia, Dorronsoro, Webster, & Marcos, 2011). Each subject only performed two experiments (2 × 8 charts) in order to limit learning and fatigue effects. We have used a custom-made program using MATLAB functions from the Psychophysics Toolbox (Brainard, 1997). 
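For readers who wish to reproduce this kind of stimulus, a minimal sketch of the computational approach (not the MATLAB program used in the study; the wavelength, grid size, Zernike normalization, and pure-defocus wavefront are illustrative assumptions) is:

import numpy as np

lam = 530e-9     # wavelength (m)
pupil_d = 5e-3   # pupil diameter (m); with lam and N it sets the angular sampling of the PSF
N = 512          # pupil grid size

x = np.linspace(-1.0, 1.0, N)                 # pupil coordinates normalized to the pupil radius
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)
pupil = (R <= 1.0).astype(float)

z20 = 0.2e-6                                  # illustrative Zernike defocus coefficient (m)
W = z20 * np.sqrt(3.0) * (2.0 * R**2 - 1.0)   # defocus wavefront (ANSI normalization)
P = pupil * np.exp(2j * np.pi * W / lam)      # generalized pupil function

psf = np.abs(np.fft.fftshift(np.fft.fft2(P)))**2
psf /= psf.sum()                              # normalized point spread function

def blur_chart(chart):
    # Convolve an N x N chart image with the PSF via FFTs; the angular sampling of the
    # chart must match the sampling implied by the pupil grid, pupil diameter, and wavelength.
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(chart) * otf))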
We have used the standard set of letters for testing visual acuity in the United States (D, H, N, V, R, Z, S, K, O, C) (Sloan, 1959; Pelli, Robson, & Wilkins, 1988; Ricci, Cedrone, & Cerulli, 1998). We have used black Sloan letters on a green background, with the maximum contrast permitted by the 8-bit green channel. The spectrum of the green channel was measured with a spectrometer (HR2000+, Ocean Optics) as Gaussian shaped (center at 530 nm, 43 nm full width at half maximum) and was estimated to be sufficiently narrow to neglect chromatic aberrations. Subjects performed the experiments with ambient light, which we measured with a calibrated luxmeter (RS Pro TES-1332-G, RS Components). This light corresponded to 337 Td retinal illuminance, for the average 3.1-mm pupil of the experiment. We also calculated the retinal illuminance of the test chart alone, which was 706 Td for the average 3.1-mm pupil diameter. 
Subjects
Twenty informed, yet untrained, subjects took part in the study. Subjects wore their current refractive correction and monocularly looked at the test screen (Dell Ultrasharp U2720Q) with their dominant eye. We used the Porta test of sighting dominance. We measured each subject's pupil diameter with a ruler under the conditions of the test. The average age was 28 years (± 7 years standard deviation), and the average pupil diameter was 3.1 mm (± 0.7 mm standard deviation). The average spherical equivalent correction was −0.14 diopters (± 1.1 diopters standard deviation), and the average cylindrical correction was −0.11 diopters (± 0.25 diopters standard deviation). Prior informed consent was obtained from the subjects. This study was reviewed by an independent ethical review board and conforms to the principles and applicable guidelines for the protection of human subjects in biomedical research. The experiment was performed according to the Declaration of Helsinki on human experimentation. 
Measurements of letters lost
To measure letters lost, we used the same termination rule as Applegate, Ballentine et al. (2003): We counted letters read on a logMAR chart until five errors occurred cumulatively in the chart. Because of the size of the screen, we only displayed the last nine lines of the logMAR chart, which corresponded to visual acuity ranging from 20/63 to 20/10. 
Measurements of visual acuity in the control condition
We measured visual acuity with the aberration-free chart. We assigned a score of 0.02 logMAR for each letter read until five errors occurred cumulatively in the chart. Because our chart started at the 20/63 line (0.5 logMAR), the visual acuity was estimated as 0.6 − 0.02 × n0 logMAR when n0 letters were read. For each subject, we reported the average of two measurements. 
Data analysis
For each aberration level of each experiment, we averaged the number of letters lost across eight different subjects. As mentioned above, each subject completed only two experiments, so we did not have all 20 subjects per experiment. The predictive performance of models and metrics was evaluated with respect to the intersubject averaged measurements to reduce the effect of measurement noise, which may have been high because subjects were not trained to the task of the experiment. Moreover, models and metrics were not customized to the subject’s visual system and did not aim at describing intersubject differences. We quantified the performance of models and metrics by computing the root mean square value ϵ of the (average measurement − model) difference, and we also computed the (α, β) parameters of the \((\hbox{average measurement}=\alpha \times {\rm model} +\beta )\) linear fit. 
Model predictions
We computed the 25log10(M) and 50/3log10|VSOTF| metrics using Equation 20 and Equation 28, respectively. For both metrics, we defined OTFB as the product of the transfer function that we used to numerically blur the displayed Sloan letters for the experiment, times another transfer function that modeled the process of viewing the Sloan letters with a supposedly diffraction-limited eye of pupil diameter 3.1 mm (the mean pupil size). This latter transfer function alone also defined OTF0 in the denominator of Equation 20 and Equation 28. The NTF was computed with the code given by Hastings et al. (2020), for the mean age of the subjects (28 years) and the retinal illuminance of our experiment (706 Td). In this study, the model does not take account of the subject’s optical aberrations, as the B condition only refers to the numerical blur. This computation of the two metrics approximately matches our experiment, which combines numerical blur (over a 5-mm pupil) and optical blur (over a 3.1-mm pupil on average), assuming that the subject’s optics are diffraction-limited for a 3.1-mm pupil with their current refraction. While this assumption is certainly optimistic (Hastings et al., 2018), we rely on the relative nature of the letters lost measurements to reduce the impact of the eye’s wavefront errors after correction (Applegate, Marsack et al., 2003). The computation of the L* (Equation 11 and Equation 15) and L (Equation 12 and Equation 23) models of letters lost was performed using the same transfer functions. 
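A sketch of how these transfer functions could be assembled (the two OTF arrays are placeholders to be supplied on a common frequency grid; letters_lost_ideal and letters_lost_real refer to the listings above):

import numpy as np

def compose_otfs(otf_numerical_blur_5mm, otf_dl_eye_31mm):
    # OTF_B: numerical blur of the chart (5-mm pupil) viewed through a diffraction-limited
    # 3.1-mm eye; OTF_0: the diffraction-limited 3.1-mm eye alone.
    otfB = otf_numerical_blur_5mm * otf_dl_eye_31mm
    otf0 = otf_dl_eye_31mm
    return otf0, otfB

# Example usage (with ntf on the same grid):
# otf0, otfB = compose_otfs(otf_blur, otf_eye)
# L_real = letters_lost_real(ntf, otf0, otfB)                    # 50/3 log10|VSOTF|
# L_ideal = letters_lost_ideal(ntf, np.abs(otf0), np.abs(otfB))  # 25 log10(M)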
Results
In the control condition (aberration free), the visual acuity was −0.21 ± 0.05 logMAR (mean ± standard deviation). All subjects had better than 20/20 visual acuity. The logMAR values were in the (− 0.27, −0.12) range. 
Figure 3 compares the two metrics (25log10(M): open circle; 50/3log10|VSOTF|: filled circle) to experimental measurements of letters lost. We show as error bars in Figures 3A–E the average ± standard deviation (across eight subjects) of the measured letters lost, as a function of the varying amplitude of aberration. Figures 3A–D correspond to the experiments with varying Zernike defocus \(z_2^0\) and fixed spherical aberration: \(z_4^0=0\) (A), \(z_4^0=0.1~\mu\)m (B), \(z_4^0=0.2~\mu\)m (C), and \(z_4^0=0.3~\mu\)m (D). Figure 3E corresponds to the experiment with varying coma \(z_3^{-1}\). The corresponding scatter graphs of all (average measurement, metric) pairs are shown in Figure 3F. The dashed and solid lines show the corresponding linear fits for 25log10(M) and 50/3log10|VSOTF|, respectively. 
Figure 3.
 
Comparison of the two model-based metrics with experimental measurements of letters lost. Error bars show the average ± standard deviation (across eight subjects) of the measured letters lost as a function of varying defocus \(z_2^0\) and fixed spherical aberration: \(z_4^0=0\) (A), \(z_4^0=0.1~\mu\)m (B), \(z_4^0=0.2~\mu\)m (C), \(z_4^0=0.3~\mu\)m (D), and as a function of varying coma \(z_3^{-1}\) alone (E). The two model-based metrics are shown for each condition: 25log10(M) (open circle) and 50/3log10|VSOTF| (filled circle). (F) The corresponding scatter graphs of all (average measurement, metric) pairs. Dashed and solid lines show the corresponding linear fits for 25log10(M) and 50/3log10|VSOTF|, respectively. Fit parameters are given in Table 1.
The parameters of the linear fit \((\hbox{average measurement}=\alpha \times {\rm model} +\beta )\) are given in Table 1, for each model and metric. The highest coefficient of determination is r2 = 0.92 for the 50/3log10|VSOTF| metric. The second highest coefficient of determination is r2 = 0.91 for the L model. The best agreement of the fit parameters with the y = x perfect agreement line is for the L model (α = 0.94, close to unity, and β = 0.24 letters). The second best agreement of the fit parameters with the y = x perfect agreement line is for the 50/3log10|VSOTF| metric (α = 0.91 and β = 0.38 letters). 
Table 1.
 
Comparison of models to measurements. ϵ is the root mean square value of the (average measurement − model) difference, for each model (L*, L, 25log10(M), 50/3log10|VSOTF|). We give the coefficients of the (\(\hbox{average measurement}=\alpha \times {\rm model} +\beta\)) linear fit. r2 is the coefficient of determination of the fit, which is shown in Figure 3F for the 25log10(M) metric and the 50/3log10|VSOTF| metric.
The overall root mean square value ϵ of the (average measurement − model) difference is given in Table 1 for each model. The lowest value is ϵ = 2.26 letters for the L model. The second lowest value is ϵ = 2.71 letters for the corresponding model-based metric, 50/3log10|VSOTF|. 
Discussion
The main contribution of this work is to relate two existing metrics of visual image quality to the underlying theoretical models of linear observers. The two metrics predict the number of letters lost (a negative number) on the logMAR chart due to optical aberrations. The benefit of our theoretical approach is twofold. First, we aim at predicting letters lost (or acuity changes) without needing a posteriori scale or offset. Second, it provides an interpretation of each metric in terms of the subject's strategy to identify optotypes. 
Predicting letters lost (or acuity changes) without needing a posteriori scale or offset
In agreement with studies that correlate the VSOTF metric with visual acuity measurements (Marsack et al., 2004; Cheng et al., 2004; Ravikumar et al., 2013), we find that the 50/3log10|VSOTF| metric has a high coefficient of determination (r2 = 0.92 in Table 1). Our prediction of acuity changes makes it possible to go beyond mere analysis of the r2 coefficient, by comparing the parameters of the (measurement, model) linear fit with the y = x perfect agreement line. We find that the 50/3log10|VSOTF| metric has a slope near unity (α = 0.91) and an intercept near zero (β = 0.38 letters; see Table 1). Metrics of letters lost can be converted to predict acuity changes after multiplication by a factor of −1/50 (one lost line is −5 letters lost, or +0.1 logMAR). Hence, we predict logMAR acuity changes as −1/3log10|VSOTF|. This prediction approximately agrees with the (logMAR acuity changes, log10|VSOTF|) scatter graph reported by Ravikumar et al. (2013, Figure 5) for a set of normal wavefront errors: Their linear fit is \({\rm logMAR} =-0.190 \log _{10}|VSOTF| +0.0420\). When comparing this linear fit to our theoretical prediction (−1/3log10|VSOTF|), the mean absolute value of the difference in predicted visual acuity is 0.083 logMAR for the aberrations studied in our work. 
Interpretation of each metric as a subject’s strategy to identify optotypes
In this work, the prediction with the 25log10(M) metric is poorer than with the 50/3log10|VSOTF| metric, both in terms of coefficient of determination (r2 = 0.64 vs. r2 = 0.92) and parameters of the linear fit that differ from the y = x perfect agreement (α = 0.76 and β = −2.69 letters vs. α = 0.91 and β = 0.38 letters in Table 1). We hypothesize that the better prediction with the 50/3log10|VSOTF| metric originates from a more suitable model of a theoretical observer. As also shown in Table 1, the model of a real observer (L) better agrees with measurements than the ideal observer (L*). We recall that the real observer essentially projects visual images on a set of unaberrated letters, while the ideal observer uses aberrated letters as templates. Using aberrated images is an optimal strategy because they properly represent the observed visual images. Indeed, the ideal observer sets the upper bound of visual acuity for given experimental conditions (aberration, noise level, letter contrast). The model of the ideal observer predicts the optimal visual acuity, both with and without aberrations. Counterintuitively, this model can predict higher acuity loss with optical aberrations than the real observer. This is the case in 40% of the aberration conditions analyzed in Figure 3. Similarly, the −25log10(M) metric, which is based on the model of an ideal observer, can predict more letters lost than the −50/3log10|VSOTF| metric. Watson and Ahumada (2008) used Monte Carlo simulations of acuity testing to compare the data agreement of two correlation-maximizing observers: the observer that uses unaberrated letters as a set of templates (XL observer in their Table 2) and the observer that uses aberrated letters (XA in their Table 2). They obtained a better prediction of absolute visual acuity (lower root mean square error) with aberrated templates when the noise level of the model maximized data agreement, but their results also show that unaberrated templates can give better prediction for other levels of noise (see their Figure 6). In that situation, the Watson and Ahumada model agrees with our results, as we obtain better prediction with unaberrated templates (L and 50/3log10|VSOTF|) than with aberrated templates (L* and 25log10(M)). The predicted acuity changes do not depend on the noise level σ, which cancels out when writing that data separability at threshold remains unchanged when aberrations change (Equations 18 and 26 for the ideal and real observers, respectively). Like Watson and Ahumada, we quantify the agreement between a model and measurements with the root mean square value of the (measurement − model) difference (ϵ, see Table 1). With ϵ = 2.71 letters for the 50/3log10|VSOTF| metric, the root mean square difference corresponds to around 0.05 logMAR acuity, which is similar to the errors given by Watson and Ahumada (2008) in their Figure 6. 
Role of the phase transfer function
In this study, better prediction of acuity measurements with the 50/3log10|VSOTF| metric than with the 25log10(M) metric corroborates experimental evidence that the phase transfer function of the eye impacts visual acuity measurements (Piotrowski & Campbell, 1982; Sarver & Applegate, 2004; Ravikumar, Bradley, & Thibos, 2010), as VSOTF depends on the OTF while M only depends on its modulus (the MTF). 
Comparison with contrast sensitivity
In our previous study (Leroux et al., 2023), we reported on the prediction of contrast sensitivity measurements with similar metrics and found different results: higher r2, and better agreement with the y = x line, for M than for VSOTF. This result favored the model of an ideal observer for contrast sensitivity measurements, unlike the present study of visual acuity. We think that this difference can be explained by the specific effect of aberrations during each visual test. During a contrast sensitivity measurement, optical blur remains the same for optotypes of fixed size and varying contrast. Hence, the model of an ideal observer only requires one set of aberrated letters to classify letters that all have the same size. During a visual acuity measurement, the model of an ideal observer requires size-dependent sets of aberrated templates. Moreover, the effect of optical blur is exacerbated for small letters. Most subjects probably lose track during this “heavy computational task,” and their visual performance is not well modeled by an ideal observer. The discrepancy between human subjects and the ideal observer is probably specific to our study of the effect of aberrations on visual acuity. The comparison of a subject’s visual performance with the performance of the ideal observer is usually represented as a ratio named efficiency (Pelli, Burns, Farell, & Moore-Page, 2006; Watson & Ahumada, 2012; Watson & Ahumada, 2015). In vision science (Geisler, 2011) and for task-based assessment of image quality (Barrett & Myers, 2003), the model of an ideal observer is successfully used in many experimental studies. 
Experiments with/without the subject’s own natural aberrations
To account for the effect of optical aberrations, the model of an ideal observer is more realistic for contrast sensitivity than for visual acuity. However, for specific studies of visual acuity, it is conceivable that subjects behave like an ideal observer. For example, studies of the effect of the subject’s natural aberrations on visual acuity with adaptive optics correction (Marcos, Sawides, Gambra, & Dorronsoro, 2008; Li et al., 2009; Legras & Rouger, 2008) may favor the model of the ideal observer that uses aberrated templates and the 25log10(M) metric of visual image quality. In the present study, subjects were not familiar with optical aberrations that were not their natural aberrations. This experimental condition may favor the model of the real observer and the 50/3log10|VSOTF| metric. 
Choice of aberrations
The approximation of theoretical observers with metrics of visual acuity (Equations 21 and 29) is the central result of our work. We have experimentally illustrated our theory with a study of aberrations that partly resembles the through-focus study of Cheng et al. (2004), which later provided the experimental data for the landmark paper of Watson and Ahumada (2008). Future work includes testing our metrics on a wider range of aberrations. 
Conclusions
In this work, we demonstrate that the VSOTF and the M metrics relate to two models of theoretical observers that classify letters of an acuity chart using, as templates, their unaberrated and aberrated images, respectively. Our approach scales the metrics to predict changes in visual acuity due to optical aberrations, without a posteriori scale or offset. We have illustrated this theory with experiments, in which we numerically introduced combinations of defocus and spherical aberration, and pure coma. We obtained better prediction of letters lost with the 50/3log10|VSOTF| metric. Here we have used the numerical approach that directly introduces optical aberrations by convolution of the displayed images of optotypes, and the metrics can be adapted with the appropriate optical transfer functions that correspond to the actual experimental conditions. We also expect that clinical studies can benefit from using our metrics to relate aberration measurements to visual acuity changes. 
Acknowledgments
Commercial relationships: C.-E. Leroux, None; C. Leahy, Carl Zeiss Meditec, Inc. (E); J. Dupuis, None; C. Fontvieille, None; F. Bardin, None. 
Corresponding author: Charles Leroux. 
Email: charles.leroux@unimes.fr. 
Address: Laboratoire MIPA, Université de Nîmes, Site des Carmes, Nîmes 30000, France. 
References
Applegate, R. A., Ballentine, C., Gross, H., Sarver, E. J., & Sarver, C. A. (2003). Visual acuity as a function of Zernike mode and level of root mean square error. Optometry and Vision Science, 80(2), 97–105, https://doi.org/10.1097/00006324-200302000-00005. [CrossRef]
Applegate, R. A., Marsack, J. D., Ramos, R., & Sarver, E. J. (2003). Interaction between aberrations to improve or reduce visual performance. Journal of Cataract and Refractive Surgery, 29(8), 1487–1495, https://doi.org/10.1016/s0886-3350(03)00334-1. [CrossRef] [PubMed]
Applegate, R. A., Marsack, J. D., & Thibos, L. N. (2006). Metrics of retinal image quality predict visual performance in eyes with 20/17 or better visual acuity. Optometry and Vision Science, 83(9), 635, https://doi.org/10.1097/01.opx.0000232842.60932.af. [CrossRef]
Artal, P., Chen, L., Fernández, E. J., Singer, B., Manzanera, S., & Williams, D. R. (2004). Neural compensation for the eye's optical aberrations. Journal of Vision, 4(4), 4, https://doi.org/10.1167/4.4.4. [CrossRef]
Barrett, H. H., & Myers, K. J. (2003). Foundations of image science. Hoboken, NJ: John Wiley & Sons.
Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10(4), 433–436. [CrossRef] [PubMed]
Buehren, T., & Collins, M. J. (2006). Accommodation stimulus–response function and retinal image quality. Vision Research, 46(10), 1633–1645, https://doi.org/10.1016/j.visres.2005.06.009. [CrossRef] [PubMed]
Bühren, J., Pesudovs, K., Martin, T., Strenger, A., Yoon, G., & Kohnen, T. (2009). Comparison of optical quality metrics to predict subjective quality of vision after laser in situ keratomileusis. Journal of Cataract and Refractive Surgery, 35(5), 846–855, https://doi.org/10.1016/j.jcrs.2008.12.039. [CrossRef] [PubMed]
Burton, G., & Haig, N. (1984). Effects of the seidel aberrations on visual target discrimination. Journal of the Optical Society of America A, 1(4), 373–385, https://doi.org/10.1364/JOSAA.1.000373. [CrossRef]
Chen, L., Singer, B., Guirao, A., Porter, J., & Williams, D. R. (2005). Image metrics for predicting subjective image quality. Optometry and Vision Science, 82(5), 358–369, https://doi.org/10.1097/01.opx.0000162647.80768.7f. [CrossRef]
Cheng, X., Bradley, A., & Thibos, L. N. (2004). Predicting subjective judgment of best focus with objective image quality metrics. Journal of Vision, 4(4), 7, https://doi.org/10.1167/4.4.7. [CrossRef]
Cheng, X., Thibos, L. N., & Bradley, A. (2003). Estimating visual quality from wavefront aberration measurements. Journal of Refractive Surgery, 19(5), S579–S584, https://doi.org/10.3928/1081-597X-20030901-14. [CrossRef]
Dalimier, E. (2007). Adaptive optics correction of ocular higher-order aberrations and the effects on functional vision (Doctoral dissertation). National University of Ireland, Galway, Ireland.
Dalimier, E., & Dainty, C. (2008). Use of a customized vision model to analyze the effects of higher-order ocular aberrations and neural filtering on contrast threshold performance. Journal of the Optical Society of America A, 25(8), 2078–2087, https://doi.org/10.1364/josaa.25.002078. [CrossRef]
Dalimier, E., Dainty, C., & Barbur, J. L. (2008). Effects of higher-order aberrations on contrast acuity as a function of light level. Journal of Modern Optics, 55(4–5), 791–803, https://doi.org/10.1080/09500340701469641.
Dalimier, E., Pailos, E., Rivera, R., & Navarro, R. (2009). Experimental validation of a Bayesian model of visual acuity. Journal of Vision, 9(7), 12, https://doi.org/10.1167/9.7.12. [CrossRef] [PubMed]
de Gracia, P., Dorronsoro, C., Marin, G., Hernandez, M., & Marcos, S. (2011). Visual acuity under combined astigmatism and coma: Optical and neural adaptation effects. Journal of Vision, 11(2), 5, https://doi.org/10.1167/11.2.5. [CrossRef]
Geisler, W. S. (2011). Contributions of ideal observer theory to vision research. Vision Research, 51(7), 771–781, https://doi.org/10.1016/j.visres.2010.09.027. [CrossRef] [PubMed]
Guirao, A., & Williams, D. R. (2003). A method to predict refractive errors from wave aberration data. Optometry and Vision Science, 80(1), 36–42, https://doi.org/10.1097/00006324-200301000-00006. [CrossRef]
Hastings, G. D., Marsack, J. D., Nguyen, L. C., Cheng, H., & Applegate, R. A. (2017). Is an objective refraction optimised using the visual Strehl ratio better than a subjective refraction? Ophthalmic and Physiological Optics, 37(3), 317–325, https://doi.org/10.1111/opo.12363. [CrossRef]
Hastings, G. D., Marsack, J. D., Thibos, L. N., & Applegate, R. A. (2018). Normative best-corrected values of the visual image quality metric VSX as a function of age and pupil size. Journal of the Optical Society of America A, 35(5), 732–739, https://doi.org/10.1364/JOSAA.35.000732. [CrossRef]
Hastings, G. D., Marsack, J. D., Thibos, L. N., & Applegate, R. A. (2020). Combining optical and neural components in physiological visual image quality metrics as functions of luminance and age. Journal of Vision, 20(7), 20, https://doi.org/10.1167/JOV.20.7.20. [CrossRef] [PubMed]
Kilintari, M., Pallikaris, A., Tsiklis, N., & Ginis, H. S. (2010). Evaluation of image quality metrics for the prediction of subjective best focus. Optometry and Vision Science, 87(3), 183–189, https://doi.org/10.1097/OPX.0b013e3181cdde32. [CrossRef]
Legras, R., & Rouger, H. (2008). Calculations and measurements of the visual benefit of correcting the higher-order aberrations using adaptive optics technology. Journal of Optometry, 1(1), 22–29, https://doi.org/10.3921/joptom.2008.22. [CrossRef]
Leroux, C., Fontvieille, C., Leahy, C., Marc, I., & Bardin, F. (2022). Predicting the effects of defocus blur on contrast sensitivity with a model-based metric of retinal image quality. Journal of the Optical Society of America A, 39(10), 1866–1873, https://doi.org/10.1364/JOSAA.464034. [CrossRef]
Leroux, C., Ouadi, S., Leahy, C., Marc, I., Fontvieille, C., & Bardin, F. (2023). Absolute prediction of relative changes in contrast sensitivity with aberrations using a single metric of retinal image quality. Biomedical Optics Express, 14(7), 3203–3212, https://doi.org/10.1364/BOE.487217. [CrossRef] [PubMed]
Li, S., Xiong, Y., Li, J., Wang, N., Dai, Y., Xue, L., … He, J. C. (2009). Effects of monochromatic aberration on visual acuity using adaptive optics. Optometry and Vision Science, 86(7), 868–874, https://doi.org/10.1097/OPX.0b013e3181adfdff. [CrossRef]
López-Gil, N., Martin, J., Liu, T., Bradley, A., Díaz-Muñoz, D., & Thibos, L. N. (2013). Retinal image quality during accommodation. Ophthalmic and Physiological Optics, 33(4), 497–507, https://doi.org/10.1111/opo.12075. [CrossRef]
Majaj, N. J., Pelli, D. G., Kurshan, P., & Palomares, M. (2002). The role of spatial frequency channels in letter identification. Vision Research, 42(9), 1165–1184, https://doi.org/10.1016/S0042-6989(02)00045-7. [CrossRef] [PubMed]
Marcos, S., Sawides, L., Gambra, E., & Dorronsoro, C. (2008). Influence of adaptive-optics ocular aberration correction on visual acuity at different luminances and contrast polarities. Journal of Vision, 8(13), 1, https://doi.org/10.1167/8.13.1. [CrossRef] [PubMed]
Marsack, J. D., Thibos, L. N., & Applegate, R. A. (2004). Metrics of optical quality derived from wave aberrations predict visual performance. Journal of Vision, 4(4), 8, https://doi.org/10.1167/4.4.8. [CrossRef]
Martin, J., Vasudevan, B., Himebaugh, N., Bradley, A., & Thibos, L. (2011). Unbiased estimation of refractive state of aberrated eyes. Vision Research, 51(17), 1932–1940, https://doi.org/10.1016/j.visres.2011.07.006. [CrossRef] [PubMed]
Myers, K. J., & Barrett, H. H. (1987). Addition of a channel mechanism to the ideal observer model. Journal of the Optical Society of America A, 4(12), 2447–2457, https://doi.org/10.1364/JOSAA.4.002447. [CrossRef]
Nestares, O., Navarro, R., & Antona, B. (2003). Bayesian model of Snellen visual acuity. Journal of the Optical Society of America A, 20(7), 1371–1381, https://doi.org/10.1364/JOSAA.20.001371. [CrossRef]
Ohlendorf, A., Tabernero, J., & Schaeffel, F. (2011). Neuronal adaptation to simulated and optically-induced astigmatic defocus. Vision Research, 51(6), 529–534, https://doi.org/10.1016/j.visres.2011.01.010. [CrossRef] [PubMed]
Pelli, D., Robson, J., & Wilkins, A. (1988). The design of a new letter chart for measuring contrast sensitivity. Clinical Vision Sciences, 2(3), 187–199.
Pelli, D. G., Burns, C. W., Farell, B., & Moore-Page, D. C. (2006). Feature detection and letter identification. Vision Research, 46(28), 4646–4674, https://doi.org/10.1016/j.visres.2006.04.023. [CrossRef] [PubMed]
Piotrowski, L. N., & Campbell, F. W. (1982). A demonstration of the visual importance and flexibility of spatial-frequency amplitude and phase. Perception, 11(3), 337–346, https://doi.org/10.1068/p110337. [CrossRef] [PubMed]
Ravikumar, A., Marsack, J. D., Bedell, H. E., Shi, Y., & Applegate, R. A. (2013). Change in visual acuity is well correlated with change in image-quality metrics for both normal and keratoconic wavefront errors. Journal of Vision, 13(13), 28, https://doi.org/10.1167/13.13.28. [CrossRef] [PubMed]
Ravikumar, A., Sarver, E. J., & Applegate, R. A. (2012). Change in visual acuity is highly correlated with change in six image quality metrics independent of wavefront error and/or pupil diameter. Journal of Vision, 12(10), 11, https://doi.org/10.1167/12.10.11. [CrossRef] [PubMed]
Ravikumar, S., Bradley, A., & Thibos, L. (2010). Phase changes induced by optical aberrations degrade letter and face acuity. Journal of Vision, 10(14), 18, https://doi.org/10.1167/10.14.18. [CrossRef] [PubMed]
Ricci, F., Cedrone, C., & Cerulli, L. (1998). Standardized measurement of visual acuity. Ophthalmic Epidemiology, 5(1), 41–53, https://doi.org/10.1076/opep.5.1.41.1499. [CrossRef] [PubMed]
Sachs, M. B., Nachmias, J., & Robson, J. G. (1971). Spatial-frequency channels in human vision. Journal of the Optical Society of America, 61(9), 1176–1186, https://doi.org/10.1364/JOSA.61.001176. [CrossRef] [PubMed]
Sarver, E. J., & Applegate, R. A. (2004). The importance of the phase transfer function to visual function and visual quality metrics. Journal of Refractive Surgery, 20(5), 504–507, https://doi.org/10.3928/1081-597X-20040901-19. [CrossRef]
Sawides, L., de Gracia, P., Dorronsoro, C., Webster, M., & Marcos, S. (2011). Adapting to blur produced by ocular high-order aberrations. Journal of Vision, 11(7), 21, https://doi.org/10.1167/11.7.21. [CrossRef] [PubMed]
Sawides, L., Marcos, S., Ravikumar, S., Thibos, L., Bradley, A., & Webster, M. (2010). Adaptation to astigmatic blur. Journal of Vision, 10(12), 22, https://doi.org/10.1167/10.12.22. [CrossRef] [PubMed]
Sloan, L. L. (1959). New test charts for the measurement of visual acuity at far and near distances. American Journal of Ophthalmology, 48(6), 807–813, https://doi.org/10.1016/0002-9394(59)90626-9. [CrossRef] [PubMed]
Tarrant, J., Roorda, A., & Wildsoet, C. F. (2010). Determining the accommodative response from wavefront aberrations. Journal of Vision, 10(5), 4, https://doi.org/10.1167/10.5.4. [CrossRef] [PubMed]
Thibos, L. N., Hong, X., Bradley, A., & Applegate, R. A. (2004). Accuracy and precision of objective refraction from wavefront aberrations. Journal of Vision, 4(4), 9, https://doi.org/10.1167/4.4.9. [CrossRef]
Villegas, E. A., Alcón, E., & Artal, P. (2008). Optical quality of the eye in subjects with normal and excellent visual acuity. Investigative Ophthalmology and Visual Science, 49(10), 4688–4696, https://doi.org/10.1167/iovs.08-2316. [CrossRef]
Watson, A. B., & Ahumada, A. J. (2005). A standard model for foveal detection of spatial contrast. Journal of Vision, 5(9), 6, https://doi.org/10.1167/5.9.6. [CrossRef]
Watson, A. B., & Ahumada, A. J. (2008). Predicting visual acuity from wavefront aberrations. Journal of Vision, 8(4), 17, https://doi.org/10.1167/8.4.17. [CrossRef]
Watson, A. B., & Ahumada, A. J. (2012). Modeling acuity for optotypes varying in complexity. Journal of Vision, 12(10), 19, https://doi.org/10.1167/12.10.19. [CrossRef] [PubMed]
Watson, A. B., & Ahumada, A. J. (2015). Letter identification and the neural image classifier. Journal of Vision, 15(2), 15, https://doi.org/10.1167/15.2.15. [CrossRef] [PubMed]
Yi, F., Iskander, D. R., & Collins, M. (2011). Depth of focus and visual acuity with primary and secondary spherical aberration. Vision Research, 51(14), 1648–1658, https://doi.org/10.1016/j.visres.2011.05.006. [CrossRef] [PubMed]
Zheleznyak, L., Jung, H., & Yoon, G. (2014). Impact of pupil transmission apodization on presbyopic through-focus visual performance with spherical aberration. Investigative Ophthalmology and Visual Science, 55(1), 70–77, https://doi.org/10.1167/iovs.13-13107. [CrossRef]
Zheleznyak, L., Sabesan, R., Oh, J.-S., MacRae, S., & Yoon, G. (2013). Modified monovision with spherical aberration to improve presbyopic through-focus visual performance. Investigative Ophthalmology and Visual Science, 54(5), 3157–3165, https://doi.org/10.1167/iovs.12-11050. [CrossRef]
Figure 1.
 
Logarithm of data separability of the ideal observer as a function of logarithm of letter gap \(a\), with aberration (\(S_B^*\), open green circle) and without aberration (\(S_0^*\), open black circle). The linear fits of \(S_B^*\) and \(S_0^*\) (dashed green and dashed black lines, respectively), on the logarithmic scale, allow us to find \(a_B^*\) such that \(S^*_B(a_B^*)=S^*_0(a_0)\). We arbitrarily set \(\log_{10}a_0 = 0\). The same approach is implemented for the real observer (\(S_0\): filled black circle and \(S_B\): filled green circle), in order to solve \(S_B(a_B) = S_0(a_0)\). For the model of the ideal observer, letters lost \(L^* = -13.8\) (= 50 × the amplitude of the dashed arrow). For the model of the real observer, letters lost \(L = -21.9\) (= 50 × the amplitude of the solid arrow).
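For illustration only, a minimal numerical sketch of the procedure described in this caption, assuming the separability data are already available on a logarithmic grid; the function and variable names are hypothetical, and the sign convention simply follows the arrows in the figure, at 50 letters per log10 unit of letter gap:

import numpy as np

def letters_lost(log_a, log_S0, log_SB, log_a0=0.0):
    # log_a: log10 letter gaps; log_S0 / log_SB: log10 separability without / with aberration.
    p0 = np.polyfit(log_a, log_S0, 1)    # linear fit without aberration
    pB = np.polyfit(log_a, log_SB, 1)    # linear fit with aberration
    target = np.polyval(p0, log_a0)      # S_0 evaluated at the reference gap a_0
    log_aB = (target - pB[1]) / pB[0]    # solve S_B(a_B) = S_0(a_0) on the fitted lines
    return 50.0 * (log_a0 - log_aB)      # negative when the aberrated gap a_B is larger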
Figure 2.
 
Through-focus calculations of letters lost, for two fixed amplitudes of spherical aberration. Black curves: \(z_4^0=0.1\) μm. Green curves: \(z_4^0=0.2\) μm. \(L^*\) (open circles) and \(L\) (filled circles) are the predictions of the complete model of the ideal and real observers, respectively. The corresponding approximations, as metrics of visual image quality, are \(25\log_{10}(M)\) (dashed lines) and \((50/3)\log_{10}|\mathrm{VSOTF}|\) (solid lines), respectively.
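As an illustrative sketch only, the \((50/3)\log_{10}|\mathrm{VSOTF}|\) curve can be evaluated from sampled transfer functions, assuming the usual visual Strehl definition based on the optical transfer function (the NTF-weighted sum of the eye's OTF, normalized by the same sum for the diffraction-limited OTF); the array names are hypothetical, and the computation of \(M\) for the ideal-observer approximation is not reproduced here:

import numpy as np

def vsotf_letters(otf, otf_dl, ntf):
    # otf, otf_dl, ntf: 2-D arrays sampled on the same spatial-frequency grid;
    # the real part of the OTF is used, as is common for this metric.
    vsotf = np.sum(ntf * np.real(otf)) / np.sum(ntf * np.real(otf_dl))
    return (50.0 / 3.0) * np.log10(np.abs(vsotf))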
Figure 3.
 
Comparison of the two model-based metrics with experimental measurements of letters lost. Error bars show the average ± standard deviation (across eight subjects) of the measured letters lost as a function of varying defocus \(z_2^0\) and fixed spherical aberration: \(z_4^0=0\) (A), \(z_4^0=0.1\) μm (B), \(z_4^0=0.2\) μm (C), \(z_4^0=0.3\) μm (D), and as a function of varying coma \(z_3^{-1}\) alone (E). The two model-based metrics are shown for each condition: \(25\log_{10}(M)\) (open circle) and \((50/3)\log_{10}|\mathrm{VSOTF}|\) (filled circle). (F) The corresponding scatter graphs of all (average measurement, metric) pairs. Dashed and solid lines show the corresponding linear fits for \(25\log_{10}(M)\) and \((50/3)\log_{10}|\mathrm{VSOTF}|\), respectively. Fit parameters are given in Table 1.
Table 1.
 
Comparison of models to measurements. \(\epsilon\) is the root mean square value of the (average measurement − model) difference, for each model (\(L^*\), \(L\), \(25\log_{10}(M)\), \((50/3)\log_{10}|\mathrm{VSOTF}|\)). We give the coefficients of the linear fit (\(\text{average measurement}=\alpha \times \text{model} +\beta\)). \(r^2\) is the coefficient of determination of the fit, which is shown in Figure 3F for the \(25\log_{10}(M)\) metric and the \((50/3)\log_{10}|\mathrm{VSOTF}|\) metric.
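A minimal sketch of the comparison summarized in this table, assuming measured holds the average measured letters lost and model the corresponding metric values (hypothetical names; NumPy only):

import numpy as np

def compare(measured, model):
    eps = np.sqrt(np.mean((measured - model) ** 2))   # RMS of the (measurement - model) difference
    alpha, beta = np.polyfit(model, measured, 1)      # measured ≈ alpha * model + beta
    fit = alpha * model + beta
    r2 = 1.0 - np.sum((measured - fit) ** 2) / np.sum((measured - np.mean(measured)) ** 2)
    return eps, alpha, beta, r2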