We determined the accuracy and precision of 33 objective methods for predicting the results of conventional, sphero-cylindrical refraction from wavefront aberrations in a large population of 200 eyes. Accuracy for predicting defocus (as specified by the population mean error of prediction) varied from −0.50 D to +0.25 D across methods. Precision of these estimates (as specified by 95% limits of agreement) ranged from 0.5 to 1.0 D. All methods except one accurately predicted astigmatism to within ±1/8 D. Precision of astigmatism predictions was typically better than precision for predicting defocus, and many methods were better than 0.5 D. Paraxial curvature matching of the wavefront aberration map was the most accurate method for determining the spherical equivalent error, whereas least-squares fitting of the wavefront was one of the least accurate methods. We argue that this result was obtained because curvature matching is a biased method that successfully predicts the biased endpoint stipulated by conventional refractions. Five methods emerged as reasonably accurate and among the most precise. Three of these were based on pupil plane metrics and two were based on image plane metrics. We argue that the accuracy of all methods might be improved by correcting for the systematic bias reported in this study. However, caution is advised because some tasks, including conventional refraction of defocus, require a biased metric, whereas other tasks, such as refraction of astigmatism, are unbiased. We conclude that objective methods of refraction based on wavefront aberration maps can accurately predict the results of subjective refraction and may be more precise. If objective refractions are more precise than subjective refractions, then wavefront methods may become the new gold standard for specifying conventional and/or optimal corrections of refractive errors.

A conventional refraction begins by finding the spherical lens power *M*, the so-called spherical equivalent. Next, the eye's astigmatism is corrected with a cylindrical lens, followed by a fine-tuning of the spherical lens power if necessary. This procedure is the basis of most of the methods described below.

We define the *equivalent quadratic* of a wavefront aberration map as the quadratic (i.e., sphero-cylindrical) surface that best represents the map. This idea of approximating an arbitrary surface with an equivalent quadratic is a simple extension of the common ophthalmic technique of approximating a sphero-cylindrical surface with an equivalent sphere. Two methods for determining the equivalent quadratic from an aberration map are presented next.

In these expressions, *c*_{n}^{m} is the *n*th-order Zernike coefficient of meridional frequency *m*, and *r* is the pupil radius. The power vector notation is a cross-cylinder convention that is easily transposed into the conventional minus-cylinder or plus-cylinder formats used by clinicians (see Eqns. 22 and 23 of Thibos, Wheeler, & Horner, 1997).
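For concreteness, the standard least-squares conversion from the second-order Zernike coefficients (in microns, pupil radius in mm) to a power vector in diopters, and its transposition to minus-cylinder form, can be sketched as follows (the function names are ours):

```python
import numpy as np

def zernike_to_power_vector(c20, c22, c2m2, r):
    """Least-squares power vector from 2nd-order Zernike coefficients.

    c20, c22, c2m2 : coefficients of Z(2,0), Z(2,2), Z(2,-2) in microns
    r : pupil radius in mm; returned components M, J0, J45 are in diopters.
    """
    M   = -4.0 * np.sqrt(3.0) * c20  / r**2   # spherical equivalent
    J0  = -2.0 * np.sqrt(6.0) * c22  / r**2   # with/against-the-rule astigmatism
    J45 = -2.0 * np.sqrt(6.0) * c2m2 / r**2   # oblique astigmatism
    return M, J0, J45

def power_vector_to_minus_cyl(M, J0, J45):
    """Transpose a power vector into minus-cylinder (sphere, cylinder, axis)."""
    C = -2.0 * np.hypot(J0, J45)              # cylinder is negative by convention
    S = M - C / 2.0                           # sphere
    axis = np.degrees(0.5 * np.arctan2(J45, J0)) % 180.0
    return S, C, axis
```

Because the conversions are closed-form, both directions of the transposition are exact; only the choice of plus- versus minus-cylinder convention differs between clinics.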

The quadratic surface that matches the central curvature of the wavefront is called the *osculating quadric*. Fortunately, a closed-form solution exists for the problem of deriving the power vector parameters of the osculating quadratic from the Zernike coefficients of the wavefront (Thibos et al., 2002). This solution is obtained by computing the curvature at the origin of the Zernike expansion of the Seidel formulae for defocus and astigmatism. This process effectively collects all *r*^{2} terms from the various Zernike modes. We used the OSA definitions of the Zernike polynomials, each of which has unit variance over the unit circle (Thibos, Applegate, Schwiegerling, & Webb, 2000). The results given in Equation 2 are truncated at the sixth Zernike order but could be extended to higher orders if warranted.
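Collecting the ρ² terms of the OSA radial polynomials through the sixth order gives the paraxial (curvature-matching) power vector; a sketch, assuming OSA-normalized coefficients in microns and pupil radius in mm (the helper name and dict interface are ours):

```python
import numpy as np

def paraxial_power_vector(c, r):
    """Paraxial (curvature-matching) power vector from Zernike coefficients.

    c : dict mapping (n, m) -> coefficient in microns (OSA normalization);
        missing modes are treated as zero.  r : pupil radius in mm.
    Collects every rho^2 term of the radial polynomials through 6th order.
    """
    g = lambda n, m: c.get((n, m), 0.0)
    r2 = r**2
    M   = (-4*np.sqrt(3)*g(2, 0) + 12*np.sqrt(5)*g(4, 0) - 24*np.sqrt(7)*g(6, 0)) / r2
    J0  = (-2*np.sqrt(6)*g(2, 2) + 6*np.sqrt(10)*g(4, 2) - 12*np.sqrt(14)*g(6, 2)) / r2
    J45 = (-2*np.sqrt(6)*g(2, -2) + 6*np.sqrt(10)*g(4, -2) - 12*np.sqrt(14)*g(6, -2)) / r2
    return M, J0, J45
```

With only second-order terms present this reduces to the least-squares result; the higher-order terms are what distinguish curvature matching from least-squares fitting.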

A through-focus computation determines the power *M* of the spherical correcting lens needed to maximize the optical quality of the corrected eye. With this virtual spherical lens in place, the process can be repeated for through-astigmatism calculations to determine the optimum values of *J*_{0} and *J*_{45} needed to maximize image quality. If necessary, a second iteration could be used to fine-tune the results by repeating the above process with these virtual lenses in place. However, the analysis reported below did not include a second iteration.
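The through-focus search over spherical lens power can be sketched as follows (the helper and its sign convention are our own; any wavefront or image quality metric could serve as the `metric` argument):

```python
import numpy as np

def through_focus_refraction(metric, c20, r, span=2.0, step=0.125):
    """Scan trial spherical lens powers and return the M that maximizes `metric`.

    metric : callable taking the residual defocus coefficient (microns) and
             returning a scalar quality score (larger = better image quality).
    c20    : the eye's Zernike defocus coefficient (microns); r : pupil radius (mm).
    Sign convention (ours): a trial lens of power M (diopters) contributes a
    defocus coefficient of -M * r**2 / (4*sqrt(3)) to the residual wavefront.
    """
    best_M, best_score = 0.0, -np.inf
    for M in np.arange(-span, span + step, step):        # trial lens powers (D)
        residual_c20 = c20 - M * r**2 / (4.0*np.sqrt(3.0))
        score = metric(residual_c20)
        if score > best_score:
            best_M, best_score = float(M), score
    return best_M
```

The 1/8 D step mirrors the granularity of a clinical trial-lens refraction; a finer step or a continuous optimizer could be substituted.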

For each metric of optical quality, we determined the values of *M*, *J*_{0}, and *J*_{45} required to maximize the metric. These objective refractions were then compared with conventional subjective refractions. A listing of acronyms for the various refraction methods is given in Table 1.

N | Acronym | Brief Description
---|---|---
1 | RMSw | Standard deviation of wavefront
2 | PV | Peak-to-valley wavefront error
3 | RMSs | Standard deviation of wavefront slope
4 | PFWc | Pupil fraction for wavefront (critical pupil)
5 | PFWt | Pupil fraction for wavefront (tessellation)
6 | PFSt | Pupil fraction for slope (tessellation)
7 | PFSc | Pupil fraction for slope (critical pupil)
8 | Bave | Average blur strength
9 | PFCt | Pupil fraction for curvature (tessellation)
10 | PFCc | Pupil fraction for curvature (critical pupil)
11 | D50 | 50% width (arcmin)
12 | EW | Equivalent width (arcmin)
13 | SM | Sqrt(2nd moment) (arcmin)
14 | HWHH | Half width at half height (arcmin)
15 | CW | Correlation width (arcmin)
16 | SRX | Strehl ratio in space domain
17 | LIB | Light in the bucket (norm)
18 | STD | Standard deviation of intensity (norm)
19 | ENT | Entropy (bits)
20 | NS | Neural sharpness (norm)
21 | VSX | Visual Strehl in space domain
22 | SFcMTF | Cutoff spatial frequency for rMTF (c/deg)
23 | AreaMTF | Area of visibility for rMTF (norm)
24 | SFcOTF | Cutoff spatial frequency for rOTF (c/deg)
25 | AreaOTF | Area of visibility for rOTF (norm)
26 | SROTF | Strehl ratio for OTF
27 | VOTF | OTF volume / MTF volume
28 | VSOTF | Visual Strehl ratio for OTF
29 | VNOTF | Neurally weighted OTF volume / neurally weighted MTF volume
30 | SRMTF | Strehl ratio for MTF
31 | VSMTF | Visual Strehl ratio for MTF
32 | LSq | Least-squares fit
33 | Curve | Curvature fit

A perfect prediction of the subjective result corresponds to prediction errors of *M* = *J*_{0} = *J*_{45} = 0. The level of success achieved by the 33 methods of objective refraction described above was judged on the basis of precision and accuracy at matching these predictions (Figure 3). Accuracy for the spherical component of refraction was computed as the population mean of *M* as determined from objective refractions. Accuracy for the astigmatic component of refraction was computed as the population mean of the (*J*_{0}, *J*_{45}) vectors (Bullimore, Fusaro, & Adams, 1998). Precision is a measure of the variability in results and is defined for *M* as twice the standard deviation of the population values, which corresponds to the 95% limits of agreement (LOA) (Bland & Altman, 1986). The confidence region for astigmatism is an ellipse computed for the bivariate distribution of *J*_{0} and *J*_{45}. This suggests a definition of precision as the geometric mean of the major and minor axes of the 95% confidence ellipse.
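Under these definitions, the accuracy and precision statistics can be sketched as follows (our helper; the chi-square scaling of the 95% ellipse is a standard choice we assume here):

```python
import numpy as np

def precision_stats(dM, dJ0, dJ45):
    """Accuracy and precision summaries for refraction prediction errors.

    dM, dJ0, dJ45 : arrays of prediction errors (objective - subjective), in D.
    Returns the mean error of M, its 95% limits of agreement (2*SD), and the
    geometric mean of the axes of the 95% confidence ellipse for (J0, J45).
    """
    accuracy_M = float(np.mean(dM))
    loa_M = 2.0 * float(np.std(dM, ddof=1))           # 95% limits of agreement
    cov = np.cov(np.vstack([dJ0, dJ45]))              # bivariate spread of astigmatism
    evals = np.linalg.eigvalsh(cov)                   # variances along ellipse axes
    chi2_95 = 5.991                                   # chi-square(2 df) 95% quantile
    axes = 2.0 * np.sqrt(chi2_95 * evals)             # full axes of the 95% ellipse
    precision_J = float(np.sqrt(axes[0] * axes[1]))   # geometric mean of the axes
    return accuracy_M, loa_M, precision_J
```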

Many methods accurately predicted *M* to within 1/8 D, and 24 methods were accurate to within 1/4 D. The method of paraxial curvature matching was the most accurate, closely followed by the through-focus methods for maximizing the wavefront quality metrics PFWc and PFCt. Least-squares fitting was one of the least accurate methods (mean error = −0.39 D).

Precision for predicting *M* ranged from 0.5 to 1.0 D. A value of 0.5 D means that the error in predicting *M* for 95 percent of the eyes in our study fell inside the confidence range given by the mean ± 0.5 D. The most precise method was PFSc (±0.49 D), which was statistically significantly better than the others (*F*-test for equality of variance, 5% significance level). Precision of the next 14 methods in rank ranged from ±0.58 D to ±0.65 D. These values were statistically indistinguishable from each other. This list of the 15 most precise methods included several examples from each of the three categories of wavefront quality, point-image quality, and grating-image quality. Rank ordering of all methods for predicting defocus is given in Table 2.

Rank | Accuracy: Metric | Mean (D) | Precision: Metric | 2×SD (D)
---|---|---|---|---
1 | PFCc | 0.2406 | PFSc | 0.4927
2 | Curve | −0.006 | AreaOTF | 0.5803
3 | PFWc | −0.0063 | VSOTF | 0.5806
4 | PFCt | −0.0425 | PFWc | 0.5839
5 | SFcMTF | −0.0425 | LIB | 0.5951
6 | LIB | −0.0681 | NS | 0.5961
7 | VSX | −0.0731 | VSMTF | 0.5987
8 | SFcOTF | −0.0737 | EW | 0.6081
9 | CW | −0.0912 | SRX | 0.6081
10 | EW | −0.1006 | AreaMTF | 0.6112
11 | SRX | −0.1006 | PFCt | 0.6213
12 | VSMTF | −0.1131 | STD | 0.63
13 | NS | −0.1144 | SFcMTF | 0.6343
14 | VOTF | −0.125 | VSX | 0.6391
15 | PFSc | −0.1281 | D50 | 0.6498
16 | VNOTF | −0.1575 | CW | 0.6558
17 | AreaMTF | −0.165 | PFWt | 0.6575
18 | STD | −0.1656 | PFSt | 0.6577
19 | VSOTF | −0.1794 | RMSw | 0.6702
20 | SROTF | −0.1875 | SFcOTF | 0.6786
21 | HWHH | −0.200 | SRMTF | 0.6888
22 | PFSt | −0.2162 | SROTF | 0.69
23 | AreaOTF | −0.2269 | ENT | 0.6987
24 | SRMTF | −0.2544 | LSq | 0.7062
25 | D50 | −0.2825 | HWHH | 0.7115
26 | PFWt | −0.3231 | RMSs | 0.7159
27 | ENT | −0.3638 | Curve | 0.7202
28 | RMSw | −0.3831 | SM | 0.7315
29 | LSq | −0.3906 | VNOTF | 0.7486
30 | RMSs | −0.425 | Bave | 0.7653
31 | SM | −0.4319 | PV | 0.7725
32 | PV | −0.4494 | VOTF | 0.8403
33 | Bave | −0.4694 | PFCc | 0.9527

Rank | Accuracy: Metric | Mean (D) | Precision: Metric | 2×SD (D)
---|---|---|---|---
1 | HWHH | 0.0155 | LSq | 0.3235
2 | LIB | 0.0164 | PFSc | 0.3315
3 | PFCt | 0.0192 | Bave | 0.3325
4 | AreaMTF | 0.0258 | RMSs | 0.3408
5 | ENT | 0.0273 | RMSw | 0.3429
6 | NS | 0.0281 | Curve | 0.3568
7 | VSX | 0.03 | PFWc | 0.3639
8 | PFSt | 0.0305 | PV | 0.4278
9 | AreaOTF | 0.0313 | VSMTF | 0.4387
10 | EW | 0.0343 | AreaMTF | 0.4423
11 | SRX | 0.0343 | NS | 0.4544
12 | SRMTF | 0.038 | PFCt | 0.4715
13 | VSMTF | 0.0407 | STD | 0.4752
14 | STD | 0.0422 | PFWt | 0.4923
15 | CW | 0.0576 | SM | 0.4967
16 | RMSs | 0.0589 | SRMTF | 0.5069
17 | VSOTF | 0.0594 | EW | 0.5181
18 | PFSc | 0.0608 | SRX | 0.5181
19 | D50 | 0.0665 | CW | 0.5287
20 | SM | 0.0668 | LIB | 0.535
21 | Bave | 0.0685 | AreaOTF | 0.5444
22 | SROTF | 0.0724 | SFcMTF | 0.5659
23 | PFWc | 0.0745 | VSX | 0.5813
24 | VOTF | 0.0787 | VSOTF | 0.6796
25 | LSq | 0.0899 | HWHH | 0.6796
26 | RMSw | 0.0909 | SROTF | 0.7485
27 | Curve | 0.0913 | PFSt | 0.7555
28 | PV | 0.098 | SFcOTF | 0.7821
29 | PFWt | 0.1039 | VNOTF | 0.816
30 | VNOTF | 0.1059 | D50 | 0.8416
31 | SFcOTF | 0.113 | ENT | 0.8751
32 | SFcMTF | 0.1218 | VOTF | 0.9461
33 | PFCc | 0.8045 | PFCc | 1.0005

We computed the correlation between each pair of methods' predictions of *M*. The resulting correlation matrix is visualized in Figure 8. For example, the left-most column of tiles in the matrix represents the Pearson correlation coefficient *r* between the first objective refraction method in the list (RMSw) and all other methods in the order specified in Table 1. Notice that the values of *M* predicted by optimizing RMSw are highly correlated with the values returned by methods 3 (RMSs), 8 (Bave), 19 (ENT), and 32 (least-squares fit). As predicted, all of these metrics are grouped at the bottom of the ranking in Figure 7. In contrast, refractions using RMSw are poorly correlated with the values returned by methods 4 (PFWc), 9 (PFCt), 21 (VSX), 24 (SFcOTF), and 33 (curvature fit). All of these metrics are grouped at the top of the ranking in Figure 7, which further supports this connection between accuracy and correlation. A similar analysis of the correlation matrix for the astigmatism parameters is less informative because there was very little difference between the various methods for predicting *J*_{0} and *J*_{45}.

*M* = +0.25 D. However, this argument does not apply to the other two examples, which are poorly correlated with most other metrics even though those metrics produced similar refractions on average (e.g., 20 (NS), 7 (PFSc), and 23 (AreaMTF)). This result suggests that maximizing the metrics VOTF and VNOTF optimizes a unique aspect of optical and visual quality that is missed by other metrics. In fact, these two metrics were specifically designed to capture infidelity of spatial phase in the retinal image.

*r*^{2}) term. It also corresponds to a paraxial analysis, since the *r*^{2} coefficient is zero when the paraxial rays are well focused. Although this method was one of the least accurate methods for predicting astigmatism, it was nevertheless accurate to within 1/8 D. The curvature method was one of the most precise methods for predicting astigmatism but was significantly less precise than some other methods for predicting defocus. For this reason it was eliminated from the list of the 5 most precise and accurate methods.

*curvature matching* (*and several other metrics with similar accuracy*) *is a biased method that successfully predicts a biased endpoint*. By the same argument, the biased curvature method is not expected to predict astigmatism accurately because conventional refractions are unbiased for astigmatism. Although this line of reasoning explains why the paraxial curvature method will locate a point beyond the hyperfocal point, we lack a convincing argument for why the located point should lie specifically at infinity. Perhaps future experiments that include measurement of the DOF as well as the hyperfocal distance will clarify this issue and at the same time help identify objective methods for determining the hyperfocal distance.

This interpretation is consistent with the finding of Cheng *et al.* (Cheng, Bradley, & Thibos, 2004) that the optimum focus lies somewhere between the more distant paraxial focus and the nearer RMS focus. Taken together, the least-squares and curvature-fitting methods would appear to locate the two ends of the DOF interval. While perhaps a mere coincidence, if this intriguing result could be substantiated theoretically, then it might become a useful method for computing the DOF from the wavefront aberration map of individual eyes.

Koomen *et al.* (Koomen, Scolnik, & Tousey, 1951) and Charman *et al.* (Charman, Jennings, & Whitefoot, 1978) found that pupil size affects subjective refraction differently under photopic and scotopic illumination. They suggested that this might be due to different neural filters operating at photopic and scotopic light levels. A change in the neural bandwidth of these filters would alter the relative weighting given to low and high spatial-frequency components of the retinal image, thereby altering the optimum refraction. This idea suggests future ways to test the relative importance of the neural component of the metrics of visual quality described here.

The spread of precision values for *M* across all metrics was only 1/8 D (0.29–0.42 D), indicating that the precision of all metrics was much the same. This suggests that the precision of objective refraction might be dominated by a single, underlying source of variability. That source might in fact be variability in the subjective refraction. Bullimore *et al.* found that the 95% limit of agreement for repeatability of subjective refraction is ±0.75 D, which corresponds to a standard deviation of 0.375 D (Bullimore et al., 1998). If the same level of variability were present in our subjective refractions, then uncertainty in determining the best subjective correction would have been the dominant source of error. It is possible, therefore, that all of our objective predictions are extremely precise, but that this precision is masked by the imprecision of the gold standard of subjective refraction. If so, then an objective wavefront analysis that accurately determines the hyperfocal point and the DOF with reduced variability could become the new gold standard of refraction.

Cheng *et al.* (Cheng et al., 2004) and Marsack *et al.* (Applegate, Marsack, & Thibos, 2004) both used the same implementation of these metrics described below (see 1) to predict the change in visual acuity produced when selected higher-order aberrations are introduced into an eye. The experimental design of the Cheng study was somewhat simpler in that monochromatic aberrations were used to predict monochromatic visual performance, whereas Marsack used monochromatic aberrations to predict polychromatic performance. Nevertheless, both studies concluded that changes in visual acuity are accurately predicted by the pupil plane metric PFSt and by the image plane metric VSOTF. Furthermore, both studies concluded that three of the least accurate predictors were RMSw, HWHH, and VOTF. In addition, the Cheng study demonstrated that, as expected, those metrics which accurately predicted changes in visual acuity also predicted the lens power which maximized acuity in a through-focus experiment. This was an important result because it established a tight link between variations in monochromatic acuity and monochromatic refraction.

*M* in a conventional refraction, which suggests that it would have accurately predicted *M* in an optimum refraction. (This point is illustrated graphically in Figure 5 of the Cheng *et al.* paper.) Furthermore, the present results show that VSOTF is one of the most precise methods for estimating *M*, which suggests it is very good at monitoring the level of defocus in the retinal image for eyes with a wide variety of aberration structures. It follows that this metric should also be very good at tracking the loss of visual performance when images are blurred with controlled amounts of higher-order aberrations, as shown by the Cheng and Marsack studies. Lastly, the Cheng and Marsack studies rejected RMSw, HWHH, and VOTF as being among the least predictive metrics. All three of these metrics were among the least precise metrics for predicting *M* in the present study. It is reasonable to suppose that the high levels of variability associated with these metrics would have contributed to the poor performance recorded in those companion studies.

*Wavefront error* describes optical path differences across the pupil that give rise to phase errors for light entering the eye through different parts of the pupil. These phase errors produce interference effects that degrade the quality of the retinal image. An example of a wave aberration map is shown in Figure A-1. Two common metrics of wavefront flatness follow.

*RMS*_{w} = *root-mean-squared wavefront error computed over the whole pupil (microns)*, i.e. *RMS*_{w} = [ (1/*A*) ∬ (w(x,y) − ⟨w⟩)^{2} dx dy ]^{1/2}, where w(x,y) is the wavefront aberration function defined over pupil coordinates x,y, ⟨w⟩ is its pupil-averaged value, *A* = pupil area, and the integration is performed over the domain of the entire pupil. Computationally, RMS_{w} is just the standard deviation of the values of wavefront error specified at the various pupil locations.
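Computationally, both RMSw and the PV metric from Table 1 reduce to a few lines (our helpers, assuming the wavefront is sampled on a grid with a boolean pupil mask):

```python
import numpy as np

def rms_wavefront_error(w, inside):
    """RMSw: the standard deviation of wavefront error over the pupil (microns).

    w : 2-D array of wavefront error samples; inside : boolean pupil mask.
    """
    return float(np.std(w[inside]))

def peak_valley(w, inside):
    """PV: peak-to-valley wavefront error over the pupil (microns)."""
    v = w[inside]
    return float(v.max() - v.min())
```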

*Wavefront slope* is a vector-valued function of pupil position that requires two maps for display, as illustrated in Figure A-2. One map shows the slope in the horizontal (*x*) direction, and the other map shows the slope in the vertical (*y*) direction. (Alternatively, a polar-coordinate scheme would show the radial and tangential slopes.) Wavefront slopes may be interpreted as transverse ray aberrations that blur the image. These ray aberrations can be conveniently displayed as a vector field. The base of each arrow in this plot marks the pupil location, and the horizontal and vertical components of the arrow are proportional to the partial derivatives of the wavefront map. If the field of arrows is collapsed so that all the tails superimpose, the tips of the arrows form a spot diagram (lower right diagram) that approximates the system point-spread function (PSF).

*Wavefront curvature*describes focusing errors that blur the image. To form a good image at some finite distance, wavefront curvature must be the same everywhere across the pupil. A perfectly flat wavefront will have zero curvature everywhere, which corresponds to the formation of a perfect image at infinity. Like wavefront slope, wavefront curvature is a vector-valued function of position that requires more than one map for display (Figure A-3). Curvature varies not only with pupil position but also with orientation at any given point on the wavefront.

Wavefront curvature at each pupil location may be summarized by the mean curvature *M*(*x,y*) and the Gaussian curvature G(*x,y*), from which the principal curvature maps *k*_{1}(*x,y*) and *k*_{2}(*x,y*) are computed using *k*_{1} = *M* + (*M*^{2} − G)^{1/2} and *k*_{2} = *M* − (*M*^{2} − G)^{1/2}. The Gaussian and mean curvature maps may be obtained from the spatial derivatives of the wavefront aberration map using textbook formulas (Carmo, 1976).
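A sketch of these curvature computations under the small-slope approximations usual for ocular wavefronts (our helper; the full textbook formulas add slope-dependent correction factors we omit here):

```python
import numpy as np

def curvature_maps(w, dx):
    """Principal-curvature maps of a wavefront from its spatial derivatives.

    Uses the small-slope approximations  M ~ (w_xx + w_yy)/2  and
    G ~ w_xx*w_yy - w_xy*w_yx, then  k1, k2 = M +/- sqrt(M^2 - G).
    w : 2-D wavefront samples; dx : sample spacing.
    """
    wy, wx = np.gradient(w, dx)          # first partial derivatives
    wyy, wyx = np.gradient(wy, dx)
    wxy, wxx = np.gradient(wx, dx)
    M = 0.5 * (wxx + wyy)                # mean curvature (small-slope limit)
    G = wxx * wyy - wxy * wyx            # Gaussian curvature (small-slope limit)
    root = np.sqrt(np.maximum(M**2 - G, 0.0))   # guard tiny negative round-off
    return M + root, M - root
```

For a spherical (paraboloidal) wavefront the two principal curvatures coincide; their difference at each point measures local astigmatism.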

a power vector whose components are the spherical equivalent (*M*), the normal component of astigmatism (*J*_{0}), and the oblique component of astigmatism (*J*_{45}). Experiments have shown that the length of the power vector, B = (*M*^{2} + *J*_{0}^{2} + *J*_{45}^{2})^{1/2}, which is the definition of blur strength, is a good scalar measure of the visual impact of sphero-cylindrical blur (Raasch, 1995). Thus, a map of the length of the power-vector representation of a wavefront at each point in the pupil may be called a blur-strength map (Figure A-3).

Another way to summarize wavefront quality is with the *pupil fraction*, defined as the fraction of the pupil area for which the optical quality of the eye is reasonably good (but not necessarily diffraction-limited). A large pupil fraction is desirable because it means that most of the light entering the eye will contribute to a good-quality retinal image.

*critical diameter*, which can be used to compute the pupil fraction (critical-pupil method) as follows:

*PFW*_{c} = *PF*_{c} *when the critical pupil is defined as the concentric area for which RMS*_{w} < *criterion (e.g. λ/4)*

*PFS*_{c} = *PF*_{c} *when the critical pupil is defined as the concentric area for which RMS*_{s} < *criterion (e.g. 1 arcmin)*

*PFC*_{c} = *PF*_{c} *when the critical pupil is defined as the concentric area for which B*_{ave} < *criterion (e.g. 0.25 D)*

*PFW*_{t} = *PF*_{t} *when a good sub-aperture satisfies the criterion PV* < *criterion (e.g. λ/4)*

*PFS*_{t} = *PF*_{t} *when a good sub-aperture satisfies the criterion that horizontal slope and vertical slope are both* < *criterion (e.g. 1 arcmin)*

*PFC*_{t} = *PF*_{t} *when a good sub-aperture satisfies the criterion B* < *criterion (e.g. 0.25 D)*
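A minimal sketch of the tessellation method for PFWt (our helper; the tile size, the λ/4 criterion at 550 nm, and the exclusion of tiles straddling the pupil margin are illustrative choices):

```python
import numpy as np

def pupil_fraction_tessellation(w, inside, block=4, criterion=0.25 * 0.55):
    """PFWt: fraction of pupil sub-apertures whose wavefront is 'good'.

    Tessellates the sampled pupil into block x block tiles and calls a tile
    good when its peak-to-valley wavefront error is below `criterion`
    (here lambda/4 = 0.25 * 0.55 microns for 550 nm light).
    w : 2-D wavefront samples (microns); inside : boolean pupil mask.
    """
    ny, nx = w.shape
    good = total = 0
    for i in range(0, ny - block + 1, block):
        for j in range(0, nx - block + 1, block):
            if not inside[i:i+block, j:j+block].all():
                continue                      # skip tiles straddling the margin
            tile = w[i:i+block, j:j+block]
            total += 1
            if tile.max() - tile.min() < criterion:
                good += 1
    return good / total if total else 0.0
```

The slope (PFSt) and curvature (PFCt) variants follow the same tessellation, with the per-tile test replaced by the slope or blur-strength criterion.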

These metrics are computed from the pupil function p(*x,y*), defined as p(*x,y*) = *A*(*x,y*)·exp(i·*k*·w(*x,y*)), where *k* is the wave number (2π/wavelength) and *A*(*x,y*) is an optional apodization function of pupil coordinates x,y. When computing the physical retinal image at the entrance apertures of the cone photoreceptors, the apodization function is usually omitted. However, when computing the visual effectiveness of the retinal image, the waveguide nature of cones must be taken into account. These waveguide properties cause the cones to be more sensitive to light entering the middle of the pupil than to light entering at the margin of the pupil (Burns, Wu, Delori, & Elsner, 1995; Roorda & Williams, 2002; Stiles & Crawford, 1933). It is common practice to model this phenomenon as an apodizing filter with transmission *A*(*x,y*) in the pupil plane (Atchison, Joblin, & Smith, 1998; Bradley & Thibos, 1995; Zhang, Ye, Bradley, & Thibos, 1999).
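The pupil-function and PSF computation can be sketched by Fourier optics as follows (our helper; the apodization constant and the grid-based normalization of radius are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def psf_from_wavefront(w, inside, wavelength=0.55, pupil_radius=3.0, rho=0.0):
    """PSF from a wavefront map: p = A * exp(i*k*w), PSF = |FFT(p)|^2.

    w : wavefront error (microns) on a square grid; wavelength in microns.
    rho (mm^-2) sets a Stiles-Crawford-style apodization A = 10**(-rho*r^2);
    rho = 0 disables it (rho ~ 0.06 mm^-2 is a commonly assumed value).
    The radial coordinate is normalized to the grid half-width for brevity.
    """
    ny, nx = w.shape
    y, x = np.indices((ny, nx))
    rn2 = ((x - nx/2.0)**2 + (y - ny/2.0)**2) / (min(nx, ny)/2.0)**2
    A = 10.0**(-rho * pupil_radius**2 * rn2)         # apodizing transmission
    k = 2.0 * np.pi / wavelength                     # wave number
    p = np.where(inside, A * np.exp(1j * k * w), 0.0)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(p)))**2
    return psf / psf.sum()                           # unit total intensity
```

Any of the point-image metrics below can then be evaluated on the returned array.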

*D50* = *diameter of a circular area centered on the PSF peak which captures 50% of the light energy (arcmin)*. D50 equals 2*r*, where *r* is defined implicitly by requiring the integral of PSF_{N} over the circle of radius *r* to equal 0.5, and PSF_{N} is the normalized (i.e. total intensity = 1) point-spread function with its peak value located at *r* = 0. This metric ignores the light outside the central 50% region, and thus is insensitive to the shape of the PSF tails.

*EW* = *equivalent width of centered PSF (arcmin)*, where *x*_{0}, *y*_{0} are the coordinates of the peak of the PSF. In this and the following equations, *x,y* are spatial coordinates of the retinal image, typically specified as visual angles subtended at the eye's nodal point. Note that although EW describes spatial compactness, it is computed from PSF contrast: as the height of the PSF falls, its width must increase to maintain a constant volume under the PSF.

*SM* = *square root of the second moment of the light distribution (arcmin)*

*HWHH* = *half width at half height (arcmin)*

*CW* = *correlation width of the light distribution (arcmin)*

*SRX* = *Strehl ratio computed in the spatial domain*

*LIB* = *light-in-the-bucket*, where PSF_{N} is the normalized (i.e. total intensity = 1) point-spread function. The domain of integration is the central core of a diffraction-limited PSF for the same pupil diameter. An alternative domain of interest is the entrance aperture of the cone photoreceptors. Similar metrics have been used in the study of depth-of-focus (Marcos, Moreno, & Navarro, 1999).
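LIB reduces to integrating the normalized PSF over the chosen domain (our helper; the boolean `bucket` mask stands in for the diffraction-limited core or the cone aperture):

```python
import numpy as np

def light_in_the_bucket(psf, bucket):
    """LIB: fraction of total PSF energy falling inside a chosen domain.

    psf    : 2-D point-spread function (normalized internally)
    bucket : boolean mask for the domain of integration, e.g. the central
             core of a diffraction-limited PSF for the same pupil diameter.
    """
    psf_n = psf / psf.sum()          # normalize to unit total intensity
    return float(psf_n[bucket].sum())
```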

*STD* = *standard deviation of intensity values in the PSF, normalized to the diffraction-limited value*

*ENT* = *entropy of the PSF (bits)*

*NS* = *neural sharpness*, where *g*(*x,y*) is a bivariate-Gaussian neural weighting function. A profile of this weighting function (Figure A-6) shows that it effectively ignores light outside of the central 4 arcmin of the PSF.

*VSX* = *visual Strehl ratio computed in the spatial domain*, where *N*(*x,y*) is a bivariate neural weighting function equal to the inverse Fourier transform of the neural contrast sensitivity function for interference fringes (Campbell & Green, 1965). With this metric, light outside of the central 3 arcmin of the PSF doubly detracts from image quality because it falls outside the central core and within an inhibitory surround. This is especially so for light just outside of the central 3 arcmin, in that slightly aberrated rays falling 2 arcmin from the PSF center are more detrimental to image quality than highly aberrated rays falling farther from the center.

*SFcMTF* = *spatial frequency cutoff of the radially-averaged modulation transfer function (rMTF)*, where the rMTF is obtained by averaging the MTF over orientation after expressing it in polar coordinates *f* (frequency) and φ (orientation). A graphical depiction of SFcMTF is shown in Figure A-8.
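The radial averaging and threshold-crossing steps can be sketched as follows (our helpers; the binning scheme and the sampled threshold are illustrative choices):

```python
import numpy as np

def radial_average_mtf(otf, nbins=8):
    """rMTF: average of |OTF| over orientation, binned by radial frequency.

    otf : 2-D optical transfer function centered on the zero-frequency pixel.
    Returns the mean MTF value in each occupied radial-frequency bin.
    """
    mtf = np.abs(otf)
    ny, nx = mtf.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx//2, y - ny//2)                   # radial frequency (pixels)
    bins = (r / r.max() * (nbins - 1)).astype(int)
    counts = np.bincount(bins.ravel(), minlength=nbins)
    sums = np.bincount(bins.ravel(), weights=mtf.ravel(), minlength=nbins)
    valid = counts > 0
    return sums[valid] / counts[valid]

def cutoff_frequency(rmtf, freqs, threshold):
    """SFcMTF-style cutoff: highest frequency at which rMTF exceeds threshold.

    threshold may be a scalar or an array (e.g. a neural contrast threshold
    function sampled at `freqs`).
    """
    above = rmtf > threshold
    return float(freqs[above].max()) if above.any() else 0.0
```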

*SFcOTF* = *spatial frequency cutoff of the radially-averaged optical transfer function (rOTF)*, computed from the OTF expressed in polar coordinates *f* (frequency) and φ (orientation). Since the OTF is a complex-valued function, integration is performed separately for the real and imaginary components. Conjugate symmetry of the OTF ensures that the imaginary component vanishes, leaving a real-valued result. A graphical depiction of SFcOTF is shown in Figure A-9.

*AreaMTF* = *area of visibility for the rMTF (normalized to the diffraction-limited case)*, where *T*_{N} is the neural contrast threshold function, which equals the inverse of the neural contrast sensitivity function (Campbell & Green, 1965). When computing the area under the rMTF, phase-reversed segments of the curve count as positive area (Figure A-8). This is consistent with our definition of SFcMTF as the highest frequency for which the rMTF exceeds the neural threshold. This allows spurious resolution to be counted as beneficial when predicting visual performance for the task of contrast detection. Metrics based on the volume under the MTF have been used in studies of chromatic aberration (Marcos, Burns, Moreno-Barriusop, & Navarro, 1999) and visual instrumentation (Mouroulis, 1999).

*AreaOTF* = *area of visibility for the rOTF (normalized to the diffraction-limited case)*, where *T*_{N} is the neural contrast threshold function defined above. Since the domain of integration extends only to the cutoff spatial frequency, phase-reversed segments of the curve do not contribute to the area under the rOTF. This is consistent with our definition of SFcOTF as the lowest frequency for which the rOTF falls below the neural threshold. This metric would be appropriate for tasks in which phase-reversed modulations (spurious resolution) actively interfere with performance.

*SRMTF* = *Strehl ratio computed in the frequency domain (MTF method)*

*SROTF* = *Strehl ratio computed in the frequency domain (OTF method)*

*VSMTF* = *visual Strehl ratio computed in the frequency domain (MTF method)*, in which the MTF is weighted by the neural contrast sensitivity function CSF_{N}. In so doing, modulation at spatial frequencies above the visual cutoff of about 60 c/deg is ignored, and modulation near the peak of the CSF (e.g. 6 c/deg) is weighted maximally. It is important to note that this metric gives weight to the visible high spatial frequencies employed in typical visual acuity testing (e.g. 40 c/deg in 20/15 letters). The visual Strehl ratio computed by the MTF method is equivalent to the visual Strehl ratio for a hypothetical PSF that is well centered with even symmetry, computed as the inverse Fourier transform of the MTF (which implicitly assumes PTF = 0). Thus, in general, VSMTF is only an approximation to the visual Strehl ratio computed in the spatial domain (VSX).

*VSOTF* = *visual Strehl ratio computed in the frequency domain (OTF method)*

*VOTF* = *volume under the OTF normalized by the volume under the MTF*

*VNOTF* = *volume under the neurally-weighted OTF, normalized by the volume under the neurally-weighted MTF*

The polychromatic point-spread function PSF_{poly} is a weighted sum of the monochromatic spread functions PSF(*x,y,λ*). Given this definition, PSF_{poly} may be substituted for PSF in any of the equations given above to produce new, polychromatic metrics of image quality. In addition to these luminance metrics of image quality, other metrics can be devised to capture the changes in the color appearance of the image caused by ocular aberrations. For example, the chromaticity coordinates of a point source may be compared to the chromaticity coordinates of each point in the retinal PSF, and metrics devised to summarize the differences between image chromaticity and object chromaticity. Such metrics may prove useful in the study of color vision.

A polychromatic optical transfer function OTF_{poly} may be computed as the Fourier transform of PSF_{poly}. Substituting this new function for the OTF (and its magnitude for the MTF) in any of the equations given above will produce new metrics of polychromatic image quality defined in the frequency domain. Results obtained with these polychromatic metrics will be described in a future report.