It is now possible to routinely measure the aberrations of the human eye, but there is as yet no established metric that relates aberrations to visual acuity. A number of metrics have been proposed and evaluated, and some perform well on particular sets of evaluation data. But these metrics are not based on a plausible model of the letter acuity task and may not generalize to other sets of aberrations, other data sets, or to other acuity tasks. Here we provide a model of the acuity task that incorporates optical and neural filtering, neural noise, and an ideal decision rule. The model provides an excellent account of one large set of evaluation data. Several suboptimal rules perform almost as well. A simple metric derived from this model also provides a good account of the data set.

Acuity is expressed as LogMAR = log_{10}(*h*/5), where *h* is Sloan letter height in minutes of arc.

Each Zernike mode is identified by an order *n* and a frequency *f* and is written *Z*_{n}^{f}. We will also sometimes use the list notation {*n*, *f*} or, when associated with a coefficient, {*n*, *f*, *c*}. Where several modes are present, we represent them as a list of lists, *Z* = {{*n*_{1}, *f*_{1}, *c*_{1}}, {*n*_{2}, *f*_{2}, *c*_{2}}, …}. Defocus and astigmatism are determined by the second order modes {2, 0} and {2, ±2}, respectively. The reader is referred to Thibos, Hong, et al. (2002) for a more detailed discussion of the Zernike polynomials.
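The list notation above maps directly onto code. As a small illustration (the helper name is ours, not the authors'), modes can be stored as {*n*, *f*, *c*} triples, and the single-index mode ordering of the ANSI Z80.28 standard can be computed as j = (n(n + 2) + f)/2:

```python
def osa_index(n, f):
    """OSA/ANSI single-index ordering (ANSI Z80.28): j = (n(n + 2) + f) / 2."""
    return (n * (n + 2) + f) // 2

# A wavefront with defocus {2, 0} and astigmatism {2, -2}, with coefficients
# in micrometers, represented as a list of (n, f, c) triples:
Z = [(2, 0, 0.25), (2, -2, 0.10)]

indices = [osa_index(n, f) for n, f, _ in Z]   # defocus Z_2^0 has index 4
```

Because *n* and *f* always have the same parity, the division is exact.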

A convenient aggregate measure of the magnitude of an aberration is the equivalent defocus *M*_{e}, measured in diopters and given by *M*_{e} = 4π√3 RMS/*A*, where RMS is the root mean square wavefront error in μm and *A* is the pupil area in mm^{2} (Thibos, Hong, et al., 2002).
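As a concrete check on this definition (a sketch; the function name is ours), an RMS wavefront error of 0.25 μm over a 5-mm pupil corresponds to roughly 0.28 D of equivalent defocus:

```python
import math

def equivalent_defocus(rms_um, pupil_diameter_mm):
    """Equivalent defocus in diopters: M_e = 4*pi*sqrt(3)*RMS / A,
    with RMS wavefront error in micrometers and pupil area A in mm^2
    (Thibos, Hong, et al., 2002)."""
    r = pupil_diameter_mm / 2.0
    area = math.pi * r ** 2
    return 4.0 * math.pi * math.sqrt(3.0) * rms_um / area
```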

Stimuli were presented in monochromatic light at 556 nm. Appropriate optics were used to ensure that specific controlled aberrations could be introduced. The test objects were Sloan letters. Four observers participated. To simplify matters, the observers are identified throughout by color names (Red, Green, Blue, Brown) and in figures by the corresponding color. Two observers (Red and Green) viewed a set of 45 aberrations; the other two (Blue and Brown) viewed a different set of 22 aberrations.

| Observer | Low-order mode | High-order mode |
|---|---|---|
| Green and Red | {2, −2} | {4, −2} |
| | {2, 2} | {4, 2} |
| Brown and Blue | {2, 0} | {4, 0} |

The letter image, filtered by the optics and by the neural transfer function, constitutes the *neural image*, which is then perturbed by additive noise. In our simulations, the noise was always zero-mean Gaussian white noise. The noisy neural image is then compared to a set of template images, one for each candidate letter, and the closest match is selected.

The generalized pupil image is given by *P*(*x*, *y*) exp[*i*(2π/λ)*W*(*x*, *y*)], where *W*(*x*, *y*) is the wavefront aberration function and *P*(*x*, *y*) is the pupil aperture image, defined as 1 within the pupil and 0 elsewhere. The point-spread image is then computed as the squared modulus of the Fourier transform of the generalized pupil image. To obtain a desired resolution of the point-spread image, the generalized pupil image may first be embedded in a larger image of zeros. The OTF is obtained as the discrete Fourier transform (DFT) of the point-spread image. The letter image is then convolved with the point-spread image to obtain the retinal letter image. This convolution is implemented by multiplication of the OTF and the DFT of the letter image, followed by an inverse DFT. In all of the simulations in this paper, the pupil diameter is set to 5 mm, the value used to compute the images in the experiment of Cheng, Bradley, et al. (2004).^{1}
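The optical pipeline above can be sketched in a few lines. This is a schematic reconstruction under assumed grid sizes, not the authors' code; the wavefront *W* is expressed in units of wavelength, and an aberration-free wavefront is used so that the result is the diffraction-limited point-spread image:

```python
import numpy as np

N = 64                      # samples across the pupil plane
pad = 2 * N                 # embed in zeros to refine PSF resolution
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
P = (x ** 2 + y ** 2 <= 1.0).astype(float)   # pupil aperture: 1 inside, 0 outside
W = np.zeros((N, N))                         # aberration-free wavefront (in waves)

# Generalized pupil image: P(x, y) * exp(i * 2*pi * W(x, y))
gp = P * np.exp(1j * 2 * np.pi * W)

# Embed in a larger zero image; PSF = |FFT|^2, normalized to unit volume
big = np.zeros((pad, pad), dtype=complex)
big[:N, :N] = gp
psf = np.abs(np.fft.fft2(big)) ** 2
psf /= psf.sum()

# OTF is the DFT of the PSF; convolution of a letter with the PSF is
# multiplication of the OTF with the letter's DFT, followed by an inverse DFT
otf = np.fft.fft2(psf)
letter = np.zeros((pad, pad))
letter[60:68, 60:68] = 1.0                   # toy stand-in for a letter image
retinal = np.real(np.fft.ifft2(otf * np.fft.fft2(letter)))
```

Because the PSF is normalized, the OTF has unit DC gain and the convolution conserves the total luminance of the letter image.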

We introduce a parameter *ϕ*, the frequency scale, that multiplies the two parameters *f*_{0} and *f*_{1} of the SCSF (Watson & Ahumada, 2005). This shifts the SCSF horizontally in the log-log coordinates of Figure 4, which in turn shifts the NTF to higher values. Higher values of the frequency scale correspond to higher values of acuity. Examples of the SCSF and NTF for a frequency scale of *ϕ* = 2 are shown in pink and light blue, respectively, in Figure 4. Unless otherwise noted, *ϕ* = 1.
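The effect of the frequency scale can be demonstrated with any CSF that depends on frequency only through *f*/*f*_{0} and *f*/*f*_{1}: multiplying both parameters by *ϕ* is equivalent to evaluating the original function at *f*/*ϕ*, a rigid horizontal shift by log *ϕ* on a log frequency axis. The functional form and parameter values below are placeholders, not the actual SCSF of Watson & Ahumada (2005):

```python
import math

def sech(z):
    return 1.0 / math.cosh(z)

def csf(f, gain=400.0, f0=4.0, f1=1.0, a=0.85, p=0.8):
    # placeholder CSF shape: high-frequency falloff governed by f0,
    # low-frequency attenuation governed by f1 (illustrative values only)
    return gain * (sech((f / f0) ** p) - a * sech(f / f1))

def csf_scaled(f, phi, f0=4.0, f1=1.0):
    # the frequency scale phi multiplies both frequency parameters
    return csf(f, f0=f0 * phi, f1=f1 * phi)

# csf_scaled(phi * f, phi) equals csf(f): a pure horizontal log-axis shift.
```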

Spatial uncertainty is represented by an uncertainty function *u*(*x*). We considered varying amounts of spatial uncertainty but report results only for two special cases: no uncertainty and complete uncertainty.

For the ideal matching rule, the discriminant for each candidate letter is

*g*_{j} = max_{x} *u*(*x*) [(*s*_{k} + *n*) ⊗ *s*_{j}](*x*) − ½|*s*_{j}|^{2},

where *s*_{k} and *s*_{j} are the sample and the candidate letter neural images, respectively, ⊗ is the cross-correlation operator, and *n* is the Gaussian noise with standard deviation *σ*. This quantity is computed for each candidate letter *j*, and the value of *j* for which *g*_{j} is largest identifies the letter. A summary of model notation is provided in a table in the Appendix.

A related suboptimal rule sets the templates equal to the neural images (*t*_{j} = *s*_{j}), in which case the discriminant for this minimum distance rule is given by

*g*_{j} = max_{x} [*s*_{n} ⊗ *t̄*_{j}](*x*),

where *t̄*_{j} is the normalized template for the letter indexed by *j*. As discussed below, the template may be the aberrated neural image, the original letter, or a diffraction-limited neural image. Note that the result of cross-correlating the test and candidate images is itself an image, in which the value at each pixel reflects the correspondence of the two images when one is shifted by the coordinates of that pixel. Taking the maximum selects the value at the shift with the greatest correspondence. Thus, this rule also accommodates spatial uncertainty, although the uncertainty here is uniform over the image. An uncertainty function could be introduced here, as in the ideal matching rule, but we have not done so.

| Identifier | Rule | Templates | Uncertainty |
|---|---|---|---|
| ID | Ideal | Aberrated | Zero |
| IU | Ideal | Aberrated | Infinite |
| DA | Distance | Aberrated | Infinite |
| XA | Cross-correlation | Aberrated | Infinite |
| XD | Cross-correlation | Diffraction limited | Infinite |
| XL | Cross-correlation | Letters | Infinite |

The resulting psychometric functions were fit with a Weibull function with a *β* (slope) of 4 and a *γ* (lower asymptote) of 0.1 (Watson & Solomon, 1997). The value of *β* = 4 was determined from a preliminary simulation, as described in the Appendix. From the fit, acuity was defined as the value of LogMAR at which probability correct was *P* = 0.66. An example of one simulation is shown in Figure 5. This acuity estimation procedure was applied to each of the 67 wavefront aberration conditions used by Cheng, Bradley, et al. (2004).
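The fitting step can be sketched as follows, assuming the common Weibull form *P*(*s*) = 1 − (1 − *γ*) exp[−(*s*/*α*)^{β}] with strength *s* in arbitrary positive units; the threshold parameter *α* and the bisection search are illustrative, not the authors' implementation:

```python
import math

def weibull(strength, alpha, beta=4.0, gamma=0.1):
    """Weibull psychometric function: lower asymptote gamma, slope beta."""
    return 1.0 - (1.0 - gamma) * math.exp(-((strength / alpha) ** beta))

def acuity_threshold(alpha, beta=4.0, gamma=0.1, criterion=0.66,
                     lo=1e-6, hi=10.0):
    """Strength at which probability correct reaches the criterion,
    found by bisection (weibull is monotone increasing in strength)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if weibull(mid, alpha, beta, gamma) < criterion:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With *γ* = 0.1 and *β* = 4, the criterion *P* = 0.66 lies very near the Weibull threshold parameter itself, since *P*(*α*) = 1 − 0.9/e ≈ 0.669.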

(For comparison, the acuity criterion used by Cheng, Bradley, et al., 2004, was *P* = 0.55.)

Each model has a single parameter: the neural noise standard deviation *σ*_{n}. We explored various shortcuts to estimating *σ*_{n} but ultimately determined that exhaustive testing of a range of alternative values was required. Thus, for each model, we tested a sequence of values of *σ*_{n} that bracketed the best fitting value.

We estimated the noise parameter *σ*_{n} separately for each observer. In Figure 6, we plot for each model the RMS error for each observer as a function of *σ*_{n}. For all of these results, the frequency scale was *ϕ* = 1. Note that each point in this figure is based on 128 Monte-Carlo trials at each of 67 aberration conditions. From repeated measures of a number of the conditions (see model XD in Figure 6), we estimate the standard deviation of the RMS values to be 0.0024 LogMAR.

For each model, we then estimated the best fitting value of *σ*_{n}. In Figure 8, we plot these estimates as a function of model, with each observer again represented by their designated color. The observers differ in their estimated values of *σ*_{n}. In general, these estimates are ordered in agreement with the empirical differences in sensitivity: Red is the most acute observer, and Green and Blue are the least acute (see Figure 1).

We next examined the effect of the frequency scale *ϕ* (see Frequency scale section). Initially, we explored this parameter with the ID model, using the fast method for which it is an exact solution. Each condition was simulated with 512 trials. For each model and observer, the results at a broad range of noise levels were analyzed as in Figure 6 to estimate the minimum RMS error. The results are shown in Figure 10.

The minimum RMS error does not occur at *ϕ* = 1 for any observer. For observers Red and Brown, the optimum is near *ϕ* = 2; for observers Green and Blue, it is near *ϕ* = 1.3.

We then simulated the remaining models at *ϕ* = 1.5 and 2. These results are shown in Figure 11, along with the earlier results at *ϕ* = 1 and the ID results from Figure 10. Note that the ID results (black points) are more accurate, since they are derived from 512 trials per condition, while the other model results are based on only 128 trials per condition.

All models behave similarly as a function of *ϕ*, and the best fitting frequency scale appears to be approximately the same for all models. In subsequent fits, we used *ϕ* = 1.32 for observers Blue and Green and *ϕ* = 2 for observers Red and Brown.

Predicted acuities, computed with the best fitting *ϕ*, are compared with observed acuities in Figure 12. The correlation between the two sets of values is 0.913. For comparison, the best correlation reported by Cheng, Bradley, et al. (2004) was 0.85. The total RMS error is 0.056 LogMAR. Cheng, Bradley, et al. (2004) did not report RMS error for their metrics, since they did not attempt prediction of absolute LogMAR values. Correlation coefficients and RMS values for the group and for the individual observers are shown in Table 3.

| Observer | ϕ | RMS | r |
|---|---|---|---|
| Group | | 0.056 | 0.913 |
| Blue | 1.32 | 0.046 | 0.933 |
| Green | 1.32 | 0.048 | 0.905 |
| Red | 2 | 0.074 | 0.868 |
| Brown | 2 | 0.034 | 0.971 |
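The statistics in Table 3 are ordinary Pearson correlations and RMS differences between predicted and observed LogMAR values. They can be computed as follows (the data below are made up for illustration):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def rms_error(xs, ys):
    """Root mean square difference between predicted and observed values."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

# Illustrative vectors of observed and predicted acuity (LogMAR):
observed = [0.0, 0.1, 0.2]
predicted = [0.1, 0.2, 0.3]
```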

Predicted and observed acuities for each aberration condition, using the best fitting *ϕ*, are pictured for the ID model in Figure 13, which shows graphically how well the model tracks the variations in acuity with aberration.

Consider the set of neural images *s*_{j}, where *j* indexes the individual Sloan letters. We then consider a matrix *r*_{j,k} consisting of the dot products of each neural image with each other, normalized by the modulus of each neural image. Each entry is perturbed by independent Gaussian noise with standard deviation *σ* (in fact, if the noise is derived from noise at the input, there would be correlations, but we ignore that here for simplicity). If the letter with index *j* is presented, then the observer selects the entry in the *j*th row that is largest. The probability that the correct column is selected is equal to the probability that its entry is larger than each of the incorrect entries. To compute this probability, it is useful to first compute the difference between each column entry (*r*_{j,k}) and the one corresponding to the correct answer (*r*_{j,j}) and to divide these differences by the standard deviation *σ*. The probability correct is then obtained by computing, for each possible value *x*, the probability that the correct entry equals *x* and that all the other entries are less than *x*, and integrating over all possible values of *x*. In practice, the integral may be taken over the range {−3, 3} without great loss in accuracy. This calculation is illustrated in Figure 15.
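A direct numerical transcription of this calculation (our notation): given the standardized differences d_k = (r_{j,j} − r_{j,k})/σ for the J − 1 incorrect letters, integrate the normal density of the correct entry times the product of the normal distribution functions of the incorrect entries:

```python
import math

def phi(x):
    """Standard normal probability density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_correct(d, lo=-3.0, hi=3.0, steps=2000):
    """P(correct) = integral of phi(x) * prod_k Phi(x + d_k) dx,
    by the trapezoid rule over the range {-3, 3}, as in the text."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        p = phi(x)
        for dk in d:
            p *= Phi(x + dk)
        total += w * p
    return total * h
```

As a sanity check, when all differences are zero the ten entries are exchangeable and the probability correct is 1/10; when the differences are large the probability approaches 1, limited only by the truncation of the integral.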

The metric has two parameters: the frequency scale *ϕ* and the noise standard deviation *σ*. We observed that a range of parameter values gave good fits. The best fit was obtained at *ϕ* = 2 and *σ* = 0.38, at which point the RMS error was 0.07. The RMS error is plotted against frequency scale in Figure 17; in this figure, each point may have a different value of the noise parameter *σ*. Note that the optimal value of the frequency scale *ϕ* is roughly the same as that estimated by the model for our two more acute observers (Red and Brown).

Two of the observers (Red and Brown) were best fit by a higher frequency scale (*ϕ* = 2) than the other two observers (*ϕ* = 1.32). Both sets of observers were more acute than our “standard” observer (*ϕ* = 1). It is perhaps not surprising that observers should differ in this regard, given the known variations in cone density (Roorda & Williams, 1999). The standard observer, moreover, was derived from a population of 16 observers of unknown age and with uncertain optical aberrations (Watson & Ahumada, 2005). We must also acknowledge that the data from Cheng, Bradley, et al. (2004) used here come from only four observers, and we do not know where they lie relative to the larger population.

In the simulations, a discriminant for the noisy neural image *v* = *s*_{k} + *n* must be computed for each candidate letter. Because the terms that do not involve the template are the same for every *j*, it is sufficient to use a discriminant based on the dot product of *v* with each template, where *t*_{j,k}(*x*) indicates template *j*, unshifted or shifted optimally for neural image *k*. Here it is not necessary to compute the actual noise image *n*(*x*) or its dot products with the templates; instead, we create *J* random deviates that have the same joint distribution as those dot products. Specifically, consider the matrix *C* consisting of the correlations of each neural image with each shifted template, and its singular value decomposition *C* = *U* *D* *V*^{T}. The *J* noise samples may then be generated as *U* *D*^{1/2} *z*, where *z* is a vector of *J* independent unit normal deviates.
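A sketch of this shortcut, assuming the covariance of the template dot products equals *σ*^{2}*C*, where *C* is the template correlation matrix (random templates and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
J, npix = 10, 256
T = rng.normal(size=(J, npix))   # rows: (shifted) templates, as flat vectors
sigma = 0.5
C = T @ T.T                       # correlation matrix of the templates

# For symmetric positive semidefinite C, the SVD gives C = U diag(d) U^T,
# so A = U diag(sqrt(d)) satisfies A @ A.T == C.
U, d, Vt = np.linalg.svd(C)
A = U * np.sqrt(d)

def noise_deviates():
    """J correlated deviates with the same distribution as T @ n, where n
    is a white Gaussian noise image with standard deviation sigma."""
    z = rng.normal(size=J)        # J independent unit normal deviates
    return sigma * (A @ z)
```

This replaces a dot product over every pixel of a full-resolution noise image with a single J-vector draw per trial.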

The simulations require an assumed value of the slope *β* of the psychometric function. To determine an appropriate value, we conducted 1024 simulated trials of the ideal observer model with zero uncertainty (ID) on the diffraction-limited condition (*Z* = {}). We used a version of QUEST that scatters the trials somewhat about threshold. Strength is measured in units of LogMAR/20. The Weibull fit yields *β* = 3.99. We have conducted additional simulations for other conditions and uncertainties, with similar results (*β* between 3 and 5). In subsequent simulations, we used a value of *β* = 4 in the QUEST procedure and in subsequent fitting of the data.

| Symbol | Description |
|---|---|
| *g*_{k} | discriminant for letter *k* |
| *t*_{k}(*x*) | template for letter *k* |
| *t̄*_{k}(*x*) | normalized template for letter *k* |
| *s*_{n}(*x*) | noisy neural image |
| ⊗ | cross-correlation operator |
| *n*(*x*) | noise image |
| *s*_{k}(*x*) | neural image for letter *k* |
| *σ* | neural noise standard deviation |
| *u*(*x*) | spatial uncertainty function |
| *ϕ* | frequency scale |
| *f* | normal probability density |
| *F* | normal probability distribution |

^{1}Cheng, Bradley, et al. (2004) computed images corresponding to a 5-mm pupil but had the observers view them through a 2.5-mm pupil in order to minimize intrusion of the observer's own aberrations. They state that they compensated the calculated images for the effects of the 2.5-mm pupil. However, the 2.5-mm pupil zeros frequencies beyond a limit of about 78.5 cycles/deg (half the limit passed by the 5-mm pupil), and for these frequencies no compensation is possible. Thus, the images we compute contain energy above 78.5 cycles/deg that was not present in the images seen by the Cheng, Bradley, et al. (2004) observers. However, this discrepant energy is always less than 0.5% of the contrast energy of each letter and is effectively removed by our neural transfer function, which at this frequency limit is less than 1% of its maximum.
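The 78.5 cycles/deg figure quoted above is the incoherent diffraction cutoff of a circular pupil, *d*/λ cycles per radian converted to cycles per degree. A short check (the function name is ours):

```python
import math

def cutoff_cpd(pupil_diameter_mm, wavelength_nm):
    """Incoherent diffraction cutoff of a circular pupil in cycles/deg:
    d / lambda cycles per radian, times pi/180 radians per degree."""
    cycles_per_radian = (pupil_diameter_mm * 1e-3) / (wavelength_nm * 1e-9)
    return cycles_per_radian * math.pi / 180.0
```

A 2.5-mm pupil at 556 nm gives about 78.5 cycles/deg, exactly half the roughly 157 cycles/deg passed by the 5-mm pupil.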

*American national standard for ophthalmics—Methods for reporting optical aberrations of eyes*. (ANSI Z80.28.)

*Journal of Cataract and Refractive Surgery*, 29, 1487–1495.

*Journal of the Optical Society of America A, Optics and Image Science*, 7, 1374–1381.

*Journal of Vision*, 1, (1):1, 1–8, http://journalofvision.org/1/1/1/, doi:10.1167/1.1.1.

*Vision Research*, 11, 459–474.

*Journal of Physiology*, 181, 576–593.

*Journal of Vision*, 4, (4):3, 272–280, http://journalofvision.org/4/4/3/, doi:10.1167/4.4.3.

*Journal of Vision*, 4, (4):7, 310–321, http://journalofvision.org/4/4/7/, doi:10.1167/4.4.7.

*Investigative Ophthalmology & Visual Science*, 45, 351–360.

*Pattern classification and scene analysis*. New York: John Wiley.

*Journal of Experimental Psychology: Human Perception and Performance*, 10, 655–666.

*A basic program on reading*.

*Optometry and Vision Science*, 80, 36–42.

*Vision Research*, 33, 15–20.

*Journal of Vision*, 4, (4):8, 322–328, http://journalofvision.org/4/4/8/, doi:10.1167/4.4.8.

*Vision Research*, 39, 367–372.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 20, 1371–1381.

*Vision Research*, 31, 1399–1415.

*Journal of Vision*, 4, (12):12, 1136–1169, http://journalofvision.org/4/12/12/, doi:10.1167/4.12.12.

*Nature*, 397, 520–522.

*Nature*, 369, 395–397.

*Journal of Vision*, 4, (4):9, 329–351, http://journalofvision.org/4/4/9/, doi:10.1167/4.4.9.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 19, 2329–2348.

*Journal of Vision*, 5, (9):6, 717–740, http://journalofvision.org/5/9/6/, doi:10.1167/5.9.6.

*Society for Information Display Digest of Technical Papers*, 20, 360–363.

*Perception & Psychophysics*, 33, 113–120.

*Spatial Vision*, 10, 447–466.

*The mathematica book*. Champaign, IL: Wolfram Media.