**Contrast sensitivity (CS) is widely used as a measure of visual function in both basic research and clinical evaluation. There is conflicting evidence on the extent to which measuring the full contrast sensitivity function (CSF) offers more functionally relevant information than a single measurement from an optotype CS test, such as the Pelli–Robson chart. Here we examine the relationship between functional CSF parameters and other measures of visual function, and establish a framework for predicting individual CSFs with effectively a zero-parameter model that shifts a standard-shaped template CSF horizontally and vertically according to independent measurements of high contrast acuity and letter CS, respectively. This method was evaluated for three different CSF tests: a chart test (CSV-1000), a computerized sine-wave test (M&S Sine Test), and a recently developed adaptive test (quick CSF). Subjects were 43 individuals with healthy vision or impairment too mild to be considered low vision (acuity range of −0.3 to 0.34 logMAR). While each test demands a slightly different normative template, results show that individual subject CSFs can be predicted with roughly the same precision as test–retest repeatability, confirming that individuals predominantly differ in terms of peak CS and peak spatial frequency. In fact, these parameters were sufficiently related to empirical measurements of acuity and letter CS to permit accurate estimation of the entire CSF of any individual with a deterministic model (zero free parameters). These results demonstrate that in many cases, measuring the full CSF may provide little additional information beyond letter acuity and contrast sensitivity.**

*SD* = 10.55) were designated as having healthy vision with no history of ophthalmic disease, and 14 of the 43 (mean age = 57.71 years, *SD* = 16.64) had a prior history of some type of ophthalmic disease (seven cataracts, two glaucoma, one ocular hypertension, three age-related macular degeneration, and one amblyopia). Despite the prior history and older age of some individuals, this group collectively had a relatively mild degree of functional impairment, evidenced by the fact that all subjects had better than 20/40 vision except a single individual with 20/50 acuity. All subjects gave written informed consent in accordance with the study procedures approved by the institutional review board at the Western University of Health Sciences, and all research was done in adherence to the tenets of the Declaration of Helsinki. All examinations were performed binocularly and were repeated on the same or the following day, with the exception of two participants who returned within 1 week of the first examination. The order in which the examinations were completed was determined pseudorandomly for each subject with a random number generator.

*S*(*f*) is defined by the standard truncated log-parabola (Equations 1 and 2), where *S*′(*f*) defines the three-parameter log-parabola model in Equation 1 and the rules for low-frequency truncation are specified in Equation 2. The model has four parameters: peak CS (CS_max; units: CS); peak SF (sf_max; units: c/°); bandwidth (*β*; units: c/°), describing the width of the parabola; and truncation (*δ*; units: logCS), describing the plateau of the function at the lowest SFs (Figure 1a). Curve fitting was performed in MATLAB using a built-in nonlinear least-squares regression algorithm (nlinfit.m). Parameterizing the CSF according to this function also permits estimation of the area under the log CSF (AULCSF), a summary statistic that quantifies the entire range of contrast visibility (Applegate, Hilmantel, & Howland, 1997), as well as the high-SF cutoff value (the SF at which threshold is 100% contrast and therefore logCS = 0). To evaluate the quality of the log-parabola model fits, we compute the root-mean-squared error (RMSE) between the recorded data and model estimates evaluated at the same SFs (see Equation 8).
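The fitting step can be sketched in Python rather than the paper's MATLAB (nlinfit.m). The functional form below is a reconstruction of the standard four-parameter truncated log-parabola from the parameter definitions above, not the authors' code, and the demo values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_csf(f, log_cs_max, log_sf_max, beta, delta):
    """Truncated log-parabola: log10 CS at spatial frequency f (c/deg).

    A reconstruction consistent with the text: a log-parabola peaking at
    (sf_max, CS_max) with width governed by beta, flattened at low SFs to
    a plateau delta log units below the peak.
    """
    k = np.log10(2.0)
    s_prime = log_cs_max - k * ((np.log10(f) - log_sf_max)
                                / (np.log10(2.0 * beta) / 2.0)) ** 2
    plateau = log_cs_max - delta
    return np.where((np.log10(f) < log_sf_max) & (s_prime < plateau),
                    plateau, s_prime)

# Nonlinear least-squares fit to frequency-specific logCS measurements
freqs = np.array([1.5, 3.0, 6.0, 12.0, 18.0])        # chart SFs (c/deg)
data = log_csf(freqs, 1.8, np.log10(4.0), 2.0, 0.5)  # noiseless demo data
popt, _ = curve_fit(log_csf, freqs, data, p0=[1.7, 0.6, 2.0, 0.4])

# AULCSF: trapezoidal area under the fitted log CSF over log10(SF)
grid = np.logspace(np.log10(1.5), np.log10(18.0), 200)
vals = log_csf(grid, *popt)
aulcsf = float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(np.log10(grid))))
```

Integrating over log10(SF), as here, makes `aulcsf` a direct analogue of the AULCSF summary statistic described above.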

*M* = 1.57, *SD* = 0.25) to retest (*M* = 1.59, *SD* = 0.25). In terms of high-contrast visual acuity, logMAR values for distance acuity ranged from −0.3 (20/10) to 0.34 (20/45), and near acuity ranged from −0.2 (20/12.5) to 0.3 (20/40). The median for both near and far acuity was −0.06 logMAR (slightly better than 20/20 vision), indicating a sample population with generally little to no deficit in high-contrast visual acuity.

There was a significant effect of test type for peak CS (CS_max), *F*(2, 126) = 39.4, *p* < 0.001, due primarily to much lower values for the qCSF in comparison to the other two tests. There was a significant effect for peak SF (sf_max), *F*(2, 126) = 127.8, *p* < 0.001, due to significantly higher values for the CSV-1000 test. The effect for bandwidth (*β*) was statistically significant, *F*(2, 126) = 29.2, *p* < 0.001, and the effect for low-SF truncation (*δ*) was also significant, *F*(2, 126) = 46.2, *p* < 0.001, with the Sine test showing significantly higher values and more variance than the other two tests. The AULCSF also differed significantly among the test types, *F*(2, 126) = 61.6, *p* < 0.001; in particular, the qCSF showed much lower AULCSF measurements than the other two tests. These analyses highlight the fact that the shape of the CSF differs systematically as a function of test type (Moseley & Hill, 1994; Woods & Wood, 1995) and help to frame these differences in terms of interpretable functional parameters.
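The degrees of freedom reported above, *F*(2, 126), are consistent with a one-way comparison of three test types across 43 subjects (3 − 1 = 2 between, 3 × 43 − 3 = 126 within). A minimal sketch of such a comparison, using simulated values rather than the study's data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Illustrative peak-CS estimates (logCS) for 43 subjects on each of the
# three CSF tests; the qCSF group is given a lower mean, mirroring the
# direction of the reported effect. All numbers are simulated assumptions.
csv1000 = rng.normal(2.0, 0.2, size=43)
sine = rng.normal(2.0, 0.2, size=43)
qcsf = rng.normal(1.6, 0.2, size=43)

# One-way ANOVA across test type: df = (3 - 1, 3 * 43 - 3) = (2, 126)
F, p = f_oneway(csv1000, sine, qcsf)
```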

*t*-tests reveals that only peak CS, *t*(246) = 5.38, *p* < 0.001, and peak SF, *t*(246) = 2.51, *p* = 0.012, had significantly worse RMSE values. Fixing bandwidth had a marginal impact, *t*(246) = 1.72, *p* = 0.09, and fixing truncation had a minimal impact on the RMSE of model fits, *t*(246) = 0.98, *p* = 0.33. The fact that model fitting is robust to cases in which select parameters are fixed to the group mean supports the idea that a template CSF may be adapted from group data to account for the global shape of the CSF and then adjusted according to just the two most relevant parameters (i.e., peak CS and peak SF) to provide an accurate fit of individual data (Chung & Legge, 2016).

CS_max). We found a statistically significant correlation between letter CS and peak CS for two of the three tests, as shown in Figure 3. Far acuity had a significant negative correlation with peak SF and high-SF cutoff values for the Sine and qCSF tests but, again, not for the CSV-1000. While the CSV-1000 produced parameter estimates that were not significantly correlated with acuity and letter CS, the relationship between peak CS and far acuity trended in the expected direction.

*n* is the total number of subjects, subscript *i* represents the *i*th subject in the group, and subscript T indicates the parameter set defining the CSF template. Template parameters for peak CS and peak SF are not fixed to the group mean but instead vary according to independent measurements of letter CS and far visual acuity, respectively, following linear models in which *x*_letterCS and *x*_acuity represent empirical measurements of performance on the Pelli–Robson letter CS and ETDRS high-contrast acuity tests, *m* is the slope of the best-fitting linear function, and *c* is the intercept term. Least-squares regression is used to estimate the slope and intercept terms from group data, and hence to predict the template parameters of peak CS and peak SF for any subject not contained within the normative group data set. As such, the entire predictive model is determined by just six parameters: *β*_T, *δ*_T, and the slope and intercept terms of the two linear models.
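The two linear maps can be estimated by ordinary least squares on the normative group. The sketch below uses simulated normative data; every numeric value (slopes, intercepts, means, noise levels) is an invented assumption for illustration, not a value from the study or its Table 5.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 43
# Simulated normative measurements (illustrative assumptions only)
letter_cs = rng.normal(1.6, 0.15, n)    # letter CS scores (logCS)
acuity = rng.normal(-0.05, 0.10, n)     # far acuity (logMAR)
peak_cs = 0.9 * letter_cs + 0.6 + rng.normal(0.0, 0.05, n)  # fitted CS_max (logCS)
peak_sf = -1.2 * acuity + 0.7 + rng.normal(0.0, 0.05, n)    # fitted sf_max (log10 c/deg)

# Least-squares estimates of the two linear maps (slope m, intercept c)
m_cs, c_cs = np.polyfit(letter_cs, peak_cs, 1)
m_sf, c_sf = np.polyfit(acuity, peak_sf, 1)

def predict_template_peaks(x_letter_cs, x_acuity):
    """Predict a new subject's template peak CS and peak SF from the two
    independent chart measurements (zero free parameters per subject)."""
    return m_cs * x_letter_cs + c_cs, m_sf * x_acuity + c_sf
```

Once `m` and `c` are estimated from group data, predicting a new subject's template peaks requires only that subject's two chart scores.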

In Equation 8, *n* is the number of measured frequency-specific data points (i.e., at 1.5, 3.0, 6.0, 12.0, and 18.0 c/°), *f* is an integer index for stepping through SF levels, *x*(*f*) represents the actual measured data, *S*(*f*) represents frequency-specific values of the fitted CSF function (see Equations 1 and 2), and *S*_template(*f*) represents frequency-specific values estimated from the predictive model (see Equations 3–5).
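Given these definitions, a plausible reconstruction of Equation 8 and its template-based counterpart (assuming the standard root-mean-squared-error form) is:

```latex
\mathrm{RMSE}_{\mathrm{fitted}} =
  \sqrt{\frac{1}{n}\sum_{f=1}^{n}\bigl[x(f) - S(f)\bigr]^{2}},
\qquad
\mathrm{RMSE}_{\mathrm{predicted}} =
  \sqrt{\frac{1}{n}\sum_{f=1}^{n}\bigl[x(f) - S_{\mathrm{template}}(f)\bigr]^{2}}
```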

RMSE_predicted. For example, two-sample *t*-tests revealed no significant difference (all *p*s > 0.20 for each CSF test type) in RMSE_predicted between subjects with a prior history of ophthalmologic disease (OD; *n* = 14) and subjects without a prior history (HV; *n* = 29). There are several examples of low- and high-error predictions within each subgroup, as shown by the organization of rows in Figure 4.

RMSE_test-retest (the consistency of the test on repeated measurements), which would determine whether they are within the same range of reliability as the instrument or test itself, and RMSE_fitted (with four free parameters to vary), which provides an effective upper bound on how well the truncated log-parabola function can fit the data in the first place. The scatterplot in Figure 5a shows that all values of RMSE_predicted fall above the unity line with RMSE_fitted, highlighting the fact that the ability to predict individual performance with this model is constrained inherently by the quality of fit of the full four-parameter CSF model. In other words, individuals with high RMSE_fitted will necessarily have greater RMSE_predicted values, because the zero-free-parameter model is couched in the same parametric form (i.e., the truncated log parabola), and that form fits a given subject only so well in the first place. The variability in the ability of the full CSF model to fit individual subjects is clearly a driving factor in explaining variability in the performance of the predictive model.

RMSE_predicted and RMSE_test-retest values are uncorrelated across individuals and in fact have similar underlying distributions. For example, for the Sine test the mean RMSE_test-retest across individuals was 0.25 (*SD* = 0.13) and the mean RMSE_predicted was 0.23 (*SD* = 0.08); a *t*-test revealed no significant difference between these distributions, *t*(42) = 1.02, *p* = 0.32. Likewise, for the CSV-1000 the mean RMSE_test-retest was 0.15 (*SD* = 0.09) and the mean RMSE_predicted was 0.14 (*SD* = 0.07); a *t*-test again revealed no significant difference, *t*(42) = 0.45, *p* = 0.65. For the qCSF test the mean RMSE_test-retest was 0.11 (*SD* = 0.07) and the mean RMSE_predicted was 0.14 (*SD* = 0.08); here a *t*-test revealed marginal statistical significance, *t*(42) = −2.0, *p* = 0.051. Even accounting for the fact that error is introduced to model predictions directly via errors in parametric function fitting, as demonstrated in Figure 5a, this result demonstrates that predictive errors are nonetheless within the same range as measurement errors assessed on test and retest (Figure 5c).

*not* achieved via optimized least-squares fitting of the parametric CSF model to individual-subject data. In theory, the CSF for any number of future subjects can be predicted with this same model by simply collecting measurements of letter CS and high-contrast acuity under similar testing conditions and plugging them into Equations 3–5. Of course, for future application a larger set of normative data would be desirable to produce even more precise estimates of the test-specific CSF templates. For reference, the six-parameter template CSF values for each test type are reported in Table 5.
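Plugging the two chart measurements into Equations 3–5 amounts to the following sketch. The template values and linear-map coefficients below are placeholders (the real ones would come from Table 5 and the normative regression), and the truncated log-parabola form is a standard reconstruction, not the authors' code.

```python
import numpy as np

def predicted_csf(freqs, x_letter_cs, x_acuity,
                  m_cs, c_cs, m_sf, c_sf, beta_t, delta_t):
    """Zero-free-parameter CSF: a template shifted vertically by letter CS
    and horizontally by acuity (a sketch of Equations 3-5, reconstructed)."""
    log_cs_max = m_cs * x_letter_cs + c_cs   # vertical shift from letter CS
    log_sf_max = m_sf * x_acuity + c_sf      # horizontal shift from acuity
    k = np.log10(2.0)
    s = log_cs_max - k * ((np.log10(freqs) - log_sf_max)
                          / (np.log10(2.0 * beta_t) / 2.0)) ** 2
    plateau = log_cs_max - delta_t
    return np.where((np.log10(freqs) < log_sf_max) & (s < plateau), plateau, s)

# Example: placeholder template coefficients, two chart scores for one subject
chart_sfs = np.array([1.5, 3.0, 6.0, 12.0, 18.0])
pred = predicted_csf(chart_sfs, x_letter_cs=1.6, x_acuity=0.0,
                     m_cs=1.0, c_cs=0.3, m_sf=-1.0, c_sf=0.6,
                     beta_t=2.0, delta_t=0.5)
```

No per-subject fitting occurs here: every quantity on the right-hand side is either a group-level template value or one of the subject's two chart measurements.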

*OSA Technical Digest Series*, 1, 98–101.

*Statistical Methods in Medical Research*, 8(2), 135–160.

*Science*, 178(4062), 769–771, doi:10.1126/science.178.4062.769.

*Optometry and Vision Science*, 78(5), 264–269.

*Optometry and Vision Science*, 83(5), 290–298.

*The Journal of Physiology*, 197(3), 551–566.

In *Electronic Imaging* (pp. 140–151). International Society for Optics and Photonics.

*Ophthalmology*, 120(10), 2160–2161.

*Vision Research*, 42(18), 2137–2152.

*Journal of Cataract and Refractive Surgery*, 19, 399–404.

*British Journal of Ophthalmology*, 69(2), 136–142.

*Optometry and Vision Science*, 82(11), 970–975.

*Archives of Ophthalmology*, 110, 953–959.

*Ophthalmic and Physiological Optics*, 12(3), 275–280.

*American Journal of Optometry and Physiological Optics*, 61(6), 403–407.

*International Ophthalmology Clinics*, 43(2), 5–15.

*American Journal of Optometry and Physiological Optics*, 59(1), 105–109.

*Vision Research*, 17(9), 1049–1055.

*Ophthalmic and Physiological Optics*, 18(1), 3–12.

*Journal of Cataract and Refractive Surgery*, 15(2), 141–148.

*Optometry and Vision Science*, 89(8), 1172–1181.

*Archives of Ophthalmology*, 106(1), 55–57.

*Optometry and Vision Science*, 91(3), 291–296.

*Vision Research*, 25(2), 239–252.

*Neurobiology of Learning and Memory*, 95(2), 145–151.

*British Journal of Ophthalmology*, 78, 795–797.

*Ophthalmology Clinics of North America*, 16(2), 171–177.

*Clinical Vision Science*, 2(3), 187–199.

*Journal of the Optical Society of America A*, 3(13), P56.

*Brain*, 105(4), 735–754.

*British Journal of Ophthalmology*, 68, 885–889.

*Brain*, 100(3), 563–579.

*Journal of the Optical Society of America A*, 10(7), 1591–1599.

*British Journal of Ophthalmology*, 68(11), 821–827.

*Vision Research*, 29(1), 79–91.

*Archives of Ophthalmology*, 103(1), 51–54.

*Clinical & Experimental Optometry*, 86(3), 152–156.

*Clinical and Experimental Optometry*, 78(2), 43–57.