The contrast sensitivity function (CSF) predicts functional vision better than acuity, but long testing times prevent its psychophysical assessment in clinical and practical applications. This study presents the quick CSF (qCSF) method, a Bayesian adaptive procedure that applies a strategy developed to estimate multiple parameters of the psychometric function (A. B. Cobo-Lewis, 1996; L. L. Kontsevich & C. W. Tyler, 1999). Before each trial, a one-step-ahead search finds the grating stimulus (defined by frequency and contrast) that maximizes the expected information gain about four CSF parameters (J. V. Kujala & T. J. Lukka, 2006; L. A. Lesmes et al., 2006). By directly estimating CSF parameters, data collected at one spatial frequency improve sensitivity estimates across all frequencies. A psychophysical study validated that CSFs obtained with 100 *qCSF* trials (∼10 min) exhibited good precision across spatial frequencies (*SD* < 2–3 dB) and excellent agreement with CSFs obtained independently (mean RMSE = 0.86 dB). To estimate the broad sensitivity metric provided by the area under the log CSF (*AULCSF*), only 25 trials were needed to achieve a coefficient of variation of 15–20%. The current study demonstrates the method's value for basic and clinical investigations. Further studies, applying the *qCSF* to measure wider ranges of normal and abnormal vision, will determine how its efficiency translates to clinical assessment.

The current study presents the quick CSF (*qCSF*) method, a computerized monitor-based test that provides the precision and flexibility of laboratory psychophysics, with a testing time comparable to clinical cards and charts. Relative to previous CS tests (Arden & Jacobson, 1978; Ginsburg, 2006; Owsley, 2003), the *qCSF* uses a much larger stimulus space that exhibits both a broad range and fine resolution for sampling grating frequency and contrast. Whereas classical adaptive methods converge to a single threshold estimate in one stimulus condition (e.g., grating spatial frequency), the *qCSF* concurrently estimates thresholds across the full spatial-frequency range. Before each trial, a one-step-ahead search evaluates the next trial's possible outcomes and finds the stimulus maximizing the expected information gain about the parameters of the particular CSF under study (Cobo-Lewis, 1996; Kontsevich & Tyler, 1999; Kujala & Lukka, 2006; Lesmes et al., 2006). In this report, demonstration and simulation of the *qCSF* method are followed by psychophysical validation.

The *quick CSF* method

The *qCSF* method greatly increases the efficiency of CSF testing by (1) imposing a functional form on the CSF; (2) defining a probability density function over a space of CSFs of that form; (3) updating this probability density (and parameter estimates) via Bayes Rule, given the results of previous trials; and (4) looking ahead to the possible outcomes of future trials, to find stimuli that further refine parameter estimates. Taken together, these features provide a flexible test that can efficiently sample grating stimuli from a broad stimulus space. Leveraging information acquired during the experiment with a priori knowledge about the CSF's general functional form greatly accelerates its estimation. By directly estimating the CSF parameters, trial outcomes from a single spatial frequency condition can better inform sensitivity estimates across all frequencies.

The CSF, *S*(*f*), represents sensitivity (1/threshold) as a function of grating frequency. Based on a review of nine parametric functions, Watson and Ahumada (2005) concluded that all provide a roughly equivalent description of the standard CSF. The *qCSF* method uses one form, the *truncated log-parabola* (see Figure 1), to describe the CSF with four parameters: (1) the peak gain (sensitivity), *γ*_{max}; (2) the peak spatial frequency, *f*_{max}; (3) the bandwidth *β*, which describes the function's full-width at half-maximum (in octaves); and (4) *δ*, the truncation level at low spatial frequencies. Without truncation, the *log-parabola*, *S*′(*f*), defines (decimal log) sensitivity as

*S*′(*f*) = log_{10}(*γ*_{max}) − *κ*[(log_{10}(*f*) − log_{10}(*f*_{max}))/(*β*′/2)]^{2},

where *κ* = log_{10}(2) and *β*′ = log_{10}(2*β*). Figure 1 represents a *log-parabola* (dotted line), which is truncated at frequencies below the peak with the parameter, *δ*:

*S*(*f*) = log_{10}(*γ*_{max}) − *δ*, if *f* < *f*_{max} and *S*′(*f*) < log_{10}(*γ*_{max}) − *δ*;
*S*(*f*) = *S*′(*f*), otherwise.
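To make the parameterization concrete, the truncated log-parabola can be sketched in a few lines of Python (NumPy); the function name and vectorized form are ours, not the authors' MATLAB implementation:

```python
import numpy as np

def log_csf(f, gamma_max, f_max, beta, delta):
    """Truncated log-parabola: decimal-log sensitivity at frequency f (cpd).

    Parameters follow the text: peak gain gamma_max, peak frequency f_max
    (cpd), bandwidth beta (octaves, FWHM), and low-frequency truncation
    delta (decimal log units).
    """
    kappa = np.log10(2.0)
    beta_prime = np.log10(2.0 * beta)
    # Untruncated log-parabola S'(f)
    s_prime = np.log10(gamma_max) - kappa * (
        (np.log10(f) - np.log10(f_max)) / (beta_prime / 2.0)) ** 2
    # Truncate at low frequencies: sensitivity plateaus delta below the peak
    plateau = np.log10(gamma_max) - delta
    truncate = (f < f_max) & (s_prime < plateau)
    return np.where(truncate, plateau, s_prime)

freqs = np.array([0.5, 1.0, 3.5, 8.0, 16.0])  # cpd
sens = log_csf(freqs, gamma_max=200.0, f_max=3.5, beta=3.0, delta=0.6)
```

Evaluating the function at *f* = *f*_{max} returns log_{10}(*γ*_{max}), and at sufficiently low frequencies it returns the plateau log_{10}(*γ*_{max}) − *δ*.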

Two alternative descriptions—the *double-exponential* and *(untruncated) log-parabola*—were adequate for fitting aggregate CSF data but systematically misfit CSF data from individuals. The asymmetric *double-exponential* misfits the symmetry typically observed near the CSF's peak, and the symmetric *log-parabola* misfits the plateau typically observed on the peak's low-frequency side (Rohaly & Owsley, 1993). With an additional parameter to describe the low-frequency plateau, the *truncated log-parabola* can capture both the symmetry near the peak and the asymmetry below it. Other four-parameter descriptions, such as the *difference of Gaussians*, provide equivalent fits to empirical CSFs, but their fitted parameters are not immediately interpretable. The interpretable parameter set provided by the *truncated log-parabola* will be especially useful for a potential normative CSF data set, which in turn can provide Bayesian priors for *qCSF* testing. The current study adopts the *truncated log-parabola* as the functional form of the CSF and develops an adaptive testing procedure to estimate its four parameters.

The *qCSF* method estimates the CSF parameters using Bayesian adaptive inference, which was first applied in the landmark development of the QUEST method (Watson & Pelli, 1983) and is now widely used in psychophysics (Alcalá-Quintana & García-Pérez, 2007; García-Pérez & Alcalá-Quintana, 2007; King-Smith, Grigsby, Vingrys, Benes, & Supowit, 1994; King-Smith & Rose, 1997; Remus & Collins, 2007, 2008; Snoeren & Puts, 1997). Whereas QUEST was designed solely to measure the psychometric threshold, subsequently developed methods estimate the threshold and steepness of the psychometric function (Cobo-Lewis, 1996; King-Smith & Rose, 1997; Kontsevich & Tyler, 1999; Remus & Collins, 2007, 2008; Snoeren & Puts, 1997; Tanner, 2008), or even more complex behavioral functions (Kujala & Lukka, 2006; Kujala, Richardson, & Lyytinen, in press; Lesmes, Jeon, Lu, & Dosher, 2006; Vul & MacLeod, 2007). We have previously applied the Bayesian adaptive framework to develop methods for estimating threshold versus external noise contrast functions (Lesmes, Jeon et al., 2006) and sensitivity thresholds and response bias(es) in detection tasks (Lesmes, Lu, Tran, Dosher, & Albright, 2006). In addition to the conceptual description of the *qCSF* method that follows in this section, a demonstration movie (1) is included in the next section, detailed pre- and post-trial analyses are described in 1, and MATLAB code (MathWorks, Natick, MA) for the demonstration is available for download (http://lobes.usc.edu/qMethods).

The *qCSF*'s application of Bayesian adaptive inference requires two basic components: (1) a probability density function, *p*(*θ*), defined over a four-dimensional space of CSF parameters, and (2) a two-dimensional space of possible grating stimuli. The method's basic goal is to accelerate CSF estimation by efficiently searching the stimulus space for grating stimuli that improve the information gained over the CSF parameter space on each trial. For the current simulations, the ranges of possible CSF parameters are: 2 to 2000 for peak gain, *γ*_{max}; 0.2 to 20 cpd for peak frequency, *f*_{max}; 1 to 9 octaves for bandwidth, *β*; and 0.02 to 2 decimal log units for truncation level, *δ*. The possible ranges for stimuli were 0.1% to 100% for grating contrast and 0.2 to 36 cpd for grating frequency. The parameter and stimulus spaces are defined on log-linear grids (Kontsevich & Tyler, 1999).

The prior, *p*_{t=1}(*θ*), represents foreknowledge of the observer's CSF parameters. For each parameter, integrating this multivariate probability density over the other three parameters gives a 1-D marginal prior density. For the current simulations, the marginal priors were relatively flat and log-symmetric around the respective parameter modes (*γ*_{max} = 100, *f*_{max} = 2.5 cpd, *β* = 2.5 octaves, and *δ* = 0.25 log units), and the joint prior was their normalized product (see 1 for more details). An advantage of applying Bayesian methods is the use of priors (Kuss, Jäkel, & Wichmann, 2005), which can usefully influence the testing strategy, based on other vision test results (Turpin, Jankovic, & McKendrick, 2007) or demographic data.

After each trial *t*, the evidence provided by the observer's response, *r*_{t}, is used to update the knowledge about CSF parameters; i.e., *p*_{t}(*θ*) is updated to *p*_{t+1}(*θ*) via Bayes Rule:

*p*_{t+1}(*θ*) = *p*_{t}(*θ*)*p*(*r*_{t}∣*θ*) / Σ_{θ′} *p*_{t}(*θ*′)*p*(*r*_{t}∣*θ*′).

The likelihood, *p*(*r*_{t}∣*θ*), representing the probability of observing *r*_{t} given the CSFs comprising the parameter space, is generated via a model psychometric function (see 1). This psychometric function, defined as a bivariate function of grating frequency and contrast, is translatable on log contrast (i.e., its steepness parameter is invariant across spatial frequencies). Following the Bayesian inference step, the updated estimates of CSF parameters are calculated as the marginal posterior means. The posterior, *p*_{t+1}(*θ*), serves as the prior for the next trial.
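As an illustration of the update rule, the following sketch applies Bayes Rule on a one-dimensional toy grid (a single log-threshold parameter at one spatial frequency; in the *qCSF* proper, *θ* is four-dimensional, but the update is identical). The Weibull psychometric form and its slope, guess, and lapse values are illustrative assumptions, not the paper's exact likelihood:

```python
import numpy as np

# Toy 1-D "parameter grid": candidate log10 contrast thresholds.
log_thresholds = np.linspace(-3.0, 0.0, 61)
prior = np.ones_like(log_thresholds)
prior /= prior.sum()                          # flat prior p_t(theta)

def p_correct(log_c, log_tau, slope=3.0, guess=0.5, lapse=0.04):
    """Illustrative Weibull psychometric function on log contrast (2AFC)."""
    p = 1.0 - np.exp(-10.0 ** (slope * (log_c - log_tau)))
    return guess + (1.0 - guess - lapse) * p

def bayes_update(prior, log_c, correct):
    """One trial of Bayes Rule: posterior ∝ prior × likelihood of response."""
    likelihood = p_correct(log_c, log_thresholds)
    if not correct:
        likelihood = 1.0 - likelihood
    posterior = prior * likelihood
    return posterior / posterior.sum()        # normalize over the grid

posterior = bayes_update(prior, log_c=-1.5, correct=True)
estimate = (log_thresholds * posterior).sum()  # marginal posterior mean
```

A correct response at a given contrast shifts posterior mass toward lower (better) thresholds; the posterior then serves as the prior for the next trial, exactly as described above.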

The *qCSF* selects the grating frequency and contrast for the next trial using a one-step-ahead search and a criterion of minimum expected entropy (Cobo-Lewis, 1996; Kontsevich & Tyler, 1999; Kujala & Lukka, 2006; Lesmes et al., 2006), or equivalently, maximum expected information gain (Kujala & Lukka, 2006). The entropy of *p*(*θ*), *H*[*p*(*θ*)] = −Σ_{θ} *p*(*θ*) log *p*(*θ*), is maximal when *p*(*θ*) is uniform over the parameter space and minimal when the observer's CSF is perfectly certain: *p*(*θ*) = 1 for one set of CSF parameters and 0 otherwise. Because information is defined as a difference between entropies (Cover & Thomas, 1991; Kujala & Lukka, 2006)—in this case, between prior and posterior entropies—a strategy that minimizes the expected entropy of *p*(*θ*) is one that maximizes the information gained about CSF parameters on a trial-to-trial basis (Kujala & Lukka, 2006). By effectively simulating the next trial for each possible stimulus, and evaluating possible stimuli for their expected effects on the posterior, the method avoids large regions of the stimulus space that are not likely to be useful to the given experiment.
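The one-step-ahead criterion can be sketched on a one-dimensional toy grid: for each candidate stimulus, simulate both possible responses, weight the entropy of each simulated posterior by its predictive probability, and pick the stimulus minimizing that expectation. The grid and psychometric form here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

log_thresholds = np.linspace(-3.0, 0.0, 61)   # candidate log10 thresholds
posterior = np.ones_like(log_thresholds) / log_thresholds.size

def p_correct(log_c, log_tau, slope=3.0, guess=0.5, lapse=0.04):
    """Illustrative Weibull psychometric function on log contrast (2AFC)."""
    p = 1.0 - np.exp(-10.0 ** (slope * (log_c - log_tau)))
    return guess + (1.0 - guess - lapse) * p

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def expected_entropy(posterior, log_c):
    """Expected posterior entropy if the next trial shows contrast log_c."""
    h = 0.0
    for correct in (True, False):
        like = p_correct(log_c, log_thresholds)
        if not correct:
            like = 1.0 - like
        p_outcome = (posterior * like).sum()   # predictive probability
        post = posterior * like / p_outcome    # simulated posterior
        h += p_outcome * entropy(post)         # weight by outcome probability
    return h

# One-step-ahead search: the stimulus with minimum expected entropy
candidate_contrasts = np.linspace(-3.0, 0.0, 31)
best = min(candidate_contrasts, key=lambda c: expected_entropy(posterior, c))
```

Because expected information gain is the prior entropy minus the expected posterior entropy, minimizing the latter is equivalent to maximizing the former.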

For the demonstration, the simulated observer's CSF was defined by peak gain *γ*_{max} = 200, peak frequency = 3.5 cpd, bandwidth (FWHM) = 3 octaves, and low-frequency truncation at 0.6 decimal log units below peak. 1 demonstrates the *qCSF* applied to estimate this model CSF in a 2AFC task. To demonstrate how the *qCSF*'s estimation of CSF parameters evolves over the course of an experiment, the demo presents the trial-to-trial updating of 1-D marginal densities for each parameter, in addition to two 2-D joint densities of (1) peak gain and peak frequency, and (2) bandwidth and truncation.

To characterize the *qCSF*'s expected accuracy and precision, the same demo was repeated for 1000 iterations. Figure 2 summarizes the results. Even as few as 25–50 trials (distributed over 12 possible spatial frequencies) provide a general assessment of the CSF's shape, although estimates obtained with so few trials are not very precise: mean variability ≈4–6 dB (where 1 dB = 0.05 decimal log units = 12.2%). With 100–300 trials of data collection, CSF estimates are unbiased and reach precision levels (2–3 dB) typical of laboratory CSF measurements.

A broad metric of contrast sensitivity is provided by the area under the log CSF (*AULCSF*; Applegate et al., 2000, 1998; Oshika, Klyce, Applegate, & Howland, 1999; Oshika, Okamoto, Samejima, Tokunaga, & Miyata, 2006; van Gaalen et al., 2009), which Campbell (1983) described as "our visual world." Figure 2c presents the bias of *AULCSF* estimates (in percent), calculated as (true *AULCSF* − estimated *AULCSF*)/true *AULCSF*, as a function of trial number. These results demonstrate that the mean bias of *AULCSF* estimates decreases below 5% after 25 trials, and that *AULCSF* variability (evaluated via the coefficient of variation) decreases from 15% to 10% between 25 and 50 trials of data collection. With more trials, the mean and variability of the bias both decrease. Thus, although 25 *qCSF* trials provide only imprecise estimates of the full CSF, reasonably accurate and precise *AULCSF* estimates can be obtained with so few trials: bias <5% and coefficient of variation <15%.
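A minimal sketch of the *AULCSF* computation, assuming trapezoidal integration of decimal-log sensitivity over log spatial frequency (the sample grid, the clipping at zero, and the integration bounds are illustrative choices, not the authors' exact convention):

```python
import numpy as np

def aulcsf(freqs, log_sens):
    """Area under the log CSF: trapezoidal integral of decimal-log
    sensitivity (clipped at zero) over log10 spatial frequency."""
    log_f = np.log10(np.asarray(freqs, dtype=float))
    s = np.clip(np.asarray(log_sens, dtype=float), 0.0, None)
    # Manual trapezoid rule: mean of adjacent heights times interval width
    return float(np.sum((s[1:] + s[:-1]) / 2.0 * np.diff(log_f)))

freqs = np.array([0.6, 1.2, 2.4, 4.8, 9.6, 19.2])    # cpd, octave spacing
log_sens = np.array([1.8, 2.1, 2.2, 1.9, 1.2, 0.3])  # hypothetical log CSF
area = aulcsf(freqs, log_sens)
```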

The *qCSF*'s pattern of stimulus sampling is summarized by the two-dimensional stimulus histograms presented in Figure 3. Each panel presents the probability (density) of stimulus presentation, as a function of grating frequency and contrast, for four endpoints of data collection: 25, 50, 100, and 300 trials. These results demonstrate how the *qCSF* effectively samples the large grating stimulus space and adjusts stimulus presentation to match the observer's underlying contrast sensitivity function. Over the first 25 trials, stimulus presentation is relatively diffuse, especially over high spatial frequencies, where the procedure prospectively samples low contrasts. As the experiment progresses, and uncertainty about the observer's sensitivity decreases, stimulus presentation focuses directly on the observer's underlying contrast sensitivity function.

These simulation results establish the *qCSF* as a promising method for rapidly estimating the contrast sensitivity function. Given a data collection rate of 10–15 trials/min, reasonably precise CSF estimates can be obtained in 10–20 min. This testing time is significantly less than the 30–60 min required by conventional laboratory CSF measurements. Moreover, far fewer trials are needed to estimate the *AULCSF* with the *qCSF*. Due to the *qCSF*'s high sampling resolution of spatial frequency, its *AULCSF* estimates will be more precise and flexible than previous measurements taken with charts (Hohberger et al., 2007).

A psychophysical experiment validated the *qCSF* method. We evaluated precision through test–retest comparisons and accuracy through independent CSF estimates obtained with the *ψ* method developed by Kontsevich and Tyler (1999).

The experimental programs were written in MATLAB with *PsychToolbox* extensions (Brainard, 1997; Pelli, 1997). The stimuli were displayed on a Dell 17-inch color CRT monitor with an 85-Hz refresh rate. A special circuit changed the display to a monochromatic mode, with high grayscale resolution (>14 bits); luminance levels were linearized via a lookup table (Li, Lu, Xu, Jin, & Zhou, 2003). Stimuli were viewed binocularly with natural pupil at a viewing distance of approximately 175 cm in dim light.

The signal stimuli were Gabor patterns, oriented *θ* = ±45 degrees from vertical. The signal stimuli were rendered on a 400 × 400 pixel grid, extending 5.6 × 5.6 deg of visual angle. The luminance profile of the Gabor stimulus is described by

*L*(*x*, *y*) = *L*_{0}{1 + *c* sin[2π*f*(*x* cos *θ* + *y* sin *θ*)] exp[−(*x*^{2} + *y*^{2})/(2*σ*^{2})]},

where *c* is the signal contrast, *σ* = 1.87 deg is the standard deviation of the Gaussian window (which was constant across spatial frequencies), and the background luminance *L*_{0} was set in the middle of the dynamic range of the display (*L*_{min} = 3.1 cd/m^{2}; *L*_{max} = 120 cd/m^{2}). For *qCSF* trials, the 11 possible grating spatial frequencies were spaced log linearly from 0.6 to 20 cpd; the 46 possible grating contrasts were spaced log linearly from 0.15% to 99%. The stimulus sequence started with the presentation of a fixation cross in the center of the screen for 500 ms, after which the grating stimulus was presented for 130 ms. The target was preceded by one of three possible auditory cues—the digitally recorded words "small," "medium," or "large"—which conveyed the "stripe size" (spatial frequency) of the imminent grating stimulus. The cue was used to reduce stimulus uncertainty, which could affect CSF measurement, especially in the high-frequency region (Woods, 1996).
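The Gabor luminance profile above can be sketched numerically as follows; the defaults mirror the Methods values (5.6 deg, 400 pixels, *σ* = 1.87 deg, *L*_{0} = 61.55 cd/m²), but the function is our illustration rather than the experiment's display code:

```python
import numpy as np

def gabor_luminance(size_deg=5.6, n_pix=400, c=0.5, f=2.0, theta_deg=45.0,
                    sigma=1.87, L0=61.55):
    """Render L(x, y) = L0 * (1 + c * sin(2*pi*f*(x*cos(theta) +
    y*sin(theta))) * exp(-(x**2 + y**2) / (2 * sigma**2))) on a square
    grid of n_pix x n_pix pixels spanning size_deg degrees."""
    theta = np.deg2rad(theta_deg)
    half = size_deg / 2.0
    x, y = np.meshgrid(np.linspace(-half, half, n_pix),
                       np.linspace(-half, half, n_pix))
    carrier = np.sin(2 * np.pi * f * (x * np.cos(theta) + y * np.sin(theta)))
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return L0 * (1 + c * carrier * envelope)

patch = gabor_luminance()  # luminance image in cd/m^2
```

By construction, the rendered luminance stays within *L*_{0}(1 ± *c*), so contrasts up to the display's maximum never clip against *L*_{min} or *L*_{max}.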

In each session, two *qCSF* runs, which each lasted 100 trials, were applied in succession. Interleaved with *qCSF* runs were trials implementing another adaptive procedure (the "*ψ* method"; Kontsevich & Tyler, 1999), applied to independently measure individual contrast thresholds in 6 spatial frequency conditions. There were 30 trials in each spatial frequency condition. To summarize, for each observer, each of four test sessions consisted of 2 × 100 = 200 *qCSF* trials and 6 × 30 = 180 *ψ* trials. Over the course of the experiment, this corresponded to collecting eight total *qCSF* measures and four *ψ*–CSF measures for each observer. The priors used for *qCSF* parameters are presented in 1.

Figure 4 presents CSFs estimated with the *qCSF* (blue lines) and *ψ* methods (red lines). Each row presents CSF data from a different observer, and each column presents *qCSF* estimates obtained with different numbers of trials: 25, 50, and 100. The error region (shaded gray) represents the *qCSF* variability (mean ± 1 *SD*) for estimating individual thresholds. For comparison, for each observer, the same *ψ*–CSF estimate, obtained with 180 × 4 = 720 trials, is presented across all columns; error bars represent variability (±1 *SD*). Initial examination of CSFs obtained with both methods suggests significant overlap. To quantify the concordance of CSF estimates, we calculated the root mean squared error (RMSE) of the mean thresholds obtained with the two methods, collapsed across all three observers (*m* = 3) and spatial frequency conditions (*n* = 6) common to both methods:

RMSE = √{[1/(*mn*)] Σ_{i=1}^{m} Σ_{j=1}^{n} [*S*_{qCSF}(*i*, *j*) − *S*_{ψ}(*i*, *j*)]^{2}},

where *S*(*i*, *j*) denotes the log sensitivity estimated for observer *i* at spatial frequency *j*. The RMSE values comparing the *ψ* method and the *qCSF* using 25, 50, and 100 trials were 0.95, 1.09, and 0.86 dB, respectively.
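The RMSE computation can be sketched as below; the sensitivity values are hypothetical placeholders for illustration, and the dB conversion assumes 1 dB = 0.05 decimal log units, as stated earlier:

```python
import numpy as np

def rmse_db(log_sens_a, log_sens_b):
    """RMSE between two sets of log10 sensitivities (m observers x n
    frequencies), expressed in dB (1 dB = 0.05 decimal log units)."""
    a = np.asarray(log_sens_a, dtype=float)
    b = np.asarray(log_sens_b, dtype=float)
    rmse_log = np.sqrt(np.mean((a - b) ** 2))  # averages over m*n cells
    return rmse_log / 0.05

# Hypothetical log sensitivities for m = 3 observers at n = 6 frequencies
qcsf = np.array([[2.0, 2.2, 2.1, 1.8, 1.2, 0.5],
                 [1.9, 2.1, 2.0, 1.7, 1.1, 0.4],
                 [2.1, 2.3, 2.2, 1.9, 1.3, 0.6]])
psi = qcsf + 0.04          # constant 0.04 log-unit offset -> 0.8 dB RMSE
```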

Further evidence of the *qCSF*'s convergence is provided by the decreasing variability of threshold estimates as a function of trial number. For each observer, the variability of sensitivity estimates at each spatial frequency was calculated from the standard deviation of the eight CSF estimates obtained with 25, 50, and 100 *qCSF* trials. For the three data cutoff points, the mean of these variability estimates—averaged across the three observers and 11 frequency conditions—was 6.44 dB (*SD* = 1.5), 3.99 dB (*SD* = 1.07), and 2.7 dB (*SD* = 0.63). In Figure 4, this pattern is evident in the decreasing area of the CSF error regions (gray) with increasing trial number. The corresponding variability exhibited by the *ψ* method was 2.25 dB (*SD* = 1.04). Therefore, CSF estimates obtained with the *ψ* method were more precise than the best *qCSF* estimates but also required more data collection for each CSF: 180 vs. 100 trials. For a fairer comparison, for CSFs obtained with the *ψ* method with a comparable number of trials (96 trials per CSF: 16 trials at each of the 6 spatial frequency conditions), threshold variability was 4.5 dB.

The test–retest reliability of the *qCSF* is assessed through analysis of the two *qCSF* runs completed in each session. Figure 5a plots sensitivities estimated from the second *qCSF* run against those from the first. The average test–retest correlations for the two CSFs estimated in each session, with 25, 50, and 100 *qCSF* trials, were 81.7% (*SD* = 21%), 88% (*SD* = 11%), and 96% (*SD* = 4%). Though test–retest correlations are widely reported as measures of test–retest reliability, they are not the most useful way to characterize method reliability or agreement (Bland & Altman, 1986). Figure 5b presents a Bland–Altman plot of the difference of same-session *qCSF* estimates against their mean. The mean and standard deviation of test–retest differences were 0.0075 and 0.175 (3.5 dB). These results signify that (1) sensitivity measures do not change systematically over the course of single testing sessions and (2) the precision of test–retest differences within sessions agrees with that estimated across single tests: compare 3.5 dB with

To characterize the variability of *AULCSF* estimates obtained with the *qCSF*, Figure 5c presents the coefficient of variation of *AULCSF* estimates as a function of trial number, for each subject. The consistent pattern, exhibited by each subject, is a decrease in variability as the trial number increases: from approximately 15% after 25 trials to 6% after 100 trials. These measures exhibit excellent agreement with those predicted by simulations.
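The Bland–Altman summary used above can be sketched as follows; the paired log sensitivities are hypothetical, and the 1.96 × *SD* limits of agreement follow the standard Bland–Altman convention rather than any value reported here:

```python
import numpy as np

def bland_altman(test, retest):
    """Bland-Altman summary for paired measurements: per-pair means and
    differences, plus the mean difference (bias) and 95% limits of
    agreement (bias +/- 1.96 * SD of the differences)."""
    test = np.asarray(test, dtype=float)
    retest = np.asarray(retest, dtype=float)
    diff = retest - test
    mean = (test + retest) / 2.0
    bias = diff.mean()
    sd = diff.std(ddof=1)
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)
    return mean, diff, bias, limits

# Hypothetical same-session log-sensitivity pairs (run 1 vs. run 2)
run1 = np.array([1.95, 2.10, 1.60, 1.20, 0.70])
run2 = np.array([2.00, 2.05, 1.65, 1.30, 0.65])
mean, diff, bias, limits = bland_altman(run1, run2)
```

Plotting `diff` against `mean`, with horizontal lines at `bias` and `limits`, reproduces the format of Figure 5b.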

The psychophysical study compared CSF estimates obtained with 25, 50, and 100 *qCSF* trials with those obtained with an independent adaptive method (Kontsevich & Tyler, 1999). CSF estimates obtained with the *qCSF* exhibited (1) excellent agreement with the *ψ* method and (2) increased precision with increasing test duration. As suggested by simulations, only 25 trials (≈1–2 min) were sufficient to estimate a broad CSF metric, but estimation of individual sensitivities at higher precision (<3 dB) required more trials (100 trials ≈ 10 min). Over the relatively short test duration (100 trials), the *qCSF* was more precise than the *ψ* method (2.79 vs. 4.5 dB). It should be noted that the *qCSF*'s precision advantage depends on the validity of the CSF functional form assumed by the method. In instances in which the *truncated log-parabola* misfits the observer's CSF (e.g., in cases with local notches), measurements with the *ψ* method (which is CSF-model-free) would fare better in the comparison. However, under realistic applications, with wider CSF variability, the coarse 6-point sampling scheme used by the current *ψ* method application would be much more vulnerable to inefficiency than the flexible adaptive sampling used by the *qCSF*.

The *qCSF* applies a Bayesian adaptive strategy that uses a priori knowledge about the CSF's general functional form to accelerate the information gained about the psychophysical observer. Results from simulations and psychophysics demonstrate that 100 trials are sufficient for reasonably accurate and precise estimates of sensitivity across the full spatial frequency range. As few as 25 trials are needed to estimate the broad metric provided by the area under the contrast sensitivity function. Taken together, these results suggest that the *qCSF* method can meet the different needs for measuring contrast sensitivity in basic and clinical vision applications.

The *qCSF* will be potentially valuable for investigating comprehensive models of spatiotemporal vision (Tyler et al., 2002), which require measuring and accounting for contrast sensitivity as a function of retinal illuminance (Koenderink, Bouman, Buenodemesquita, & Slappendel, 1978c), eccentricity (Koenderink, Bouman, Buenodemesquita, & Slappendel, 1978a, 1978b), temporal frequency (Kelly, 1979; van Nes, Koenderink, Nas, & Bouman, 1967), external noise (Huang et al., 2007; Nordmann, Freeman, & Casanova, 1992), or visual pathology (Regan, 1991b; Stamper, 1984). The ability to rapidly estimate a single CSF will certainly benefit investigations that must measure many CSFs in the same observer, under different conditions.

^{2}) vision and 1.5, 3, 6, and 12 cpd for mesopic (3 cd/m^{2}) vision. However, a recent study of contrast sensitivity outcomes of refractive and cataract surgery (Pesudovs et al., 2004) illustrates the shortcomings of two current grating charts (FACT and Vistech). The limited grating contrast range of these charts makes them vulnerable to ceiling and floor effects: applied following refractive surgery, 33% and 50% of subjects demonstrated maximum sensitivity at the two lowest spatial frequencies; conversely, up to 60% of patients screened for cataracts with the same chart exhibited minimal sensitivity. Other recent studies comparing multiple contrast sensitivity tests (Buhren et al., 2006; van Gaalen et al., 2009) likewise conclude that none adequately meets the emerging needs of contrast sensitivity testing. Compared with the 45 grating stimuli (5 frequencies × 9 contrasts) used by the FACT charts, the *qCSF* can sample (at a minimum) a set of 60 contrasts × 12 spatial frequencies = 720 grating stimuli, with grating contrast sampled over a 60-dB range (with 1-dB resolution) and grating frequency sampled over a 10–20-dB range (with 3-dB resolution). With such a broad range and fine resolution for sampling grating stimuli, the *qCSF* needs no experimenter input to measure a wide variety of CSF phenomena. We believe that the *qCSF* is both flexible enough to capture large-scale changes of contrast sensitivity across testing conditions and precise enough to capture small-scale changes common to the progression or remediation of visual pathology.

Despite these advantages, the current *qCSF* has several shortcomings. (1) *It uses a forced-choice task with a high guessing rate*. The possibility of improving the test's efficiency by using a *Yes–No* task is tempered by the introduction of unconstrained response criteria (Klein, 2001). One potential approach to address the response bias confound is to rapidly estimate the response bias in *YN* tasks directly (Lesmes, Lu et al., 2006) or to add a rated response to the forced-choice task (Kaernbach, 2001; Klein, 2001). (2) *The spatial CSF is a limited characterization of spatiotemporal vision*. Because CSF shape depends on factors that include temporal frequency (Kelly, 1979; van Nes et al., 1967), spatial and temporal envelopes (Peli, Arend, Young, & Goldstein, 1993), and retinal illuminance (Koenderink et al., 1978c), clarifying the best *qCSF* clinical testing conditions requires measuring the spatiotemporal contrast sensitivity surface (Kelly, 1979), which describes contrast thresholds as a function of spatial and temporal frequencies. The practical difficulty of measuring contrast sensitivity across this 2-D surface typically restricts investigation to only one of its cross-sections: (a) a spatial CSF at constant temporal frequency (Campbell & Robson, 1968), (b) a temporal CSF at constant spatial frequency (de Lange, 1958), or (c) a constant-speed CSF at co-varying spatial and temporal frequencies (Kelly, 1979). To improve measurements of spatiotemporal vision, we have developed the *quick Surface* (or *qSurface*) method (Lesmes, Gepshtein, Lu, & Albright, 2009), which leverages multiple *qCSF* applications to estimate different cross-sections of the spatiotemporal contrast sensitivity surface in parallel. Whereas the *qCSF* method evaluates stimuli for their contributions to a single cross-section through the spatiotemporal sensitivity surface, the *qSurface* method evaluates grating stimuli (defined by contrast and spatial and temporal frequencies) for the information they provide about concurrent estimates of horizontal, vertical, and diagonal cross-sections through the surface. This innovation greatly reduces the testing time for estimating the spatiotemporal sensitivity surface and even allows for the measurement of multiple surfaces in an experimental session (≈1 h). As a result, the *qSurface* method should be a valuable tool for finding the most useful spatiotemporal condition(s) for clinical CSF testing and for studying spatiotemporal vision in general. (3) *The functional form used by the qCSF cannot accommodate notches or other local deficits*. To address this shortcoming, we are currently developing adaptive CSF procedures with fewer model-based assumptions. These tests will be more flexible in detecting aberrant CSF features with potential clinical importance (Tahir, Parry, Pallikaris, & Murray, 2009; Woods, Bradley, & Atchison, 1996). Forthcoming work that addresses the above-mentioned shortcomings, in combination with inevitable increases in computing power, should ultimately improve the efficiency of the next generation of *qCSF* methods.

*truncated log-parabola*. Alternative approaches are also likely to be successful; these include test strategies that apply different trial-to-trial cost functions, e.g., minimizing expected variance (Vul & MacLeod, 2007) or maximizing Fisher information (Remus & Collins, 2007, 2008), or which estimate different CSF descriptions, such as the pooled response of several frequency-specific band-pass mechanisms (Simpson & McFadden, 2005; Wilson & Gelb, 1984). One reviewer suggested defining the CSF by the intersection of two arcs from a radial center at 1 cpd and 30% amplitude. This novel CSF characterization, which uses only two parameters, provides an interesting prospect for future investigation, but the current paper does not have the space to fully explore its development and implementation. One shortcoming of Bayesian testing strategies is their dependence on the prior; because we do not know the ground truth for the psychophysical parameters of interest, the optimization that drives the stimulus search depends on empirical estimates gained from previous trials. Therefore, local minima pose a risk for these procedures. One potential strategy for escaping local minima is to perturb the stimulus search; this allows the stimulus selection algorithm both to explore and to exploit diverse regions of the stimulus space (Alpaydin, 2004). A specific shortcoming of the one-step-ahead search is that the real experimental goal is to maximize the information gained over the course of the whole experiment. The current methods approximate this goal by finding the most informative stimulus for the next trial, but the two objectives (and their corresponding experimental trajectories) are not the same. It will be especially interesting to track the development of statistical and mathematical tools that increase the search horizon: more than one trial ahead, and perhaps even over the whole experiment (Lewi, 2009; Lewi, Butera, & Paninski, 2007, 2009). Therefore, we note that the current *qCSF* procedure almost certainly does not implement the optimal method for estimating the contrast sensitivity function; however, it does provide an unprecedented lower bound for the set of optimal procedures. We are optimistic that the *qCSF* will become even more efficient with the continuing development of sequential testing algorithms.

The *qCSF* offers the "best of both worlds" for laboratory and clinical measures of contrast sensitivity. Over psychophysical testing times of 10–20 min, short by historical standards, the *qCSF* method can precisely measure the entire CSF over a wide range of spatial frequencies. For testing times that are short even for cards and charts (<1–2 min), the *qCSF* is useful for estimating the area under the log CSF (*AULCSF*) with good precision (c.v. = 15%). Figure 6 presents the results of a preliminary clinical application that characterizes the CSF deficit of an amblyopic observer using the *qCSF*. The method distinguishes normal and abnormal CSFs with as few as 25 trials, with a stimulus placement strategy that minimizes the observer's frustration (overall performance was 84% correct over 75 trials). A more systematic investigation has successfully validated the *qCSF*'s identification of contrast sensitivity deficits in adults with amblyopia (Hou, Huang, Lesmes, Lu, & Zhou, unpublished data). Further investigations comparing the *qCSF* method with other CS tests will be important for validating the test's potential advantage in the clinical setting. These studies, which will examine wider populations of normal and abnormal CSFs, will ultimately determine whether the efficiency gains provided by the *qCSF* translate to improved clinical assessment of the CSF.

The *qCSF* method is part of a new generation of adaptive methods, which exploit advances in personal computer power to increase the complexity of classical Bayesian adaptive testing strategies. These methods estimate increasingly elaborate psychophysical models, which include multi-dimensional models describing the psychometric function (Remus & Collins, 2007, 2008; Tanner, 2008), equi-detectable elliptical contours in color space (Kujala & Lukka, 2006), the features of external noise functions (Lesmes, Jeon et al., 2006), the spatiotemporal contrast sensitivity surface (Lesmes et al., 2009), neural input–output relationships (Lewi et al., 2007, 2009; Paninski, 2005), and the discrimination of memory retention models (Cavagnaro, Myung, Pitt, & Kujala, in press; Myung & Pitt, 2009). Taken together, these methods represent a powerful and versatile approach for studying phenomena previously restricted to data-intensive applications. Their computationally principled approach to data collection strategies will make them valuable in many future applications.

The details of the *qCSF* application, which include initialization and pre- and post-trial analyses, are described below and are available for download as a MATLAB code implementation (http://lobes.usc.edu/qMethods). To complement 1, MATLAB code that generates a new demo with CSF parameters provided by the user is also available.

To apply the *qCSF*, first define a discrete gridded parameter space, *T*_{θ}, comprised of four-dimensional vectors *θ* = (*f*_{max}, *γ*_{max}, *β*, *δ*), which represent potential CSFs. Before the experiment starts, a prior probability density, *p*(*θ*), which reflects baseline knowledge about the observer's CSF, is defined over the space, *T*_{θ}. The prior can be informed by knowledge about how CSF shape varies as a function of task or test population. For example, the gain and frequency of the CSF's peak can vary greatly across species, but there is much less variability in its bandwidth (Ghim & Hodos, 2006; Uhlrich et al., 1981). Figure A1 presents, for each of the four CSF parameters, the priors used in the current psychophysical validation. The priors were defined by hyperbolic secant (sech) functions (King-Smith & Rose, 1997). For each CSF parameter, *θ*_{i}, for *i* = 1, 2, 3, 4, the mode of the marginal prior, *p*(*θ*_{i}), was defined by the best guess for that parameter, *θ*_{guess}, and the width was defined by the confidence in that guess, *θ*_{confidence}:

*p*(*θ*_{i}) ∝ sech(*θ*_{confidence} × (*θ*_{i} − *θ*_{guess})).

The priors were centered on *θ*_{guess}, whose values for the respective parameters were: *γ*_{max} = 100, *f*_{max} = 2.5 cpd, *β* = 3 octaves, and *δ* = 0.5 log units. For each parameter, setting *θ*_{confidence} = 1 resulted in priors that were almost, but not completely, flat (see Figure A1). The joint prior was defined as the normalized product of the marginal priors. To show that the information contained in the priors did not over-influence the procedure, the posterior obtained from a complete run of 100 *qCSF* trials (observer JB) is also presented.
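To make the prior construction concrete, the sketch below builds a sech-shaped marginal prior on a discrete parameter grid and normalizes it. This is a minimal Python illustration, not the distributed MATLAB implementation; the grid values, and the choice to measure distance from the guess in log units, are assumptions for the example.

```python
import math

def sech(x):
    # hyperbolic secant: 1 / cosh(x)
    return 1.0 / math.cosh(x)

def marginal_prior(grid, guess, confidence):
    # sech-shaped prior (King-Smith & Rose, 1997) peaked at the best
    # guess; its width is set by the confidence in that guess. Distance
    # from the guess is taken in log units here (an assumption).
    weights = [sech(confidence * (math.log10(v) - math.log10(guess)))
               for v in grid]
    total = sum(weights)
    return [w / total for w in weights]

# Illustrative grid for peak gain (the true qCSF grid is not specified here)
gain_grid = [2 ** (k / 2) for k in range(2, 21)]   # roughly 2 to 1024
prior = marginal_prior(gain_grid, guess=100.0, confidence=1.0)
```

The joint prior would then be the normalized outer product of the four marginal priors, as described above.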

To select the grating stimulus *s*, defined by both spatial frequency and contrast, presented on each trial *t*, the *qCSF* method applies a strategy that minimizes the expected entropy of the Bayesian posterior defined over psychometric parameters (e.g., Kontsevich and Tyler's *ψ* method). The experimental data reported in this paper were collected with the pre-trial calculations prescribed by Kontsevich and Tyler, but the provided implementation combines elements of the *ψ* method and an equivalent reformulation (Kujala & Lukka, 2006). Kujala and Lukka (2006) reformulated the calculation of minimum expected entropy by focusing on the equivalent task of maximizing the expected information gain (entropy change) between prior and posterior. Using a cost function based on the expectation of entropy change provides a great advantage: Monte Carlo sampling of the prior can be used to approximate expected information gain, by calculating the expected information gain over Monte Carlo samples. This approximation affects the precision of parameter estimates only minimally. The one-step-ahead search implemented by the *qCSF* is a greedy algorithm: the ultimate goal, a maximally informative experiment, is simplified as a search for the maximally informative stimulus on the next trial. This type of greedy algorithm is vulnerable to local minima, which can manifest in over-representation of a small subset of stimuli; to avoid this phenomenon, the *qCSF* does not strictly choose the stimulus that maximizes expected information gain but instead chooses uniformly over the top decile of stimuli. For sampling the prior, Kujala and Lukka used Markov chain Monte Carlo sampling with particle filtering (Doucet & de Freitas, 2001; Doucet, de Freitas, & Gordon, 2001), which greatly reduces computing load by forgoing explicit maintenance of the prior. In our application, we maintain the discrete grid-defined prior; future implementations of the *qCSF* may maintain this approach or change to other schemes for fitting/approximating the Bayesian posteriors (Lewi et al., 2009). Before each trial, the grid-defined prior is sampled via Monte Carlo inverse sampling using the MATLAB function "discretesample.m", written by Dahua Lin and available for download from the *MATLAB Central* file exchange (http://www.mathworks.com/matlabcentral/fileexchange/21912). Even on old hardware (e.g., a Titanium PowerBook G4 laptop), the pre-trial calculation takes less than 500 ms; on a Windows PC that is several years old, the computing time is reduced to less than 10 ms. The number of samples can be arbitrarily high, though simulations suggest that as few as 50–100 samples are sufficient for method convergence. As prescribed by Kujala and Lukka (2006), for each possible stimulus, the calculation of expected information gain over the *n* Monte Carlo samples is

*I*_{t}(*s*) = *h*((1/*n*) Σ_{j=1..n} *Ψ*_{θ′j}(*s*)) − (1/*n*) Σ_{j=1..n} *h*(*Ψ*_{θ′j}(*s*)),

where *h*(*p*) = −*p* log(*p*) − (1 − *p*)log(1 − *p*) defines the entropy of a distribution of complementary probabilities: *p* and 1 − *p*. The above calculation requires calculating *Ψ*_{θ}(*s*) over the Monte Carlo samples for each possible grating stimulus. Given a single sampled vector of CSF parameters, *θ*′_{j}, that defines a CSF, *S*_{θ′j}(*f*), the probability of a correct response for a grating of frequency, *f*, and contrast, *c*, is given by the log-Weibull psychometric function:

*Ψ*_{θ′j}(*c*, *f*) = 0.5 + (0.5 − *ɛ*)(1 − exp(−10^{β(log(*c*) − log(τ(*f*)))})),

where the threshold contrast τ(*f*) = 1/*S*_{θ′j}(*f*). It is assumed that the slope of the psychometric function, *β* = 2, does not change as a function of spatial frequency and that the observer makes stimulus-independent errors or lapses (Swanson & Birch, 1992; Wichmann & Hill, 2001) on a small proportion of trials, *ɛ* = 4%. Using a shallow assumed slope minimizes biases introduced by parameter mismatch. The assumed lapse rate will likely need to be increased for applications with naive psychophysical observers.
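The pre-trial computation described above can be sketched as follows. This Python illustration is not the distributed MATLAB implementation: the log-parabola CSF here omits the low-frequency truncation parameter *δ*, and the function names, stimulus grid, and parameter values are assumptions for the example. It combines Monte Carlo inverse sampling of the discrete prior (the role played by discretesample.m), the expected-information-gain calculation, and the top-decile stimulus choice.

```python
import bisect
import math
import random

def entropy(p):
    # binary entropy h(p) = -p*log(p) - (1-p)*log(1-p)
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1.0 - p) * math.log(1.0 - p)

def log_sensitivity(f, gain, f_max, bw):
    # log-parabola CSF; the paper's low-frequency truncation (delta)
    # is omitted here for brevity (an assumption of this sketch)
    return math.log10(gain) - math.log10(2.0) * (
        (math.log10(f) - math.log10(f_max)) / (bw / 2.0)) ** 2

def prob_correct(c, f, theta, slope=2.0, lapse=0.04):
    # log-Weibull psychometric function for a 2AFC task with lapses;
    # the threshold contrast is the reciprocal of sensitivity
    log_threshold = -log_sensitivity(f, *theta)
    x = slope * (math.log10(c) - log_threshold)
    return 0.5 + (0.5 - lapse) * (1.0 - math.exp(-(10.0 ** x)))

def discrete_sample(probs, n, rng):
    # inverse-transform sampling of n indices from a discrete prior
    cdf, acc = [], 0.0
    for p in probs:
        acc += p
        cdf.append(acc)
    return [bisect.bisect_left(cdf, rng.random() * acc) for _ in range(n)]

def expected_info_gain(stim, samples):
    # h(mean response rate) - mean h(response rate), over the MC samples
    c, f = stim
    ps = [prob_correct(c, f, th) for th in samples]
    p_bar = sum(ps) / len(ps)
    return entropy(p_bar) - sum(entropy(p) for p in ps) / len(ps)

def next_stimulus(stimuli, samples, rng):
    # choose uniformly among the top decile rather than the strict maximum
    ranked = sorted(stimuli, key=lambda s: expected_info_gain(s, samples),
                    reverse=True)
    return rng.choice(ranked[:max(1, len(ranked) // 10)])

# Illustrative usage: sample the prior, then pick the next stimulus
rng = random.Random(1)
thetas = [(100.0, 2.5, 3.0), (50.0, 2.0, 3.0), (200.0, 4.0, 2.5)]
prior = [0.5, 0.3, 0.2]
samples = [thetas[i] for i in discrete_sample(prior, 100, rng)]
stimuli = [(c, f) for c in (0.005, 0.05, 0.5) for f in (1.0, 4.0, 16.0)]
stim = next_stimulus(stimuli, samples, rng)
```

Because binary entropy is concave, the expected information gain is guaranteed non-negative; stimuli for which the sampled CSFs disagree most about the response probability score highest.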

Following each trial, the *qCSF* method applies Bayes' rule to iteratively update *p*(*θ*), given the response to that trial's grating stimulus. For the Bayesian update that follows each trial's outcome, we use the explicit gridded prior, rather than its samples. This calculation is computationally intensive, but its impact is mitigated by (1) only calculating the update for the actual stimulus and response on each trial, not for all potential stimuli; and (2) the increases in computing power expected with each generation of personal computers. As prescribed by Kontsevich and Tyler (1999), the probability of the observed response, *r*_{t} (either correct or incorrect), to the stimulus *s* is calculated by using the prior, *p*_{t}(*θ*), to weigh the response rates defined by CSF vectors, *θ*, across the parameter space, *T*_{θ}:

*p*_{t}(*r*_{t}|*s*) = Σ_{θ∈Tθ} *p*(*r*_{t}|*s*, *θ*) *p*_{t}(*θ*).

This updates *p*_{t}(*θ*) to the posterior *p*_{t+1}(*θ*) via Bayes' rule:

*p*_{t+1}(*θ*) = *p*(*r*_{t}|*s*, *θ*) *p*_{t}(*θ*) / *p*_{t}(*r*_{t}|*s*).

The final *qCSF* estimate of the CSF is defined by the mean parameters of the posterior. Following trial *t*, the updated posterior is used as the prior for trial *t* + 1. For a stopping criterion, the current *qCSF* application uses a fixed trial number. For future versions, other stopping criteria can be implemented (Alcalá-Quintana & García-Pérez, 2005).
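The post-trial update above amounts to one multiplication and one normalization over the gridded prior. The sketch below illustrates it in Python under the same assumptions as before (a log-parabola CSF without the truncation parameter *δ*, and a hypothetical two-point parameter grid); it is not the distributed MATLAB implementation.

```python
import math

def prob_correct(c, f, theta, slope=2.0, lapse=0.04):
    # log-Weibull response probability (same assumed form as in the text)
    gain, f_max, bw = theta
    log_s = math.log10(gain) - math.log10(2.0) * (
        (math.log10(f) - math.log10(f_max)) / (bw / 2.0)) ** 2
    x = slope * (math.log10(c) + log_s)
    return 0.5 + (0.5 - lapse) * (1.0 - math.exp(-(10.0 ** x)))

def bayes_update(prior, thetas, stim, correct):
    # p_{t+1}(theta) ∝ p(r_t | s, theta) * p_t(theta), computed only for
    # the stimulus actually shown and the response actually observed
    c, f = stim
    posterior = []
    for p, theta in zip(prior, thetas):
        like = prob_correct(c, f, theta)
        posterior.append(p * (like if correct else 1.0 - like))
    z = sum(posterior)        # p_t(r_t | s): the normalizing constant
    return [q / z for q in posterior]

# Illustrative usage with a hypothetical grid of two candidate CSFs:
# a correct response at 2% contrast favors the more sensitive CSF
thetas = [(100.0, 2.5, 3.0), (10.0, 2.5, 3.0)]
post = bayes_update([0.5, 0.5], thetas, stim=(0.02, 2.5), correct=True)
```

Repeating this step after every trial, with the posterior carried forward as the next trial's prior, implements the sequential update described above.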

Simulations evaluated *qCSF* measurement of the widely different CSFs observed in different illumination conditions (Campbell & Robson, 1968). Figure B1a demonstrates that, for both simulated observers, the initial prior CSF poorly matches the observer's CSFs. Figure B1b presents the simulation results: the mean and standard deviation of AULCSF estimates as a function of trial number (main plot) and the mean and standard deviation of CSF estimates obtained with only 25 trials (inset). AULCSF estimates provided by the *qCSF* largely converge to their true values for both observers by the 25th trial; the mean bias magnitude, less than 5% for both observers, continues to decrease with more trials. Furthermore, at such short testing times, the coefficient of variation for AULCSF estimates (related to the area of the respective shaded regions) is less than 20% for both CSFs: 10 and 15% for the bright and dark conditions, respectively. The inset demonstrates that the CSF estimates obtained with only 25 trials are readily distinguishable from the initial prior CSF and from each other. However, applications attempting to precisely measure these CSFs are recommended to use more trials (>100).

*ψ* method, and Barbara Dosher, William H. Swanson, Don MacLeod, Sergei Gepshtein, Adam Reeves, and Tatyana Sharpee for valuable discussions.

*Psychological Methods*, 9, 250–271. [PubMed] [CrossRef] [PubMed]

*Spatial Vision*, 18, 347–374. [PubMed] [CrossRef] [PubMed]

*Spatial Vision*, 20, 197–218. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 45, 2095–2107. [PubMed] [CrossRef] [PubMed]

*Introduction to machine learning*. Cambridge, MA: The MIT Press.

*Journal of Cataract & Refractive Surgery*, 31, 712–717. [PubMed] [CrossRef]

*Archives of Ophthalmology*, 104, 1783–1787. [PubMed] [CrossRef] [PubMed]

*Lancet*, 327, 307–310. [PubMed] [CrossRef]

*Brain*, 110, 1675–1698. [PubMed] [CrossRef] [PubMed]

*Ophthalmic and Physiological Optics*, 11, 218–226. [PubMed] [CrossRef] [PubMed]

*Spatial Vision*, 10, 433–436. [PubMed] [CrossRef] [PubMed]

*Optometry and Vision Science*, 83, 290–298. [PubMed] [CrossRef] [PubMed]

*Neurology*, 36, 1121–1125. [PubMed] [CrossRef] [PubMed]

*Behavioural Brain Research*, 10, 87–97. [PubMed] [CrossRef] [PubMed]

*The Journal of Physiology*, 197, 551–566. [PubMed] [Article] [CrossRef] [PubMed]

*Neural Computation*.

*Vision Research*, 42, 2137–2152. [PubMed] [CrossRef] [PubMed]

*Journal of Mathematical Psychology*, 40, 353–354.

*Archives of Ophthalmology*, 125, 1118–1121. [PubMed] [CrossRef] [PubMed]

*American Journal of Optometry and Physiological Optics*, 60, 394–398. [PubMed] [CrossRef] [PubMed]

*The American Journal of Psychology*, 75, 485–491. [PubMed] [CrossRef] [PubMed]

*Elements of information theory*. New York: Wiley.

*Journal of the Optical Society of America*, 48, 777–783. [PubMed] [CrossRef]

*Spatial vision*. New York: Oxford.

*British Journal of Ophthalmology*, 69, 136–142. [PubMed] [Article] [CrossRef] [PubMed]

*Sequential Monte Carlo methods in practice*. New York: Springer-Verlag.

*Sequential Monte Carlo methods in practice*. (pp. 3–14). New York: Springer-Verlag.

*Eye*, advance online publication.

*The Journal of Physiology*, 187, 517–552. [PubMed] [Article] [CrossRef] [PubMed]

*International Congress Series Vision 2005—Proceedings of the International Congress held between 4 and 7 April 2005 in London, UK*, 1282, 521–524. [Article]

*Spanish Journal of Psychology*, 8, 256–289. [PubMed] [CrossRef] [PubMed]

*British Journal of Mathematical and Statistical Psychology*, 60, 147–174. [PubMed] [CrossRef] [PubMed]

*Journal of Comparative Physiology A*, 192, 523–534. [PubMed] [CrossRef]

*American Journal of Optometry and Physiological Optics*, 61, 403–407. [PubMed] [CrossRef] [PubMed]

*Contact lenses. The CLAO guide to basic science and clinical practice*. (56, pp. 1–19). New York: Grune & Stratton.

*Functional assessment of low vision*. (pp. 77–88). St. Louis, MO: Mosby.

*International Ophthalmology Clinics*, 43, 5–15. [PubMed] [CrossRef] [PubMed]

*Current Opinion in Ophthalmology*, 17, 19–26. [PubMed] [CrossRef] [PubMed]

*Visual pattern analyzers*. New York: Oxford.

*Journal of Vision*, 9, (7):13, 1–12, http://journalofvision.org/9/7/13/, doi:10.1167/9.7.13. [PubMed] [Article] [CrossRef] [PubMed]

*Vision Research*, 17, 1049–1055. [PubMed] [CrossRef] [PubMed]

*Archives of Ophthalmology*, 102, 1035–1041. [PubMed] [CrossRef] [PubMed]

*Graefe's Archive for Clinical and Experimental Ophthalmology*, 245, 1805–1814. [PubMed] [CrossRef] [PubMed]

*Investigative Ophthalmology & Visual Science*, 49, 3049–3057. [PubMed] [Article] [CrossRef] [PubMed]

*Vision Research*, 47, 22–34. [PubMed] [CrossRef] [PubMed]

*Ophthalmic and Physiological Optics*, 18, 3–12. [PubMed] [CrossRef] [PubMed]

*Journal of Cataract and Refractive Surgery*, 15, 141–148. [PubMed] [CrossRef] [PubMed]

*Perception and Psychophysics*, 63, 1377–1388. [PubMed] [Article] [CrossRef] [PubMed]

*Journal of Neuroscience Methods*, 97, 103–110. [PubMed] [CrossRef] [PubMed]

*Experimental Eye Research*, 88, 747–751. [PubMed] [CrossRef] [PubMed]

*Journal of the Optical Society of America*, 69, 1340–1349. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 34, 885–912. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 37, 1595–1604. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 39, 4152–4160. [PubMed] [CrossRef] [PubMed]

*Perception & Psychophysics*, 63, 1421–1455. [PubMed] [Article] [CrossRef] [PubMed]

*Journal of the Optical Society of America*, 68, 845–849. [PubMed] [CrossRef]

*Journal of the Optical Society of America*, 68, 850–854. [PubMed] [CrossRef]

*Journal of the Optical Society of America*, 68, 860–865. [PubMed] [CrossRef]

*Vision Research*, 39, 2729–2737. [PubMed] [CrossRef] [PubMed]

*Journal of Mathematical Psychology*, 50, 369–389. [CrossRef]

*Journal of Mathematical Psychology*.

*Journal of Vision*, 5, (5):8, 478–492, http://journalofvision.org/5/5/8/, doi:10.1167/5.5.8. [PubMed] [Article] [CrossRef]

*Perception & Psychophysics*, 63, 1279–1292. [PubMed] [Article] [CrossRef] [PubMed]

*Journal of Vision*, 9, (8):696, 696a, http://journalofvision.org/9/8/696/, doi:10.1167/9.8.696. [CrossRef]

*Vision Research*, 46, 3160–3176. [PubMed] [CrossRef] [PubMed]

*d*′) levels in yes/no tasks [Abstract].

*Journal of Vision*, 6, (6):1097, 1097a, http://journalofvision.org/6/6/1097/, doi:10.1167/6.6.1097. [CrossRef]

*Philosophical Transactions of the Royal Society B: Biological Sciences*, 364, 399–407. [PubMed] [CrossRef]

*Journal of the Acoustical Society of America*, 49, 467–477. [PubMed] [CrossRef] [PubMed]

*Sequential optimal design of neurophysiology experiments*.

*Advances in Neural Information Processing Systems*, 19, 857.

*Neural Computation*, 21, 619–687. [PubMed] [CrossRef] [PubMed]

*Nature Neuroscience*, 12, 549–551. [PubMed] [CrossRef] [PubMed]

*Investigative Ophthalmology & Visual Science*, 46, 3161–3168. [PubMed] [Article] [CrossRef] [PubMed]

*Journal of Neuroscience Methods*, 130, 9–18. [PubMed] [CrossRef] [PubMed]

*Archives of Ophthalmology*, 102, 1303–1306. [PubMed] [CrossRef] [PubMed]

*Science*, 210, 439–440. [PubMed] [CrossRef] [PubMed]

*Archives of Ophthalmology*, 119, 1371–1373. [PubMed] [CrossRef] [PubMed]

*Neurology*, 40, 1710–1714. [PubMed] [CrossRef] [PubMed]

*Journal of the Optical Society of America A, Optics and Image Science*, 5, 2166–2172. [PubMed] [CrossRef] [PubMed]

*The Journal of Physiology*, 283, 101–120. [PubMed] [Article] [CrossRef] [PubMed]

*Psychological Review*, 116, 499–518. [PubMed] [Article] [CrossRef] [PubMed]

*Ophthalmology*, 113, 1807–1812. [PubMed] [CrossRef] [PubMed]

*Ophthalmology Clinics of North America*, 16, 171–178. [PubMed] [CrossRef] [PubMed]

*Neural Computation*, 17, 1480–1507. [PubMed] [CrossRef] [PubMed]

*Spatial Vision*, 7, 1–14. [PubMed] [CrossRef] [PubMed]

*Spatial Vision*, 10, 437–442. [PubMed] [CrossRef] [PubMed]

*Clinical Vision Sciences*, 2, 187–199.

*British Journal of Ophthalmology*, 88, 11–16. [PubMed] [Article] [CrossRef] [PubMed]

*Vision Research*, 35, 961–979. [PubMed] [CrossRef] [PubMed]

*Spatial vision*. (pp. 43–63). Boca Raton, FL: CRC Press.

*Brain*, 108, 647–676. [PubMed] [CrossRef] [PubMed]

*Proceedings of the National Academy of Sciences of the United States of America*, 101, 6692–6697. [PubMed] [Article] [CrossRef] [PubMed]

*Spatial vision*. (pp. 1–42). Boca Raton, FL: CRC Press.

*Spatial vision*. (pp. 239–249). Boca Raton, FL: CRC Press.

*Brain*, 105, 735–754. [PubMed] [CrossRef] [PubMed]

*Brain*, 104, 333–350. [PubMed] [CrossRef] [PubMed]

*Perception & Psychophysics*, 69, 311–323. [PubMed] [CrossRef] [PubMed]

*Journal of the Acoustical Society of America*, 123, 315–326. [PubMed] [CrossRef] [PubMed]

*American Journal of Ophthalmology*, 104, 64–68. [PubMed] [CrossRef] [PubMed]

*Journal of the Optical Society of America A, Optics and Image Science*, 10, 1591–1599. [PubMed] [CrossRef] [PubMed]

*British Journal of Ophthalmology*, 68, 821–827. [PubMed] [Article] [CrossRef] [PubMed]

*Contrast sensitivity: Proceedings of the Retina Research Foundation Symposia*(vol. 5, 103–116). Cambridge, MA: MIT Press.

*Vision Research*, 45, 2723–2727. [PubMed] [CrossRef] [PubMed]

*Journal of Mathematical Psychology*, 41, 431–439. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 28, 1357–1366. [PubMed] [CrossRef] [PubMed]

*Archives of Ophthalmology*, 103, 51–54. [PubMed] [CrossRef] [PubMed]

*Perception & Psychophysics*, 51, 409–422. [PubMed] [CrossRef] [PubMed]

*Journal of Vision*, 9, (7):11, 1–12, http://journalofvision.org/9/7/11/, doi:10.1167/9.7.11. [PubMed] [Article] [CrossRef] [PubMed]

*Journal of Cataract & Refractive Surgery*, 34, 570–577. [PubMed] [CrossRef]

*Perception*, 37, (ECVP Abstract Supplement), 93.

*Current Eye Research*, 5, 635–639. [PubMed] [CrossRef] [PubMed]

*Brain*, 112, 283–303. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 35, 2503–2522. [PubMed] [CrossRef] [PubMed]

*American Journal of Ophthalmology*, 121, 547–553. [PubMed] [CrossRef] [PubMed]

*Investigative Ophthalmology & Visual Science*, 48, 1627–1634. [PubMed] [Article] [CrossRef] [PubMed]

*Journal of the Optical Society of America A, Optics and Image Science*, 2, 393–398. [PubMed] [CrossRef] [PubMed]

*Human Vision and Electronic Imaging VII*, 4662, 138.

*Behavioural Brain Research*, 2, 291–299. [PubMed] [CrossRef] [PubMed]

*Journal of Cataract & Refractive Surgery*, 35, 47–56. [PubMed] [CrossRef]

*Journal of the Optical Society of America*, 57, 1082–1088. [PubMed] [CrossRef] [PubMed]

*Journal of Vision*, 5, (10):6, 823–833, http://journalofvision.org/5/10/6/, doi:10.1167/5.10.6. [PubMed] [Article] [CrossRef]

*Ophthalmic and Physiological Optics*, 22, 582–582. [CrossRef]

*Perception*, 36, (ECVP Abstract Supplement) 216.

*Journal of Vision*, 5, (9):6, 717–740, http://journalofvision.org/5/9/6/, doi:10.1167/5.9.6. [PubMed] [Article] [CrossRef]

*Perception & Psychophysics*, 47, 87–91. [PubMed] [CrossRef] [PubMed]

*Perception & Psychophysics*, 33, 113–120. [PubMed] [CrossRef] [PubMed]

*Annual Review of Biomedical Engineering*, 7, 361–401. [PubMed] [CrossRef] [PubMed]

*British Journal of Mathematical and Statistical Psychology*, 18, 1–10. [PubMed] [CrossRef] [PubMed]

*Perception & Psychophysics*, 63, 1293–1313. [PubMed] [Article] [CrossRef] [PubMed]

*Journal of the Optical Society of America A, Optics and Image Science*, 1, 124–131. [PubMed] [CrossRef]

*Ophthalmic and Physiological Optics*, 16, 513–519. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 36, 3587–3596. [PubMed] [CrossRef] [PubMed]

*Clinical & Experimental Optometry*, 78, 43–57. [CrossRef]

*Vision Research*, 46, 739–750. [PubMed] [CrossRef] [PubMed]

*Journal of Vision*, 8, (3):9, 1–10, http://journalofvision.org/8/3/9/, doi:10.1167/8.3.9. [PubMed] [Article] [CrossRef] [PubMed]