Computational models of spatial vision typically make use of a (rectified) linear filter, a nonlinearity, and dominant late noise to account for human contrast discrimination data. Linear–nonlinear cascade models predict an improvement in observers' contrast detection performance when low, subthreshold levels of external noise are added (i.e., stochastic resonance). Here, we address whether a single contrast gain-control model of early spatial vision can account for both the pedestal effect, i.e., the improved detectability of a grating in the presence of a low-contrast masking grating, and stochastic resonance. We measured contrast discrimination performance without noise and in both weak and moderate levels of noise. Making use of a full quantitative description of our data with few parameters, combined with comprehensive model selection assessments, we show that the pedestal effect is reduced more strongly in the presence of weak noise than in moderate noise. This reduction rules out independent, additive sources of performance improvement and, together with a simulation study, supports the parsimonious explanation that a single mechanism underlies the pedestal effect and stochastic resonance in contrast perception.

*explanation* of single-channel behavior despite offering an excellent *description* of contrast discrimination performance (e.g., Wichmann, 1999). In this paper, we exploit the gain-control model as a powerful statistical tool with few free parameters to describe our contrast discrimination data, and we use the fits to different data sets to make statistically sound inferences about changes in the data.

*average* optimal contrast, but on each trial, due to the noise, the contrast in the relevant channel will be somewhat higher or lower and thus not optimal. This will lead to a higher threshold, i.e., a reduced pedestal effect. Considered this way, the invariance to the presence of noise displayed by some observers, especially in 1-D noise (e.g., Figures 8 and 10 in Henning & Wichmann, 2007), may indicate a change in the underlying mechanism. In 2-D noise, the number of active channels sensitive to the signal is similarly important: having few active channels, perhaps even with correlated noise (Henning et al., 2002), could be expected to reduce the size of the pedestal effect because of the additionally introduced stimulus variability. On the other hand, having many active channels that sample different image regions, and hence different regions of noise, could allow noise-induced signal variability to be averaged out and thus leave the size of the pedestal effect (almost) unchanged. Whatever the exact mechanism underlying contrast discrimination in noise may be, our main interest here concerns the approximate invariance of the dipper effect to the addition of (strong) broadband noise.

^{2}. Viewing distance was 120 cm, leading to a pixel-size of 0.009° of visual angle.

*σ* of 0.27°. Stimuli had a spatial extent of 2.35° of visual angle.

^{−7} deg^{2} were used. Noise-power spectral density is defined as the luminance variance multiplied by the pixel area, expressed in squared degrees of visual angle; it is proportional to the average power at the different frequencies present in the noise. Effects of temporal waveform, duration, and bandwidth are not considered here. The maximal amount of clipping (i.e., pixels set to the minimal or maximal luminance value because of the limited 8-bit dynamic range of the DACs on the video card) at the highest noise level was around 2.5%. Simulations in MATLAB showed that this level of clipping has no significant influence on the spectral properties of the Gaussian white noise.
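The spectral-density bookkeeping and the clipping estimate described above can be made concrete. The sketch below is a Python stand-in for the original MATLAB computations; it replaces the simulation by a closed-form Gaussian tail probability, and the noise standard deviation in the final comment is an illustrative assumption, not a value reported in the text.

```python
import math

def noise_power_density(lum_variance, pixel_deg=0.009):
    """Noise-power spectral density as defined in the text:
    luminance variance multiplied by the pixel area (in deg^2)."""
    return lum_variance * pixel_deg ** 2

def clipping_fraction(noise_sd, half_range):
    """Proportion of Gaussian noise samples clipped at +/-half_range,
    i.e. pixels forced to the minimal or maximal luminance by the
    limited dynamic range (closed-form Gaussian tail, not a simulation)."""
    z = half_range / noise_sd
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

# Illustrative: a noise s.d. of ~0.223 of the half-range clips ~2.5% of
# samples, the maximal clipping level reported in the text.
```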

^{−7} deg^{2}, and the moderate noise of 42 × 10^{−7} deg^{2}, approximately a log-unit increase. We call the highest noise level used in this experiment “moderate” because, on average, it raises the detection threshold at 75% correct only by a factor of 1.34. The high detection thresholds purposely enforced through the use of very short presentation times, together with the limited dynamic range of today's graphics cards and CRTs, prevented us from using higher noise-power densities and thus from obtaining stronger masking effects.

*not* independent effects. Of course, considering only 75%-correct thresholds ignores much of the information present in our data set. To study in more detail whether and how the presence of noise changes contrast discrimination performance, we make use of the standard gain-control model introduced by Foley (1994) and elaborated by Wichmann (1999). Here we use this model mainly as a statistical tool to obtain a full quantitative description of our data with as few parameters as possible.

*stimulus theory* describes how a transduction mechanism maps physical stimuli to internal states; second, a probabilistic *theory of internal states* describes the probability distribution of the internal states that results from repeated presentation of the same stimulus; and finally, a deterministic *response theory* describes a decision rule that maps internal states to a response. In the gain-control model, the transduction mechanism was chosen to be the generalized four-parameter Naka–Rushton function (free parameters *α*, *β*, *η*, and *κ*). These parameters express the response gain (*α*), the semisaturation contrast (*β*), the response exponent (*η*), and the gain-control exponent (*κ*) of the contrast response function. One additional free parameter was added to describe the internal noise, assumed to be Gaussian and signal-independent (free parameter *σ*). Recently, there has been some debate in spatial vision about whether a signal-dependent source might also contribute to internal noise (e.g., Georgeson & Meese, 2006; Gorea & Sagi, 2001; Kontsevich et al., 2002; Wichmann, 1999). Level-dependent noise may be needed to explain contrast discrimination performance at high pedestal levels; it was not needed here because the majority of data points were gathered at relatively low pedestal levels, and omitting it reduces the number of free parameters. This is not to argue that level-dependent noise is never crucial: indeed, one of us has shown that level-dependent noise is critically needed to fit contrast discrimination data at high pedestal contrasts (Wichmann, 1999).
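For concreteness, one common parameterization of the generalized Naka–Rushton transducer can be sketched as below (Python; this particular form is an assumption for illustration, and the exact formulation used by Foley, 1994, and Wichmann, 1999, may differ in detail):

```python
def naka_rushton(c, alpha, beta, eta, kappa):
    """Generalized Naka-Rushton contrast response (one common form,
    assumed here): response gain alpha, semisaturation contrast beta,
    response exponent eta, and gain-control exponent kappa."""
    return alpha * c ** eta / (beta ** kappa + c ** kappa)
```

When *η* = *κ*, the response at *c* = *β* is exactly *α*/2, which is why *β* is called the semisaturation contrast in that case.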

*p*(Δ*x*, *x*), as a function of the contrast increment (Δ*x*) and the pedestal contrast (*x*) in a 2AFC task.

*z* is a dummy variable, and *f*(Δ*x*, *x*) and *g*(Δ*x*, *x*) are given by
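Under the assumption of equal-variance Gaussian internal states, this construction reduces to a simple closed form; the Python sketch below makes that reduction explicit (the `transducer` argument is a caller-supplied placeholder, not a function from the paper):

```python
import math

def p_correct_2afc(dx, x, transducer, sigma=1.0):
    """2AFC proportion correct for increment dx on pedestal x, assuming
    equal-variance Gaussian internal states: p = Phi(d'/sqrt(2)), which
    simplifies to 0.5 * (1 + erf(d'/2))."""
    dprime = (transducer(x + dx) - transducer(x)) / sigma
    return 0.5 * (1.0 + math.erf(dprime / 2.0))
```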

*σ* was taken to be 1, resulting in a model with four free parameters. An additional (highly constrained) vector of free parameters *λ* (“lapse rates”), estimated for each noise and pedestal combination as explained by Wichmann and Hill (2001a, 2001b), was introduced in the fitting of the model to avoid biased parameter estimates (for details, see Wichmann, 1999). Priors were introduced for each parameter to constrain estimates to realistic values. To find the surface *p*(Δ*x*, *x*) that maximizes the likelihood that the data were generated by a process with success probability *p*(Δ*x*, *x*), the log-likelihood of the surface given the parameters (*α*, *β*, *η*, and *κ*) was maximized using purpose-written software in MATLAB (*fminsearch*, which implements the Nelder–Mead simplex search method). The log-likelihood of the surface *p*(Δ*x*, *x*) given parameter vector **θ**, containing {*α*, *β*, *η*, *κ*} with *σ* = 1 and **λ** equal to the lapse-rate vector derived from the psychometric-function fits, is given by Equation 4:

*n*_{ji} is the number of trials (block size) measured at pedestal contrast *j* and signal contrast *i*, and *y*_{ji} is the proportion of correct responses in that condition. Because the problem is nonconvex due to **λ**, a multi-start procedure with semi-randomly chosen initial parameter values was used. For each model fit reported, at least 20 different starting points were used.
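A minimal sketch of the binomial log-likelihood that is maximized (Equation 4, up to the constant binomial coefficient; any simplex optimizer can play the role of *fminsearch*) might look as follows in Python:

```python
import math

def binom_loglik(p_model, y, n):
    """Binomial log-likelihood of observed proportions y in blocks of n
    trials under model success probabilities p_model (Equation 4, up to
    the constant binomial coefficient)."""
    ll = 0.0
    for p, yj, nj in zip(p_model, y, n):
        k = round(yj * nj)                   # number of correct responses
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard the logarithms
        ll += k * math.log(p) + (nj - k) * math.log(1.0 - p)
    return ll
```

A multi-start procedure simply calls the optimizer on this objective from many initial parameter vectors and keeps the best result.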

*χ*^{2}-distributed, with degrees of freedom equal to the number of data blocks minus the number of free parameters, provided the model is correct and the observer behaves perfectly stationarily during the whole experiment (and thus generates truly binomially distributed data). For a variety of reasons, this is often not the case: responses of nonstationary observers are more variable than binomially distributed data and thus lead to higher deviances (overdispersion). Wichmann (1999) has shown that, due to the typically small number of measurements, asymptotically derived deviance distributions often fail to approximate the real deviance distribution for psychophysical data sets. The real deviance distribution, however, can easily be estimated by Monte Carlo simulation. Following Wichmann (1999), we estimated the deviance distribution for each model fit by means of 10,000 simulated data sets of an observer whose correct responses in our experiment are binomially distributed as specified by the model fit, and derived critical values for each reported fit from these simulations. These values indeed often deviate in an unpredictable manner from the asymptotically derived values, confirming Wichmann (1999). Of course, the critical values do not take the nonstationarity of real observers into account, so overdispersion may still occur.
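The Monte Carlo procedure can be sketched compactly. The block below (Python, illustrative; block sizes and probabilities are arbitrary examples, not our experimental values) simulates a stationary binomial observer and reads off a critical deviance value:

```python
import math, random

def deviance(y, p, n):
    """Deviance of observed proportions y against model probabilities p
    for blocks of n trials: twice the log-likelihood ratio of the
    saturated model to the fitted model."""
    d = 0.0
    for yj, pj, nj in zip(y, p, n):
        k = yj * nj
        for observed, expected in ((k, pj * nj), (nj - k, (1.0 - pj) * nj)):
            if observed > 0:
                d += 2.0 * observed * math.log(observed / expected)
    return d

def mc_critical_value(p, n, n_sims=10000, alpha=0.05, seed=1):
    """Monte Carlo critical deviance value for a stationary binomial
    observer with success probabilities p, following Wichmann (1999)."""
    rng = random.Random(seed)
    devs = []
    for _ in range(n_sims):
        y = [sum(rng.random() < pj for _ in range(nj)) / nj
             for pj, nj in zip(p, n)]
        devs.append(deviance(y, p, n))
    devs.sort()
    return devs[int((1.0 - alpha) * n_sims)]
```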

*d*_{ji} is defined as the square root of the deviance value calculated for data point *i* in isolation, signed according to the direction of the arithmetic residual *y*_{ji} − *p*(Δ*x*_{ji}, *x*_{j}). For binomial data, this is expressed by Equation 6,

*D* =
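A sketch of this signed residual for a single block, assuming the standard binomial single-point deviance (the code spells out the prose definition; it is not a transcription of Equation 6 itself):

```python
import math

def signed_deviance_residual(y, p, n):
    """Signed deviance residual for one block of n trials: the square
    root of the single-point deviance of observed proportion y against
    model probability p, signed by the arithmetic residual y - p."""
    d = 0.0
    if y > 0.0:
        d += 2.0 * y * n * math.log(y / p)
    if y < 1.0:
        d += 2.0 * (1.0 - y) * n * math.log((1.0 - y) / (1.0 - p))
    return math.copysign(math.sqrt(max(d, 0.0)), y - p)
```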

*predictive accuracy*, different quantitative methods have been suggested (for reviews and overviews, see Myung, 2000; Pitt & Myung, 2002; Pitt, Myung, & Zhang, 2002; Wasserman, 2000; Zucchini, 2000). As there is no generally agreed consensus as to which method is best, we used three different model selection criteria: *Akaike's information criterion* (AIC), the *Bayesian information criterion* (BIC), and *cross-validation* (CV). AIC trades off simplicity against goodness-of-fit for nested models; it is commonly formulated for model family **F** as given by Equation 7.

*l* is the number of adjustable parameters.^{1} As for all model selection methods mentioned, the model that minimizes the criterion should be selected. AIC has the additional advantage that, for nested models, the reduction in AIC can be compared to a *χ*^{2}-distribution with degrees of freedom equal to the difference in the number of free parameters between the models.

ln(*n*) > 2, i.e., *n* > 7.
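The two criteria, expressed in deviance units (an assumed but standard form), and the crossover at *n* = 7 can be written as:

```python
import math

def aic(deviance, n_params):
    """AIC expressed in deviance units (assumed form): D + 2l."""
    return deviance + 2.0 * n_params

def bic(deviance, n_params, n_obs):
    """BIC in deviance units: D + l * ln(n). BIC penalizes additional
    parameters more heavily than AIC as soon as ln(n) > 2, i.e. n > 7."""
    return deviance + n_params * math.log(n_obs)
```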

*training error*. The normalized deviance of the same parameter estimates on the subsample that was left out during parameter estimation (i.e., the test set) is called the *test error* (see Equation 9). By minimizing test error, CV places a strong and intuitively appealing emphasis on generalizability (and large differences between training and test error are indicative of over-fitting).

*λ* was re-estimated for each pedestal and noise combination. Each subsample serves once as the test set, which results in ten estimates of test error. Assuming stationary data, i.e., that training and test data come from the same distribution, a model that is correct, in particular one that does not over-fit, has a test error equal to its training error. We ran ten iterations of 10-fold CV, leading to 100 parameter estimates and their associated test errors.
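The skeleton of one such iteration can be sketched as follows (Python; `fit` and `test_error` are caller-supplied placeholders standing in for the gain-control model fit and the normalized-deviance computation, not functions from the paper):

```python
import random

def ten_fold_cv(blocks, fit, test_error, seed=0):
    """One iteration of 10-fold cross-validation: split the data blocks
    into ten folds, fit on nine of them, and evaluate the test error on
    the held-out fold; each fold serves once as the test set."""
    rng = random.Random(seed)
    idx = list(range(len(blocks)))
    rng.shuffle(idx)
    folds = [idx[i::10] for i in range(10)]
    errors = []
    for fold in folds:
        held_out = set(fold)
        train = [blocks[i] for i in idx if i not in held_out]
        test = [blocks[i] for i in fold]
        theta = fit(train)
        errors.append(test_error(theta, test))
    return errors
```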

| | *α* | *β* | *η* | *κ* | *σ* | *D*_{Total} | *D*_{NN} | *D*_{WN} | *D*_{MN} |
|---|---|---|---|---|---|---|---|---|---|
| L.V. | 17.95 | 0.074 | 2.87 | 2.39 | 1 | 1.51** | 1.36 | 2.01** | 1.18 |
| E.G. | 10.98 | 0.050 | 3.70 | 3.19 | 1 | 1.52** | 1.88** | 1.74** | 0.93 |
| B.B. | 16.99 | 0.059 | 3.31 | 2.69 | 1 | 1.20 | 1.17 | 1.39 | 1.03 |
| L.V.E. | 10.34 | 0.051 | 3.61 | 3.14 | 1 | 1.17 | 1.13 | 1.24 | 1.14 |

*β* (i.e., the semi-saturation contrast if both exponents are equal, corresponding to the location of the trough of the dip, indicated by the vertical dashed lines), while the deviance residuals of the weak-noise condition (Figure 2e) display a decreasing trend in the same region. The deviance residuals of the moderate-noise condition (Figure 2f) display no systematic trend. Indeed, a linear regression analysis relating the logarithm of pedestal contrast to deviance residual revealed that the deviance residuals of the no- and weak-noise conditions differ significantly at low pedestal contrasts. Figure 3 shows the results of this linear regression analysis: the full lines depict the best-fitting linear curves to the data, the dashed lines the 99.15% *confidence bands* of these curves, and the circles the mean deviance residuals. Deviance residuals of the decreasing and rising parts of the dipper function were analyzed separately. For both pedestal-contrast regions, six comparisons are of interest: do the three lines differ from 0, indicating a systematic misfit of the model, and do they differ from each other, indicating systematic differences between noise conditions? The overall probability of making a Type I error, i.e., falsely rejecting the null hypothesis, thus equals 0.05 (i.e., 1 − (0.9915)^{6}) for both the decreasing and the rising part of the dipper function.^{2}

*η* and *κ*, was fitted to each noise condition separately for each observer. An example of these fits can be seen in the upper row of Figure 4 for observer E.G.

*D* = 1.87). Fitting this condition with an expanded six-free-parameter version of the gain-control model (i.e., adding one signal-dependent noise source with both a multiplicative and an exponential component) improved the quality of fit only marginally, to 1.80, which is not significantly better according to AIC or BIC. Most likely, this data set is over-dispersed, i.e., observer E.G. displayed nonstationary behavior in the no-noise condition (this, of course, cannot be fixed by *any* other model: the “error” is intrinsic to the data set).

| | *α* | *β* | *η* | *κ* | *σ* | *D* |
|---|---|---|---|---|---|---|
| No noise | | | | | | |
| L.V. | 17.95 | 0.074 | 3.40 | 3.00 | 1 | 1.06 |
| E.G. | 10.98 | 0.050 | 3.72 | 3.24 | 1 | 1.87** |
| B.B. | 16.99 | 0.059 | 4.17 | 3.55 | 1 | 1.00 |
| L.V.E. | 10.34 | 0.051 | 4.18 | 3.75 | 1 | 1.06 |
| Weak noise | | | | | | |
| L.V. | 17.95 | 0.074 | 2.79 | 2.22 | 1 | 1.13 |
| E.G. | 10.98 | 0.050 | 2.99 | 2.47 | 1 | 1.44 |
| B.B. | 16.99 | 0.059 | 3.06 | 2.41 | 1 | 1.37 |
| L.V.E. | 10.34 | 0.051 | 3.42 | 2.92 | 1 | 1.21 |
| Moderate noise | | | | | | |
| L.V. | 17.95 | 0.074 | 2.68 | 2.23 | 1 | 1.16 |
| E.G. | 10.98 | 0.050 | 3.91 | 3.39 | 1 | 0.93 |
| B.B. | 16.99 | 0.059 | 2.94 | 2.33 | 1 | 1.00 |
| L.V.E. | 10.34 | 0.051 | 3.17 | 2.70 | 1 | 1.11 |

*α*) and semisaturation contrast (*β*) to the estimates of the fit to the pooled noise conditions, while leaving the exponents (*η* and *κ*) free to capture the differences between the noise conditions, leads to a parsimonious model (0.033 free parameters per block of 50 trials, which corresponds to about 1 free parameter per 1,500 trials) that successfully describes our data. Compared to the simultaneous fit to all data, allowing the exponents to vary over noise conditions improves the quality of fit and removes the systematic trends in the deviance residuals at low pedestal contrasts. We now assess whether the improvement brought about by the additional free parameters is sufficiently large, as judged by the methods of model selection.

*χ*^{2}-distribution with eight degrees of freedom (e.g., 99.9% of the area of this distribution lies below 26.125). We may thus conclude that, considered across noise conditions and observers, the response gain and semisaturation contrast may be frozen, but the exponents of the Naka–Rushton equation are better left free. Second, we can repeat this analysis for each noise condition across observers and for each observer across noise conditions. Finally, we can do this analysis for each condition within each observer.

AIC_{1} − AIC_{2} = 5.66; *p* < 0.059). We suspect that this may at least in part result from a relative lack of data that reduces statistical power (she completed only 4,500 trials), because her deviance residuals and parameter estimates are not inconsistent with those of the other observers. When considering the different noise conditions, it is clear that the improvement in predictive accuracy is mainly due to the better fits to the weak-noise condition (AIC_{1} − AIC_{2} = 70.67; *p* < 10^{−5}) and the no-noise condition (AIC_{1} − AIC_{2} = 27.44; *p* < 10^{−5}). For the moderate-noise condition, the fits were already good in the first approach, so not much could be gained (AIC_{1} − AIC_{2} = 1.62; *p* < 0.65).

| | No noise | Weak noise | Moderate noise | ∑ |
|---|---|---|---|---|
| L.V. | 16.78*** | 51.01*** | 0.36 | 68.15*** |
| E.G. | −2.16 | 17.27** | −0.80 | 14.31*** |
| B.B. | 9.45** | 1.14 | 1.02 | 11.61** |
| L.V.E. | 3.37 | 1.25 | 1.04 | 5.66 |
| ∑ | 27.44*** | 70.67*** | 1.62 | |

| | No noise | Weak noise | Moderate noise | ∑ |
|---|---|---|---|---|
| L.V. | 15.50 | 49.73 | −0.91 | 64.32 |
| E.G. | −3.44 | 16.00 | −2.07 | 10.49 |
| B.B. | 8.18 | −0.13 | −0.25 | 7.8 |
| L.V.E. | 2.56 | 0.44 | 0.23 | 3.23 |
| ∑ | 22.81 | 66.04 | −3 | |

*test error*) and the difference in test error between the two modelling approaches for each observer. As indicated by the positive differences in test error, the separate fits provide better predictions for *unseen* data for every observer. Note, in addition, that for three of the four observers the average test error in each noise condition falls within the 99% confidence interval of the distribution of deviance values of a stationary observer: the data are thus very well described by our model.

| | Model I | Model II | Model I − Model II |
|---|---|---|---|
| L.V. | 1.54** | 1.16 | 0.39 |
| E.G. | 1.54** | 1.47** | 0.07 |
| B.B. | 1.24* | 1.15 | 0.09 |
| L.V.E. | 1.19 | 1.16 | 0.04 |
| ∑ | 1.38 | 1.24 | 0.15 |

*η*, the gain-control exponent *κ*, and their difference are shown in Figure 7. We first compare parameter estimates for the no-noise and weak-noise conditions. For all observers, both exponents are estimated to be reduced in the presence of weak noise. Furthermore, the difference between the exponents is always larger in the weak-noise condition than in the no-noise condition. It is this difference, together with the absolute value of the response exponent, that determines the strength of the pedestal effect: the smaller the difference and the larger the response exponent, the bigger the pedestal effect. In other words, the depth of the dipper function is reduced in the presence of weak noise for all observers. Interestingly, the results are not as systematic for the moderate-noise condition. For some observers, exponents are estimated to be reduced relative to the no-noise condition (e.g., observer B.B.), but for others this is clearly not the case (e.g., observer E.G.). Similar variability across observers is present in the differences between the two exponents: for some, this difference increased in the presence of moderate noise (e.g., observer L.V.E.), but for others it did not (e.g., observer B.B.). This variation across observers is not inconsistent with previously published data sets: the dipper function of some observers appears invariant to the presence of strong noise, while that of others does not (e.g., Henning & Wichmann, 2007).

*β*, the semi-saturation contrast. In this region, the contrast response behaves as an accelerating nonlinearity. Because internal variance is constant at all contrast levels in our fits (*σ* = 1), these functions can also be interpreted as detection functions (signal-to-noise ratio as a function of contrast). Indeed, detection sensitivity has been reported to rise in an accelerating fashion as a function of contrast (Foley & Legge, 1981; Nachmias, 1981; Nachmias & Sansbury, 1974). It is this acceleration that leads to the response expansion underlying the pedestal effect: the larger the log–log steepness at low contrasts, the stronger the pedestal effect. Comparing the no-noise condition to the weak-noise condition shows that the log–log steepness, and thus the pedestal effect, is reduced for all observers in the presence of weak noise. The reduced log–log steepness in weak noise is a consequence of the higher response at low contrasts, which in turn leads to improved sensitivity. The response difference between the no-noise and weak-noise conditions diminishes as a function of contrast and disappears completely around the semi-saturation contrast (due to the rescaling procedure). This is not inconsistent with the effect of a rectification mechanism at the output of a linear filter stage prior to the nonlinear response expansion. Due to rectification, the mean response of a linear filter is enhanced in the presence of noise. At zero pedestal contrast, the response-enhancing effect of rectification is maximal because half of the responses of a nonrectified linear filter are negative. As pedestal contrast, and thus the average filter response, increases, the proportion of negative responses drops and the response-enhancing effect of rectification diminishes until it vanishes completely. This is consistent with the contrast response functions plotted in Figure 8.
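The rectification argument is easy to verify numerically. The sketch below (Python, half-wave rectification for simplicity; the pedestal and noise values are illustrative, not our stimulus levels) shows that noise raises the mean rectified response most at zero pedestal and that the boost shrinks as the pedestal grows:

```python
import random

def mean_rectified_response(pedestal, noise_sd, n=100000, seed=7):
    """Monte Carlo mean of a half-wave-rectified filter response,
    max(0, pedestal + external noise): noise raises the mean response
    most at zero pedestal, and the boost shrinks as pedestal grows."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += max(0.0, pedestal + rng.gauss(0.0, noise_sd))
    return total / n
```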

*(optimal) template matching*, followed by full-wave rectification (e.g., Lu & Dosher, 2008). Template matching is a convenient way to transform 2-D input images into 1-D “filter responses.” Prior to this filter stage, *stimulus sampling*, or limited calculation efficiency, was assumed, as is often the case in detection-in-noise models (e.g., Lu & Dosher, 2008). This is described by the parameter *k*, which expresses the proportion of available information used by the observer and ranges between 0 and 1. Inefficiencies in the visual system need not be conceptualized as sampling: *k* could also be thought of as reflecting the use of a suboptimal filter, for instance a spatial-frequency-tuned channel with an effective bandwidth broader than the narrowband Gabor signal. To describe the nonlinear mapping of stimulus contrast to internal contrast representation, the second part of the transduction mechanism consisted of the generalized four-parameter Naka–Rushton function (free parameters *α*, *β*, *η*, and *κ*). The rectified filter responses used in the expansive (numerator) and compressive (denominator) parts of the Naka–Rushton function were the same. Although some evidence points to the existence of a broadly tuned contrast gain-control pool (e.g., Foley, 1994; Holmes & Meese, 2004), we opted for within-channel suppression only in this model to avoid increasing the number of parameters. Furthermore, because the spectral properties of the noise were the same in the weak- and moderate-noise conditions, this simplification is unlikely to have a significant impact on our conclusions. The transduction mechanism in this simulation is thus fully determined by specifying the sampling (*k*) and the parameters of the generalized Naka–Rushton equation (*α*, *β*, *η*, and *κ*).

*σ*). The effect of the external noise on internal variability was derived from Monte Carlo simulations. With the noise levels used in our experiments as input to the gain-control model, *k*, *η*, *κ*, and *β* were varied; effects of *α*, the response gain, need not be simulated. Descriptive functions were fitted to the simulated response variances. Assuming, as a first approximation, equal variance of the signal-plus-noise and noise representations allowed us to use these descriptive functions to formalise the full model behavior.

*image sampling* (*k*), *template matching*, *response rectification*, *nonlinear transduction* (*α*, *β*, *η*, and *κ*), and *late noise addition* (*σ*). All these components are illustrated in the upper row of Figure 9. As in the rest of the paper, *σ* was frozen at 1. The other five model parameters were chosen such that the (normalized) simulated thresholds approximate the data shown in Figure 1. We found this to be the case for *k* = 0.05 (the sampling), *α* = 2 × 10^{6} (the response gain), *β* = 0.025 (the semisaturation contrast), *η* = 3 (the response exponent), *κ* = 1.75 (the gain-control exponent), and *σ* = 1 (the late noise). Figure 9a shows the psychometric functions relating detection performance to signal contrast without noise (red) and in the presence of weak (green) and moderate (blue) noise on semi-logarithmic coordinates. As in the detection-in-noise data discussed in this paper and shown in Figure 1a, the addition of weak noise improves contrast detection performance; this is not the case for moderate noise. In weak noise, the 75%-correct detection threshold is reduced by a factor of 1.39 (as compared to 1.39 across observers). In moderate noise, this threshold is increased by a factor of 2.14 (1.34 across observers). Furthermore, the psychometric functions are not parallel on semi-logarithmic coordinates. With these parameter values, the Goris et al. (2008) gain-control model thus mimics several aspects of human detection-in-noise data.
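The pipeline just listed can be sketched in miniature. The block below (Python rather than the original MATLAB) uses the parameter values from the text but collapses the template-matching stage to a single scalar filter response, so it is an illustrative reduction of the model, not the simulation that produced Figure 9, and it makes no attempt to reproduce the reported threshold factors:

```python
import random

# Parameter values from the simulation described in the text.
K, ALPHA, BETA, ETA, KAPPA, SIGMA = 0.05, 2e6, 0.025, 3.0, 1.75, 1.0

def internal_response(contrast, ext_noise_sd, rng):
    """One interval of the pipeline, collapsed to a scalar: sampling (K)
    and template matching yield a noisy filter response, which is
    full-wave rectified, passed through the generalized Naka-Rushton
    transducer, and perturbed by additive late noise (SIGMA)."""
    filter_response = K * (contrast + rng.gauss(0.0, ext_noise_sd))
    c = abs(filter_response)                 # full-wave rectification
    r = ALPHA * c ** ETA / (BETA ** KAPPA + c ** KAPPA)
    return r + rng.gauss(0.0, SIGMA)

def p_correct(contrast, ext_noise_sd, trials=2000, seed=3):
    """2AFC detection: the interval with the larger response wins."""
    rng = random.Random(seed)
    hits = sum(internal_response(contrast, ext_noise_sd, rng)
               > internal_response(0.0, ext_noise_sd, rng)
               for _ in range(trials))
    return hits / trials
```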

*α* = 12.3 and *β* = 0.04). Note that this gain-control model is similar but not identical to the model used to produce these data. The most interesting parameter estimates are shown in Figure 10 (compare to Figure 7). The three parameter changes discussed above are also present in the parameter estimates for the simulated data set (i.e., a reduction in both the response and gain-control exponents and an increased difference between these exponents in the presence of weak noise, relative to no noise). This simulation thus shows that a single mechanism underlying the pedestal effect and stochastic resonance can produce the signature of reduced exponents when the gain-control model is fitted to the data. These similarities further support interpreting the contrast discrimination data discussed in this paper as consistent with the idea that a single mechanism underlies the pedestal effect in contrast discrimination and stochastic resonance in contrast detection.

*reduced* in the presence of weak noise for all observers. This reduction clearly rules out independent, additive sources of performance improvement, and it cannot simply be attributed to response variability additionally introduced by the weak noise, because the reduction was smaller and less consistent in the presence of moderate noise. We further showed that a single mechanism responsible for both the pedestal effect and stochastic resonance can produce the signature of reduced exponents when the gain-control model is fitted to the data. Given that the pattern of parameter changes in the real data matches that in the simulated data (under the hypothesis of a single mechanism), and that the alternative hypothesis is ruled out by model selection, we interpret these data as indicating that a single mechanism underlies the pedestal effect and stochastic resonance in contrast perception.

^{2} Detection data were omitted from the low-pedestal-contrast analysis. The reasons are twofold. First, due to the normalisation procedure, most differences between deviance residuals as a function of pedestal contrast will be removed for detection. Second, because of the logarithmic transformation of signal contrast, there is no correct location for these deviance residuals on the contrast axis.

*Spatial Vision*, 10, 485–489. [PubMed] [CrossRef] [PubMed]

*Spatial Vision*, 10, 403–414. [PubMed] [CrossRef] [PubMed]

*Spatial Vision*, 20.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 19, 1267–1273. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 38, 267–280. [PubMed] [CrossRef] [PubMed]

*The Journal of Physiology*, 203, 237–260. [PubMed] [Article] [CrossRef] [PubMed]

*Vision Research*, 26, 991–997. [PubMed] [CrossRef] [PubMed]

*Color Research and Application*, 14, 23–34. [CrossRef]

*Spatial Vision*, 10, 433–436. [PubMed] [CrossRef] [PubMed]

*Journal of Mathematical Psychology*, 44, 108–132. [PubMed] [CrossRef] [PubMed]

*The Journal of Physiology*, 204, 283–298. [PubMed] [Article] [CrossRef] [PubMed]

*The Journal of Physiology*, 197, 551–556. [PubMed] [Article] [CrossRef] [PubMed]

*Nature*, 383, 770. [CrossRef] [PubMed]

*Journal of the Optical Society of America A, Optics and Image Science*, 2, 1160–1169. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 29, 241–246. [PubMed] [CrossRef] [PubMed]

*Spatial Vision*. Oxford: Oxford University Press.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 11, 1710–1719. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 37, 2779–2788. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 21, 1041–1053. [PubMed] [CrossRef] [PubMed]

*Journal of the Optical Society of America A, Optics and Image Science*, 15, 1036–1047. [CrossRef]

*British Journal for the Philosophy of Science*, 50, 83–102. [CrossRef]

*Journal of Mathematical Psychology*, 44, 205–231. [PubMed] [CrossRef] [PubMed]

*British Journal for the Philosophy of Science*, 45, 1–35. [CrossRef]

*Vision Research*, 27, 369–379. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 46, 4294–4303. [PubMed] [CrossRef] [PubMed]

*Nature Neuroscience*, 4, 1146–1150. [PubMed] [CrossRef] [PubMed]

*Journal of Vision*, 8, (9):4, 1–15, http://journalofvision.org/8/9/4/, doi:10.1167/8.9.4. [Article] [CrossRef] [PubMed]

*Visual pattern analyzers*. Oxford: Oxford University Press.

*Vision Research*, 11, 251–259. [PubMed] [CrossRef] [PubMed]

*Signal detection theory and psychophysics*. New York: John Wiley & Sons, Inc.

*The elements of statistical learning theory*. New York: Springer.

*Journal of the Optical Society of America A, Optics and Image Science*, 5, 1362–1373. [PubMed] [CrossRef] [PubMed]

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 19, 1259–1266. [PubMed] [CrossRef] [PubMed]

*Journal of the Optical Society of America*, 73, 851–854. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 15, 887–897. [PubMed] [CrossRef] [PubMed]

*Journal of the Optical Society of America*, 71, 574–581. [PubMed] [CrossRef] [PubMed]

*Journal of Vision*, 7, (1):3, 1–15, http://journalofvision.org/7/1/3/, doi:10.1167/7.1.3. [PubMed] [Article] [CrossRef] [PubMed]

*Journal of Vision*, 4, (12):7, 1080–1089, http://journalofvision.org/4/12/7/, doi:10.1167/4.12.7. [PubMed] [Article] [CrossRef]

*Advances in neural information processing systems* (Vol. 19, pp. 689–696). Cambridge, MA: MIT Press.

*Vision Research*, 42, 1771–1784. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 21, 457–467. [PubMed] [CrossRef] [PubMed]

*Journal of the Optical Society of America*, 70, 1458–1471. [PubMed] [CrossRef] [PubMed]

*Journal of the Optical Society of America A, Optics and Image Science*, 4, 391–404. [PubMed] [CrossRef] [PubMed]

*Psychological Review*, 115, 44–82. [PubMed] [CrossRef] [PubMed]

*Journal of Mathematical Psychology*, 44, 190–204. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 21, 215–223. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 14, 1039–1042. [PubMed] [CrossRef] [PubMed]

*Human vision, visual processing, and digital display III, SPIE Proceedings 1666*(pp. 41–56). Bellingham, WA: SPIE.

*Journal of the Optical Society of America A, Optics, and Image Science*, 2, 1508–1532. [PubMed] [CrossRef]

*Spatial Vision*, 10, 437–42. [PubMed] [CrossRef]

*Spatial Vision*, 10, 443–446. [PubMed] [CrossRef]

*Trends in Cognitive Sciences*, 6, 421–425. [PubMed] [CrossRef] [PubMed]

*Psychological Review*, 109, 472–491. [PubMed] [CrossRef] [PubMed]

*Journal of Mathematical Psychology*, 44, 92–107. [PubMed] [CrossRef] [PubMed]

*Nature*, 302, 419–422. [PubMed] [CrossRef] [PubMed]

*Some aspects of modelling human spatial vision: Contrast discrimination*. Oxford, UK: The University of Oxford.

*Perception & Psychophysics*, 63, 1293–1313. [PubMed] [CrossRef]

*Perception & Psychophysics*, 63, 1314–1329. [PubMed] [CrossRef]

*Advances in neural information processing systems* (pp. –1496). Cambridge, MA: MIT Press.

*Nature*, 373, 33–36. [PubMed] [CrossRef] [PubMed]

*Vision Research*, 35, 1979–1989. [PubMed] [CrossRef] [PubMed]

*Journal of Vision*, 2, (3):4, 243–255, http://journalofvision.org/2/3/4/, doi:10.1167/2.3.4. [PubMed] [Article] [CrossRef]

*Brain Research*, 869, 251–255. [PubMed] [CrossRef] [PubMed]

*Journal of Mathematical Psychology*, 44, 41–61. [PubMed] [CrossRef] [PubMed]