We simulated the study of
Ahumada & Beard (1998), who measured classification images in a Vernier acuity task. In our simulations, we assume a spatial kernel (the vector
w) similar to those obtained in the actual measurements (
Figure 1a) and an error function nonlinearity with
rmax = 1,
y0 = 0.5, ɛ = 1.0. In each trial of the experiment one of two possible stimuli,
x0 or
x1, is presented in the presence of additive white Gaussian noise with
σ = 1. The subject is asked to identify the stimulus. We code the subject’s response as
r = 0 for the
x0 choice and r = 1 for the
x1 choice. The probability that the subject will select
x1 is modeled as P(r = 1) = f(w · (x + n)), where x is the presented stimulus and f is the error function nonlinearity. If we pool all the trials in which only
x1 was presented, we obtain E[r | n] = f(w · x1 + w · n)
(where
n represents the noise in the stimulus). Similarly, for the trials in which only
x0 was presented, we obtain E[r | n] = f(w · x0 + w · n). The “classification images” obtained by cross-correlation between response and stimulus in these two cases should be the same and proportional to
w. Similarly, the estimated nonlinearities should be identical up to a horizontal translation (by w · (x1 − x0)). In fact, these constraints could be used as a test for the validity of the model in a 2AFC task.
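The setup above can be sketched in a short simulation. The parameter values (rmax, y0, ɛ, σ) are taken from the text; the kernel shape, stimulus dimension, the stimulus pair, and the exact parameterization of the error-function nonlinearity are illustrative assumptions, not the authors' actual stimuli:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)

# Parameter values from the text; kernel shape, dimension, and stimuli
# are hypothetical choices for illustration.
rmax, y0, eps, sigma = 1.0, 0.5, 1.0, 1.0
D = 32
w = np.exp(-0.5 * ((np.arange(D) - D / 2) / 3.0) ** 2)  # smooth spatial kernel
w /= np.linalg.norm(w)
x1 = 0.5 * w          # hypothetical stimulus pair
x0 = -0.5 * w

def f(u):
    """Error-function nonlinearity: maps the generator signal to P(r = 1)."""
    return y0 + 0.5 * rmax * erf(u / (np.sqrt(2.0) * eps))

def simulate(x, n_trials):
    """Present x in additive white Gaussian noise; return (noise, responses)."""
    n = rng.normal(0.0, sigma, size=(n_trials, D))
    p = np.vectorize(f)(n @ w + w @ x)          # P(r = 1) on each trial
    r = (rng.random(n_trials) < p).astype(float)
    return n, r

# Classification image from the x1-only trials: cross-correlate the
# mean-subtracted response with the noise fields.
n, r = simulate(x1, 50_000)
w_hat = (r - r.mean()) @ n / len(r)
w_hat /= np.linalg.norm(w_hat)
print(float(w_hat @ w))   # close to 1: recovered image is proportional to w
```

Pooling instead the x0-only trials and repeating the cross-correlation yields the same image, up to estimation noise, which is the consistency test mentioned above.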
We ran simulations that contained between 300 and 100,000 trials per stimulus. We selected one of the stimuli and calculated the linear kernel (or “classification image”) (
Figure 1b and
1c) and the estimated error function nonlinearity (
Figure 2). The nonlinearity calculations show that the proposed method converges faster than the LR/CLR methods (the result from the CLR method is not shown, but it was no better than that of the LR method). Our technique gives a reasonable approximation to the nonlinearity after only 500 trials, whereas the LR method yields a gross underestimate (
Figure 2a). After 2400 trials, the moment method has determined the nonlinearity almost perfectly, but the LR method still underestimates it (
Figure 2b). After 75,000 trials, both methods have converged to the correct solution (
Figure 2c).
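For intuition about what "estimating the nonlinearity" involves, the sketch below recovers f nonparametrically by binning the generator signal and averaging the responses in each bin. This is a simple binning estimator, not the moment method or the LR/CLR methods of the text, and it assumes the kernel direction is known (in practice one would use the estimated kernel); the model parameters follow the text, the rest is illustrative:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)

# Same assumed model as before (parameters from the text; kernel,
# dimension, and stimulus are hypothetical).
rmax, y0, eps, sigma = 1.0, 0.5, 1.0, 1.0
D = 32
w = np.exp(-0.5 * ((np.arange(D) - D / 2) / 3.0) ** 2)
w /= np.linalg.norm(w)
x1 = 0.5 * w

def f(u):
    return y0 + 0.5 * rmax * erf(u / (np.sqrt(2.0) * eps))

# Simulate 75,000 x1-only trials.
n = rng.normal(0.0, sigma, size=(75_000, D))
u = n @ w + w @ x1                       # generator signal w . (x1 + n)
r = (rng.random(len(u)) < np.vectorize(f)(u)).astype(float)

# Bin the generator signal and average the binary responses per bin:
# each bin mean estimates f at the bin center.
edges = np.linspace(-2.5, 3.5, 13)
idx = np.digitize(u, edges)
f_hat = np.array([r[idx == k].mean() for k in range(1, len(edges))])
centers = 0.5 * (edges[:-1] + edges[1:])

err = np.max(np.abs(f_hat - np.vectorize(f)(centers)))
print(float(err))   # small: the binned curve tracks the true nonlinearity
```

With long data records this estimate converges everywhere the Gaussian noise provides samples; the methods compared in the text differ mainly in how quickly they reach that regime.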
To study the convergence of the algorithm, we simulated 60 runs and estimated the average relative error of the parameters for increasing lengths of the data record, ranging from 300 to 100,000 trials. A summary of the average relative error in
y0 and ɛ vs. length of the data record is shown in
Figure 2d. After 2500 trials, the error is down to 10 percent, and it converges toward zero as the number of trials is increased further.