Classification images provide an important new method for learning which parts of the stimulus are used to make perceptual decisions, and a new tool for measuring the template an observer uses to accomplish a task. Here we introduce a method using one-dimensional sums of sinusoids as both test stimuli (discrete frequency patterns [DFP]) and as noise. We use this method to study and compare the templates used to detect a target and to discriminate the target’s position in central and parafoveal vision. Our results show that, unsurprisingly, the classification images for detection in both foveal and parafoveal vision resemble the DFP test stimulus, but are considerably broader in spatial frequency tuning than the ideal observer’s. In contrast, the classification images for foveal position discrimination are not ideal, and depend on the size of the position offset. Over a range of offsets from close to threshold to about 90 arc sec, our observers appear to use a peak strategy (responding to the location of the peak of the luminance profile of the target plus noise). Position acuity is much poorer in the parafovea, and this is reflected in the reduced root efficiency (i.e., square root of efficiency) and the coarse classification images for peripheral position discrimination. The peripheral position template is a low spatial frequency template.

*m* ranging from 1 to 11. As seen in Equation 1, the test contrast, *c*, is defined as the peak contrast at the center of the spatial pattern, *y* = 0. The normalization in Equation 3 assures that Equation 2 has the same definition. The term cos(2π 6*y*) is the carrier and the term cos(π*y*)^{10} is the envelope. The envelope peaks at unity, falls to 0.5 at *y* = ±0.117 degrees, and is zero at *y* = ±0.5 degrees. In the frequency domain, the envelope has components ranging from 0 to 5 c/degree in 1 c/degree steps. Equation 3 gives the spectrum of components of the full stimulus. The target (Equation 1) has the advantage of being localized in both space and spatial frequency, but with a well-characterized discrete frequency spectrum (neglecting the truncation outside the displayed region). One cycle of the fundamental was shown. The fundamental was 1 c/degree, so that the test and noise patterns subtended 1 degree vertically. The gratings were also 1 degree horizontally, so that the stimulus was square.

*b*_{m} and *d*_{m} are zero-mean, unit-variance Gaussian random numbers. In our experiments, *n* was 4%. Because the test and noise patterns are matched on average in their spectral characteristics, the noise provides a very potent mask. Discrete component noise has several advantages over noise with continuous spectra. (1) Discrete component noise strength can be specified in contrast units rather than in energy density units. (2) Ideal observer predictions can be computed in a straightforward manner, as will be discussed. (3) Because the noise can be specified by a small number of coefficients, linear regression rather than reverse correlation can be used to obtain the classification image, with a reduction in the number of trials needed for a given image quality (Klein & Levi, 2002). In this study, we obtained the coefficients for each run, and then averaged them.
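A sample of this discrete-component noise can be generated in a few lines. This is our reading of the noise construction (Equation 4), with `dfp_noise` and the sampling grid as illustrative choices; each of the 11 frequencies gets independent Gaussian cosine and sine amplitudes scaled by n = 4%.

```python
import numpy as np

n = 0.04                                   # noise contrast per component, as in the text
m = np.arange(1, 12)                       # the 11 spatial frequencies, c/degree

# Our reading of Equation 4: noise(y) = n * sum_m [b_m cos(2*pi*m*y) + d_m sin(2*pi*m*y)]
def dfp_noise(y, rng):
    b = rng.standard_normal(11)            # zero-mean, unit-variance Gaussian amplitudes
    d = rng.standard_normal(11)
    phases = 2 * np.pi * np.outer(m, y)    # shape (11, len(y))
    return n * (b @ np.cos(phases) + d @ np.sin(phases)), (b, d)

y = np.linspace(-0.5, 0.5, 512, endpoint=False)
sample, (b, d) = dfp_noise(y, np.random.default_rng(0))
print(sample.std())                        # RMS contrast; n * sqrt(11) ~ 0.13 on average
```

Because the noise is fully described by the 22 amplitudes (b, d), each trial's noise can be logged as a short vector, which is what makes the regression analysis below practical.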

cd/m^{2} with a dark surround. The stimuli were presented on a monitor using MatVis^{TM} software (Neurometrics Institute, Berkeley, CA).

*n*^{2} (Equation 4), the performance of the ideal observer for the detection task is easy to calculate. The ideal d′ for the *m*th component is given by the signal strength of the *m*th component, *c a*_{m} (Equation 2), divided by the RMS noise strength, *n* (Equation 4). This ratio is d′_{m} = *c a*_{m}/*n* (from Equations 2–4). The total d′ is given by the Pythagorean sum of the individual d′s: d′_{ideal} = (*c*/*n*) sqrt(Σ_{m}*a*_{m}^{2}) (Equation 5).

d′_{template}, based on using a general template (the template observer) in which the 11 coefficients have weightings, *w*_{m}. The d′ value is the template response to the test pattern divided by the standard deviation of the template response to noise (Equation 6). When the template, *w*_{m}, equals the coefficients of the test pattern, *a*_{m}, Equation 6 becomes identical to Equation 5, as expected. The Pythagorean sum in Equation 5 can be calculated from Equation 3 to be sqrt(Σ_{m}*a*_{m}^{2}) = 0.419. Thus for our experiments with *n* = 4%, the ideal observer’s threshold (d′ = 1) is given by Equation 5 to be *c*_{ideal} = 4%/0.419 = 9.56%. Alternatively, the ideal observer would have d′ values of 1.24, 2.49, and 3.73 for our three test contrasts of 12%, 24%, and 36%.
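The quoted numbers can be checked by expanding cos^{10}(πy) via the binomial theorem to get the 11 coefficients a_m (envelope components at 0–5 c/degree shifted to 6 ± k c/degree by the carrier). This derivation is ours, not code from the study, but it reproduces the text's Pythagorean sum and ideal threshold to within rounding.

```python
import numpy as np
from math import comb

# Expand cos(pi*y)**10 into its 0..5 c/degree components, then shift by the
# 6 c/degree carrier via cos(A)cos(B) = [cos(A+B) + cos(A-B)] / 2.
env = np.array([comb(10, 5) if k == 0 else 2 * comb(10, 5 - k) for k in range(6)]) / 2**10
a = np.zeros(11)                      # a[m-1] is the coefficient at m c/degree
a[5] = env[0]                         # m = 6: carrier times DC term of envelope
for k in range(1, 6):                 # m = 6 +/- k
    a[5 - k] = a[5 + k] = env[k] / 2

assert abs(a.sum() - 1.0) < 1e-12     # normalization: peak contrast c at y = 0

rss = np.sqrt((a ** 2).sum())         # the Pythagorean sum in Equation 5
n = 0.04
c_ideal = n / rss                     # ideal threshold at d' = 1
print(rss, c_ideal)                   # ~0.420 and ~9.5%, within rounding of 0.419 and 9.56%
```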

*r*, between the test pattern coefficients, *a*_{m}, and the template, *w*_{m}. Correlation coefficients are always between −1 and +1. In foveal vision of our normal observers, the correlation coefficients are typically between 0.7 and 0.8.

*c a*_{m} sin(2π *m* offset)/*n*, where the factor of 2 has been removed because we are interested in the d′ versus the stimulus with no offset rather than comparing opposite offsets. The calculation of d′ for the ideal observer and the template observer is identical to what we did for the case of detection, except that *c*_{m} = *a*_{m} sin(2π *m* offset) (Equation 8) replaces *a*_{m}. The d′ for the ideal observer is d′_{ideal} = (*c*/*n*) sqrt(Σ_{m}*c*_{m}^{2}). The d′ of the template observer is equal to d′_{ideal} times the correlation between the template *w*_{m} and the coefficients *c*_{m} of Equation 8.
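A sketch of the position-task ideal observer follows. We are assuming, from the fragments above, that the position coefficients are c_m = a_m sin(2π m · offset); the function names and the offsets used are our own illustrative choices.

```python
import numpy as np
from math import comb

# Rebuild the detection coefficients a_m as before (binomial expansion of cos**10).
env = np.array([comb(10, 5) if k == 0 else 2 * comb(10, 5 - k) for k in range(6)]) / 2**10
a = np.zeros(11)
a[5] = env[0]
for k in range(1, 6):
    a[5 - k] = a[5 + k] = env[k] / 2

# Assumed position-task coefficients: c_m = a_m * sin(2*pi*m*offset),
# substituted into the detection formula d' = (c/n) * sqrt(sum c_m**2).
def dprime_ideal_position(c, offset_deg, n=0.04):
    m = np.arange(1, 12)
    c_m = a * np.sin(2 * np.pi * m * offset_deg)
    return (c / n) * np.sqrt((c_m ** 2).sum())

# d' grows roughly linearly with offset while sin(x) ~ x for every component
for off_sec in (15, 30, 60):
    print(off_sec, dprime_ideal_position(0.24, off_sec / 3600))
```

For small offsets the sine terms are nearly linear, so doubling the offset nearly doubles the ideal d′.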

*r*_{k,s} is the internal response on trial *k* at a given stimulus level, *s*; *n*_{k,s,i} are the external noise amplitudes, where the subscript *i* goes from 1 to 11 for the 11 spatial frequencies, the subscript *s* goes from 0 to 3 (detection task) or −1 to +1 (position task), and *q*_{k,s} is the internal noise plus the truncation noise that is needed to make *r*_{k,s} an integer. Equation 10 is based on the assumption that higher-order nonlinearities are negligible. We intend to investigate this assumption in future studies. The term f_{s} in Equation 10 is a constant that depends on the stimulus level. Because it is a constant, it will cancel when the response is cross-correlated with the zero-mean noise. The subscript *k* indicates the trial number for a given level, and goes from 1 to about 50 (200/4) for the detection task and about 67 (200/3) for the position task. As will be discussed in “Results,” we separately analyzed each stimulus level, *s*, to minimize bias. The coefficients *w*_{i,s} are the regression coefficients that correspond to the template weighting used by the observer. These coefficients are the classification image.
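The regression model can be illustrated with a simulated linear observer. This is our reading of Equation 10 (r = f_s + Σ_i w_i n_i + q); the template values, trial count, and internal-noise level below are invented purely for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 0.04
ntrials = 5000                                  # far more than the ~50-67 per level in the study
w_true = np.array([0.1, 0.3, 1.0, 2.0, 4.0, 5.0, 4.0, 2.0, 1.0, 0.3, 0.1])

noise = n * rng.standard_normal((ntrials, 11))  # external noise amplitudes n_{k,i}
f_s = 2.5                                       # level-dependent constant
q = 0.5 * rng.standard_normal(ntrials)          # internal noise
r = f_s + noise @ w_true + q                    # internal response (Equation 10, our reading)

X = np.column_stack([np.ones(ntrials), noise])  # intercept column absorbs f_s
coef, *_ = np.linalg.lstsq(X, r, rcond=None)
w_hat = coef[1:]                                # the classification image
print(np.round(w_hat, 1))                       # close to w_true at this trial count
```

With only the ~50–67 trials per level used in the study, the recovered coefficients would of course be far noisier; the large trial count here just makes the recovery visible.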

*s*, the internal response, *r*_{k,s}, is linearly related to the observer’s response. This assumption is equivalent to an assumption that the criteria were uniformly spaced, which seems reasonable because the observers were encouraged to distribute their responses uniformly. The subscript, *s*, enables the constant of proportionality to be included in the coefficient *w*_{i,s}, so that *r*_{k,s} can be taken as the observer’s response. How the constant of proportionality depends on the placement of criteria is considered elsewhere (Klein & Levi, 2002). The standard method to obtain the coefficients *w*_{i,s} is to cross-correlate the responses with the external noise (Equation 11), where *ntrials* is the number of trials at a given stimulus level. From Equations 10 and 11, this cross-correlation equals the template weights multiplied by the noise variance-covariance matrix, **N**, plus a second term that is noise of order *ntrials*^{−0.5} and will be neglected in the present analysis.

**N**, the noise variance-covariance matrix, is approximately a diagonal matrix with the diagonal elements being close to *n*^{2}. In that case, Equation 14 is approximately *w*_{i,s} = ⟨*r*_{k,s} *n*_{k,s,i}⟩/*n*^{2} (Equation 15).
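When **N** is approximately diagonal, the regression reduces to a simple cross-correlation. The sketch below checks our reading of Equation 15, w_{i,s} ≈ ⟨r n_i⟩/n², on the same kind of simulated linear observer (all parameter values invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 0.04
ntrials = 50000
w_true = np.array([0.1, 0.3, 1.0, 2.0, 4.0, 5.0, 4.0, 2.0, 1.0, 0.3, 0.1])

noise = n * rng.standard_normal((ntrials, 11))        # independent components: N is diagonal
r = 2.5 + noise @ w_true + 0.5 * rng.standard_normal(ntrials)

r_centered = r - r.mean()                             # the constant f_s drops out
w_hat = (r_centered[:, None] * noise).mean(axis=0) / n**2
print(np.round(w_hat, 1))
```

Because the simulated noise components are independent, the cross-correlation estimate agrees with the full regression here; correlated noise components would require the **N**^{−1} correction.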

*w*_{i,s} will have an ordinate with units. It is useful to consider the meaning of the magnitude of *w*_{i,s}. The numerator of Equation 15 has units of response times noise, and the denominator has units of noise squared. Thus *w*_{i,s} has units of response divided by noise. Because the noise is *n* = 0.04, *w*_{i,s} is 25 times the response variability. Consider, for example, *w*_{6,s} in Figure 2, whose value is *w*_{6,s} = 5. That means the 6 c/degree component of the noise contributes a variation of 5/25 = 0.2 to the response *r*_{k,s}. A larger value of *w*_{i,s} means a greater variability of responses, which would produce a lower d′. Thus we have the counterintuitive result that a larger classification image is correlated with reduced d′ (see discussion preceding Figure 9). Klein & Levi (2002) provide further details on the meaning of the magnitude of the classification components, *w*_{i,s}, including a redefinition of *w*_{i,s} that removes the response variance so that *w*_{i,s} becomes the correlation between the stimulus and response.

*w*_{i,s} versus the 11 spatial frequencies. We will also plot the classification images given by the corresponding expressions for the detection task and for the position task.

The red (plotting (*w*_{i,0} + *w*_{i,1})/2 for the two below-threshold contrast levels, 0 and 0.12) and green (for the two above-threshold contrast levels, 0.24 and 0.36) lines show that in the fovea there is actually very little influence of contrast. The relative independence of these classification images with contrast reflects the relatively low transducer exponents for detecting the DFP test pattern in noise. We calculated the transducer exponents from our rating scale data in two ways: by fitting a power function to d′ versus contrast, and by fitting a power function up to d′ = 1 and then a straight line constrained to have the same slope as the power function at d′ = 1. These two methods gave similar exponents of 0.92 and 0.89, respectively, much lower than the exponent of 1.5 to 2 typically obtained in detection experiments and consistent with Legge, Kersten, and Burgess (1987). A linear transducer function would imply that the sensitivity to small changes is independent of test level, as indicated by the regression coefficients being independent of contrast. The foveal detection classification images are reasonably similar in the three observers, and they also appear to be reasonably well matched to the ideal observer template (dotted black line), although the humans’ secondary peaks appear to be slightly narrower than the ideal’s. Note that in these, and all the subsequent classification image figures, the ordinate has arbitrary units.

*w*_{i,0}. Figure 3 shows the classification image (left) and coefficients (right) for detection, averaged across the three observers, for each of the four contrast levels (rather than grouped as above). The three non-zero contrast stimuli give nearly identical responses and coefficients. The zero-contrast condition (blue) gives a lower response (and coefficients); that is, *w*_{i,0} is less than *w*_{i,s} for *s* > 0. One possibility is that the noise that observers are trying to classify might be below threshold on some trials. That is, even though the overall transducer exponent appears to be near 1, at very low contrasts there may still be some acceleration. The classification method we are using may be very sensitive to the shape of the transducer function near zero contrast. A second explanation is that the placement of criteria at the low response categories might have been chosen to be widely spread apart, producing less response variability for the blank stimulus, thus causing a smaller classification image for blanks. Additional factors that affect the magnitude of the coefficients are discussed following Figures 4 and 9.

*c*_{ideal} = 9.56%. Thus root efficiency is the ratio of the ideal to the measured threshold contrast, *c*_{ideal}/*c*_{threshold}.

^{2})^{−3/2} to simplify Equation 26.

(2πσ)^{−1}. For σ = 1/60 degree, corresponding to the third curve from the top in Figure 11, the peak is at 60/2π = 9.5 c/degree. The classification images for small Vernier offsets have peaks above 9.5 c/degree, corresponding to a centroid mechanism with a Gaussian window with σ < 1 min. This is such a narrow window that it is reasonable to call the position mechanism a peak or slope or dipole mechanism.
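The quoted peak can be checked numerically. We assume here that the centroid template's amplitude spectrum has the derivative-of-Gaussian form f·exp(−2π²σ²f²) (our assumption, consistent with the quoted 60/2π value), which peaks at f = 1/(2πσ):

```python
import numpy as np

# Assumed spectrum of a centroid mechanism with a Gaussian window of width sigma:
# G'(f) = f * exp(-2 * pi**2 * sigma**2 * f**2), peaking at f = 1/(2*pi*sigma).
sigma = 1 / 60                                   # degrees (1 arc min)
f = np.linspace(0.01, 30, 30000)                 # c/degree
spectrum = f * np.exp(-2 * np.pi**2 * sigma**2 * f**2)
f_peak = f[np.argmax(spectrum)]
print(f_peak)                                    # ~9.55 c/degree, i.e. 60/(2*pi)
```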

*a*_{m} (Equation 3). The function G(f) has an arbitrary scale factor for convenience in plotting.

*Investigative Ophthalmology and Visual Science*, 40(Suppl.), S3015.

*Vision Research*, 39, 789–801.

*Investigative Ophthalmology and Visual Science*, 41(Suppl.), S804.

*Vision Research*, 37, 325–346.

*Journal of Physiology*, 203, 237–260.

*Science*, 214, 93–94.

*Journal of the Optical Society of America A*, 2, 1498–1507.

*Vision Research*, 37, 525–539.

*Journal of Cognitive Neuroscience*, 6, 156–164.

*Current Biology*, 10, 663–666.

*Journal of Vision*, 1(3), 46a, http://journalofvision.org/1/3/46/, DOI 10.1167/1.3.46.

*Science*, 180, 1194–1197.

*Vision Research*, 36, 3821–3826.

*Vision Research*, 33, 1241–1258.

*Psychological Review*, 94, 148–175.

*Journal of the Optical Society of America A*, 4, 391–404.

*Vision Research*, 25, 963–977.

*Vision Research*, 40, 951–972.

*Vision Research*, 34, 3293–3313.

*Vision Research*, 40, 973–988.

*Vision Research*, 34, 2215–2238.

*Science*, 285, 844–846.

*Vision Research*, 14, 1409–1420.

*Vision Research*, 24, 1387–1397.

*Vision Research*, 22, 157–162.