We used Pearson's correlation to compute dynamic classification images of biological motion in a point-light display. Observers discriminated whether a human figure embedded in dynamic white Gaussian noise was walking forward or backward. Their responses were correlated with the Gaussian noise fields frame by frame, across trials. The resultant correlation map gave rise to a sequence of dynamic classification images that were clearer than those produced by either the standard method of A. J. Ahumada and J. Lovell (1971) or the optimal weighting method of R. F. Murray, P. J. Bennett, and A. B. Sekuler (2002). Further, the correlation coefficients of all the point lights were similar to one another when pixels overlapping between the forward and backward walkers were excluded. This pattern is consistent with the hypothesis that the point-light walker is represented in a global manner, rather than with a fixed subset of point lights being more important than the others. We conjecture that the superior performance of the correlation map may reflect inherent nonlinearities in the processing of biological motion, which are incompatible with the assumptions underlying the previous methods.

*classification image*. The equation used to compute a classification image **C** was

**C** = (**N̄**_{AA} + **N̄**_{BA}) − (**N̄**_{AB} + **N̄**_{BB}),  (1)

where **N̄**_{SR} denotes the average noise field over the trials on which signal *S* (*S* ∈ {*A*, *B*}) was presented and the observer responded *R* (*R* ∈ {*A*, *B*}) in a discrimination experiment with two targets, *A* and *B*. This method of calculating a classification image, termed here the “standard” method, has led to many successful applications in studying low- and middle-level visual perception (Abbey & Eckstein, 2002; Eckstein & Ahumada, 2002; Gold, Murray, Bennett, & Sekuler, 2000; Watson & Rosenholtz, 1997).

The quality of a classification image can be quantified by its signal-to-noise ratio, SNR(**C**) = ||*E*(**C**)||^{2}/VAR(**C**), where ||**x**||^{2} = Σ_{i} *x*_{i}^{2}, *E*(·) is expected value, and VAR(·) is variance. When the observer is unbiased, such that *p*_{AA} = *p*_{BB}, the classification image calculated by the optimal weighting method is the same as that from Ahumada's method in Equation 1; accordingly, we will refer to both methods as standard except when it is necessary to distinguish them in cases of response bias. It follows that Ahumada's method is optimal when the observer is unbiased. The optimal weighting method of Murray et al. extended the applicability of the classification image technique to biased observers, multiple signal contrasts, and confidence ratings. Nonetheless, as Murray et al. noted, their method relies on a noisy linear cross-correlator model that assumes additive Gaussian internal noise and linearity. Therefore, the weighting that is optimal when these assumptions are satisfied may not be optimal when they are violated.
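The SNR measure above can be estimated empirically when many independent estimates of a classification image are available. The sketch below is a minimal illustration under one plausible reading, in which VAR(**C**) is the per-pixel variance summed over pixels; the simulated ensemble of noisy image estimates is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble: 200 noisy estimates of a 64-pixel classification
# image (e.g., from repeated simulations of the same observer).
images = rng.normal(0.05, 1.0, size=(200, 64))

def snr(images):
    """SNR(C) = ||E(C)||^2 / VAR(C), estimating E(C) by the ensemble mean
    and VAR(C) by the per-pixel sample variance summed over pixels."""
    mean_image = images.mean(axis=0)
    signal_power = np.sum(mean_image ** 2)       # ||E(C)||^2
    noise_power = np.sum(images.var(axis=0, ddof=1))  # VAR(C)
    return signal_power / noise_power

print(snr(images))
```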

*correlation map*. This method, employed in perceptual psychophysics (Richards & Zhu, 1994), is closely related to the technique of *reverse correlation* in receptive field estimation in physiology (Chauvin, Worsley, Schyns, Arguin, & Gosselin, 2005; Jones & Palmer, 1987; Ringach, Hawken, & Shapley, 1997).

*p*_{SR}, the proportion of response *R* when signal *S* is presented. The weights in the correlation method follow a normalized quadratic function of *p*_{SR}. Although the sample correlation (Pearson's correlation) is a biased estimator of the population correlation (Fisher, 1915; Zimmerman, Zumbo, & Williams, 2003), the bias is negligible when the sample size is large and the correlation is weak, which is typically the case in classification image studies. Therefore, the sample correlation is practically an unbiased and consistent estimator of the population correlation. Nevertheless, the theoretical significance of this property remains an open question in classification image studies.
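Concretely, a correlation map assigns each pixel the Pearson correlation, computed across trials, between that pixel's noise value and the observer's response. A minimal numpy sketch, with hypothetical trial data and responses coded ±1:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical trial data: per-trial noise fields and responses coded +1/-1.
n_trials, n_pixels = 1000, 64
noise = rng.normal(0.0, 1.0, size=(n_trials, n_pixels))
response = rng.choice([-1.0, 1.0], size=n_trials)

def correlation_map(noise, response):
    """Pearson correlation between each noise pixel and the response,
    taken across trials: centered cross-product over the product of
    the centered norms."""
    n_c = noise - noise.mean(axis=0)      # center each pixel across trials
    r_c = response - response.mean()      # center the responses
    num = n_c.T @ r_c
    den = np.sqrt((n_c ** 2).sum(axis=0) * (r_c ** 2).sum())
    return num / den

C = correlation_map(noise, response)
print(C.min(), C.max())  # correlations lie within [-1, 1]
```

For a single pixel this reduces to the ordinary sample correlation, so each entry agrees with `np.corrcoef` applied to that pixel's noise values and the responses.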

^{2}) aperture (84 × 120 pixels, 2.27 × 3.24 deg in visual angle), centered on a black background (1.96 cd/m^{2}).

*β* = .45; J.H., *β* = .64; and J.R., *β* = .87 (if there were no bias, *β* = 1). Three methods were used to calculate dynamic classification images: the standard method defined in Equation 1 (Ahumada & Lovell, 1971), the optimal weighting method defined in Equation A3 (Murray et al., 2002), and the correlation method defined in Equation A4. Figure 2 presents the results for observer J.R., with six frames from the resultant classification images. Observers H.L. and J.H. yielded similar results. (All classification movies are provided as supplemental materials.)

^{−6}). *F*(1,2) = 19.53, *p* = .048. Neither the main effect of point lights, *F*(9,18) = 1.18, *p* = .37, nor the two-way interaction, *F*(9,18) = 1.75, *p* = .15, was significant. The lack of any reliable differences in the correlations across individual point lights suggests that all point lights had comparable influences on the discrimination process. These results are consistent with the hypothesis that discrimination of biological motion in our task is based on global processing rather than on the characteristics of local features. Here, global processing does not necessarily mean that all available sources of information, namely, all the nonoverlapping point-light pixels, were used optimally. We also note that the above analysis would be expected to find a statistically significant difference between point lights only if participants consistently attended to a fixed subset of point lights throughout the experiment. If a participant attended to only a subset of point lights but switched randomly from one subset to another between frames or between trials, the above analysis cannot rule out that possibility.

**g** consists of two components: a noise field **N**, in which each noise pixel is independently sampled from a Gaussian distribution with mean 0 and variance *σ*^{2}, and one of the two signals {**A**, **B**} representing the two targets. The stimulus **g** can be described as

*g*_{j}^{t} = *A*_{j}^{t} + *N*_{j}^{t} (or *g*_{j}^{t} = *B*_{j}^{t} + *N*_{j}^{t}),  (A1)

where *g*_{j}^{t}, *A*_{j}^{t}, and *N*_{j}^{t}, respectively, represent the stimulus, the signal, and the noise pixel values of the *j*th pixel in the *t*th frame.

*t*th frame can be calculated with the standard method (Equation A2) and the optimal weighting method (Equation A3), respectively. For the correlation method, the response is coded as *R* ∈ {−1, 1}, where *R* = 1 if the response is **A** and *R* = −1 otherwise. The correlation map of the *t*th frame is then the Pearson correlation between each noise pixel and the response, computed across trials:

*C*_{j}^{t} = Σ_{i}(*N*_{ij}^{t} − *N̄*_{j}^{t})(*R*_{i} − *R̄*) / [√(Σ_{i}(*N*_{ij}^{t} − *N̄*_{j}^{t})^{2}) · √(Σ_{i}(*R*_{i} − *R̄*)^{2})],  (A4)

where *n* is the total number of experimental trials, in which targets *A* and *B* are each presented on *n*/2 trials, *N̄*_{j}^{t} is the average noise field of the *t*th frame across all trials, *R*_{i} is the response on the *i*th trial, and *R̄* is the mean response. Using the response proportions *p*_{SR} = 2*n*_{SR}/*n* (e.g., *p*_{AA} = 2*n*_{AA}/*n*, where *n*_{AA} is the number of trials on which target *A* was presented and response *A* was given), Equation A4 can then be rewritten in terms of the average noise fields of the four stimulus–response categories.

Noting that *p*_{AA} + *p*_{AB} = *p*_{BA} + *p*_{BB} = 1 and that *p*_{AA} + *p*_{BB} = 2*p*_{c}, in which *p*_{c} denotes the overall accuracy, the mean response is *R̄* = *p*_{AA} − *p*_{BB}. Substituting these identities yields the weights, *w*_{SR}, on the four average noise fields.

*A* trials and *B* trials differently, whereas the standard method does. In the denominator of Equation A4, the left term is proportional to the standard error of the noise fields (which varies little from pixel to pixel when there is a large number of trials). This standard error does not depend on the observer's responses. The right term in the denominator depends only on the response bias and does not vary from pixel to pixel; accordingly, it is a scale factor. The numerator can be rewritten as Σ_{i} *R*_{i}*N*_{ij}^{t}, up to a term that depends only on the mean noise and mean response. Since *R* = ±1, this term serves to add up all the noise fields on trials where the response was *A* and to subtract all the noise fields on trials where the response was *B*. In the standard methods, by contrast, the trials are averaged within the four stimulus–response categories.
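The claim that this cross-product sum simply adds the noise fields on *A*-response trials and subtracts those on *B*-response trials can be checked numerically; the arrays below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# With responses coded R = +1 ('A') and R = -1 ('B'), sum_i(R_i * N_i)
# equals the sum of noise fields on +1 trials minus the sum on -1 trials.
n_trials, n_pixels = 500, 16
noise = rng.normal(size=(n_trials, n_pixels))
response = rng.choice([-1.0, 1.0], size=n_trials)

weighted_sum = (response[:, None] * noise).sum(axis=0)
direct = noise[response == 1].sum(axis=0) - noise[response == -1].sum(axis=0)
print(np.allclose(weighted_sum, direct))  # True
```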

**T**_{1} and **T**_{2}) composed of six pixels were defined as [0 1 0 0 0 0] and [1 0 0 0 0 0], respectively. A model observer discriminated between the targets from a noisy stimulus input, that is, a target image **I** contaminated by a Gaussian white noise field **N** with mean 0 and variance 0.16. A nonlinear transformation was imposed on the noisy input, and an internal noise field **Z** was multiplied afterward. The internal noise field **Z** follows the same Gaussian distribution as the external noise field **N**. The model observer computed a decision variable *s* using *s* = ||〈*Z*_{j} exp(*I*_{j} + *N*_{j})〉 − **T**_{1}||^{2}/||〈*Z*_{j} exp(*I*_{j} + *N*_{j})〉 − **T**_{2}||^{2}, where 〈·〉 denotes element-by-element multiplication and ||·||^{2} denotes the squared Euclidean distance. The value of the decision variable was compared with a threshold of 0.9, which introduces a response bias into the model observer's performance.

*β* = .77 (if there were no bias, *β* = 1). The SNR of the classification images was 7,334 for the standard method, 6,311 for the optimal weighting method, and 8,493 for the correlation map. That the greatest SNR was obtained with the correlation method demonstrates, at least in this situation, that the correlation method can outperform the standard methods when nonlinearity is introduced.

*s* was *s* = ||〈*Z*_{j} + exp(*I*_{j} + *N*_{j})〉 − **T**_{1}||^{2}/||〈*Z*_{j} + exp(*I*_{j} + *N*_{j})〉 − **T**_{2}||^{2}, where the noise field **Z** followed a Gaussian distribution with a mean of 0 and a standard deviation of exp(**I** + **N**).

*Journal of Vision*, 2, (1), 66–78, http://journalofvision.org/2/1/5/, doi:10.1167/2.1.5.

*Journal of Vision*, 2, (1), 121–131, http://journalofvision.org/2/1/8/, doi:10.1167/2.1.8.

*Journal of the Acoustical Society of America*, 49, 1751–1756.

*Proceedings of SPIE*, 3299, 79–85.

*Psychological Science*, 5, 221–225.

*Journal of Vision*, 5, (9), 659–667, http://journalofvision.org/5/9/1/, doi:10.1167/5.9.1.

*Journal of the Optical Society of America*, 64, 1321–1327.

*Bulletin of the Psychonomic Society*, 9, 353–356.

*Journal of Vision*, 2, (1), i–i, http://journalofvision.org/2/1/i/, doi:10.1167/2.1.i.

*Biometrika*, 10, 507–521.

*Current Biology*, 10, 663–666.

*Perception and Psychophysics*, 14, 210–211.

*Journal of the Optical Society of America A, Optics and Image Science*, 4, 391–404.

*Proceedings of the Royal Society of London: Series B, Biological Sciences*, 258, 273–279.

*Journal of Vision*, 2, (1), 79–104, http://journalofvision.org/2/1/6/, doi:10.1167/2.1.6.

*Journal of Vision*, 4, (2), 82–91, http://journalofvision.org/4/2/2/, doi:10.1167/4.2.2.

*Nature*, 401, 695–698.

*Journal of Vision*, 2, (1), 1–11, http://journalofvision.org/2/1/1/, doi:10.1167/2.1.1.

*Journal of the Optical Society of America A, Optics and Image Science*, 2, 1508–1532.

*Acta Psychologica*, 102, 293–318.

*Journal of the Acoustical Society of America*, 95, 423–434.

*Nature*, 387, 281–284.

*Perception and Psychophysics*, 59, 51–59.

*Journal of the Optical Society of America A, Optics, Image Science, and Vision*, 22, 2257–2261.

*Cognitive Neuropsychology*, 15, 535–552.

*Investigative Ophthalmology & Visual Science*, 38, 2.

*Psicologica*, 24, 133–158.