Research Article  |   May 2006
Computing dynamic classification images from correlation maps
Hongjing Lu, Zili Liu
Journal of Vision May 2006, Vol.6, 12. doi:https://doi.org/10.1167/6.4.12
Abstract

We used Pearson's correlation to compute dynamic classification images of biological motion in a point-light display. Observers discriminated whether a human figure that was embedded in dynamic white Gaussian noise was walking forward or backward. Their responses were correlated with the Gaussian noise fields frame by frame, across trials. The resultant correlation map gave rise to a sequence of dynamic classification images that were clearer than either the standard method of A. J. Ahumada and J. Lovell (1971) or the optimal weighting method of R. F. Murray, P. J. Bennett, and A. B. Sekuler (2002). Further, the correlation coefficients of all the point lights were similar to each other when overlapping pixels between forward and backward walkers were excluded. This pattern is consistent with the hypothesis that the point-light walker is represented in a global manner, as opposed to a fixed subset of point lights being more important than others. We conjecture that the superior performance of the correlation map may reflect inherent nonlinearities in processing biological motion, which are incompatible with the assumptions underlying the previous methods.

Introduction
Standard method of computing classification images
A fundamental question in vision science is how the visual system represents the shape of an object. Although the internal representation of an object is not directly observable, it can be estimated by measuring the influence of input stimuli on observers' responses. The psychophysical technique of classification images, pioneered by Ahumada (2002), Ahumada and Lovell (1971), and Beard and Ahumada (1998), has been used to elucidate the internal representation (sometimes referred to as a "template") of an object when the input stimulus is generated by adding white Gaussian noise to the object's image. Ahumada's insight was that the internal template could be estimated from the observer's responses, which were influenced by the added noise. The result of this estimation was an image termed the classification image. The equation used to compute a classification image C was

C = (\bar{N}_{AA} + \bar{N}_{BA}) - (\bar{N}_{AB} + \bar{N}_{BB}),
(1)

where \bar{N}_{SR} denotes the average of the noise fields across the trials on which the stimulus was S (S ∈ {A, B}) and the observer responded R (R ∈ {A, B}) in a discrimination experiment with two targets, A and B. This method of calculating a classification image, termed here the "standard" method, has led to many successful applications in studying low- and middle-level visual perception (Abbey & Eckstein, 2002; Eckstein & Ahumada, 2002; Gold, Murray, Bennett, & Sekuler, 2000; Watson & Rosenholtz, 1997).
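To make Equation 1 concrete, the following is a minimal sketch (not the authors' code) of how the standard classification image could be computed from trial data, assuming the noise fields are stored in a NumPy array and the stimuli and responses are coded as 'A'/'B' labels:

```python
import numpy as np

def standard_classification_image(noise, stimulus, response):
    """Equation 1: C = (N_AA + N_BA) - (N_AB + N_BB).

    noise    : array (n_trials, height, width), the noise field shown on each trial
    stimulus : array of 'A'/'B' labels, the target presented on each trial
    response : array of 'A'/'B' labels, the observer's response on each trial
    """
    def mean_noise(s, r):
        # Average noise field over trials with stimulus s and response r
        mask = (stimulus == s) & (response == r)
        return noise[mask].mean(axis=0)

    return (mean_noise('A', 'A') + mean_noise('B', 'A')) \
         - (mean_noise('A', 'B') + mean_noise('B', 'B'))
```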
As Ahumada (2002, p. 123) carefully noted, the standard method in Equation 1 was developed as a heuristic to integrate the four average noise fields. Others have refined the method to increase the sensitivity and robustness of classification images (Abbey & Eckstein, 2002; Neri, 2004; Neri & Heeger, 2002; Neri, Parker, & Blakemore, 1999; Nykamp & Ringach, 2002; Thomas & Knoblauch, 2005). For example, Murray, Bennett, and Sekuler (2002) derived an optimal solution for calculating classification images under the assumption that internal noise is additive and follows a Gaussian distribution. We will refer to this method as the optimal weighting method. The goal of this method was to select the optimal weights for each average noise field so as to maximize the signal-to-noise ratio (SNR) of the classification image: SNR(C) = \|E(C)\|^2 / VAR(C), where \|x\|^2 = \sum_i x_i^2, E(·) is the expected value, and VAR(·) is the variance. When the observer is unbiased, such that p_{AA} = p_{BB}, the classification image calculated by the optimal weighting method is the same as that from Ahumada's method in Equation 1; accordingly, we will refer to both methods as standard, except when it is necessary to distinguish them in cases of response bias. It follows that Ahumada's method is optimal when the observer is unbiased. The optimal weighting method of Murray et al. extended the applicability of the classification image technique to biased observers, multiple signal contrasts, and confidence ratings. Nonetheless, as Murray et al. noted, their method relies on a noisy linear cross-correlator that assumes additive Gaussian internal noise and linearity. Therefore, the weighting method that is optimal when these assumptions are satisfied may not be optimal when they are violated.
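The SNR criterion can be estimated empirically by repeating the experiment (or a simulation of it) many times, as is done in Appendix B. A minimal sketch, assuming a stack of classification-image estimates from repeated runs and reading VAR(C) as the summed pixelwise variance (one plausible reading; Murray et al., 2002, give the formal definition):

```python
import numpy as np

def classification_image_snr(images):
    """SNR(C) = ||E(C)||^2 / VAR(C), estimated from repeated estimates of C.

    images : array (n_repetitions, height, width) of classification-image estimates
    """
    mean_c = images.mean(axis=0)              # pixelwise estimate of E(C)
    var_c = images.var(axis=0, ddof=1)        # pixelwise sample variance of C
    return (mean_c ** 2).sum() / var_c.sum()  # ||E(C)||^2 over the summed variance
```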
Discriminating biological motion and the method of correlation maps
The fundamental assumptions of the standard method—additive Gaussian internal noise and linearity—appear to hold reasonably well, empirically, in low-level visual tasks such as traditional discrimination of static stimuli (Cohn, Thibos, & Kleinstein, 1974; Legge, Kersten, & Burgess, 1987; Pelli, 1985). It is unknown, however, how well the method applies to dynamic stimuli with a high-level visual task such as discriminating point-light human walkers (Cutting & Kozlowski, 1977; Johansson, 1973). This study will address this question empirically. The study will also investigate how well a classification “movie” can be obtained using a basic correlation method, which will be referred to as a correlation map. This method, employed in perceptual psychophysics (Richards & Zhu, 1994), is closely related to the technique of reverse correlation in receptive field estimation in physiology (Chauvin, Worsley, Schyns, Arguin, & Gosselin, 2005; Jones & Palmer, 1987; Ringach, Hawken, & Shapley, 1997). 
The usefulness of computing correlations to derive classification images has been widely recognized. For example, Eckstein and Ahumada (2002) emphasized, “The central concept of the technique is the correlation of observer decisions with noisy stimulus features over sets of stimuli” (p. 1). In fact, Beard and Ahumada (1998) defined a classification image as a correlation map: “A perceptual classification image for a stimulus is the correlation over trials between the local noise contrast and the observer's responses to that stimulus.” 
Despite this definition, however, the standard method is not exactly equivalent to correlation, as shown in Appendix A. The difference can be characterized as different weightings of the four average noise fields \bar{N}_{SR} in Equation 1. As summarized in Appendix A, the standard method weights the four fields equally, and the optimal weighting method weights them according to the (possibly biased) response proportions p_{SR}, the proportion of responses R when signal S is presented. The weights in the correlation method follow a normalized quadratic function of p_{SR}. Although the sample correlation (Pearson's correlation) is a biased estimator of the population correlation (Fisher, 1915; Zimmerman, Zumbo, & Williams, 2003), the bias is negligible when the sample size is large and the correlation is weak, which is typically the case in classification image studies. Therefore, the sample correlation is practically an unbiased and consistent estimator of the population correlation. Nevertheless, the theoretical significance of this property remains an open question in classification image studies.
Empirically, as will be shown in the next section, we found that correlation maps gave rise to statistically significant classification movies that depict the influence of noise pixels at point-light locations on observers' responses. In comparison, the standard methods failed to produce any discernible classification movies. In Appendix B, we demonstrate with a toy problem that the correlation method can indeed outperform the standard methods when the system is nonlinear.
Experiment: Discriminating biological motion
Method
Stimuli were presented on a 15-in. Dell monitor with a refresh rate of 75 Hz and resolution of 1024 × 768 pixels. At the viewing distance of 57 cm (maintained via a chin rest), each pixel subtended 1.62 arcmin. The monitor was calibrated with a Minolta CS-100 photometer. 
A biological motion sequence spanning one walking cycle was generated with the walk designer in Poser 4 software (MetaCreations Inc.). The motion sequence simulated a person walking on a treadmill. The stimulus contained a dark gray target, a point-light human figure walking either forward or backward (a "moon walker"), inside a light gray (46.50 cd/m^2) aperture (84 × 120 pixels, 2.27 × 3.24 deg of visual angle), centered on a black background (1.96 cd/m^2).
Each point light in the target was displayed as a square of 5 × 5 pixels (0.14 × 0.14 deg). The 11 point lights included the points of head, one shoulder (only one was visible from the side), elbows, hands, one hip, knees, and feet. The movie was composed of 20 frames, presented at a rate of 67 ms/frame, for one complete walking cycle ( Figure 1). The first and last frames were identical. The backward-walking movie was simply the reverse of the forward-walking movie. Accordingly, the first frames of the two movies were identical. Throughout the movie sequence, the hip point remained stationary; hence, its location completely overlapped in the two movies. For the other 10 point lights, the number of pixel locations that overlapped across the two targets varied from frame to frame. Keeping the hip point stationary for both walking targets and presenting the two movies in a small aperture were intended to help reduce positional uncertainty of the human figure. 
Figure 1
 
Top: Frames 3, 6, 9, 12, 15, and 18 in the forward-walking target. Bottom: the same frames with added dynamic noise.
Spatiotemporal Gaussian luminance noise, independently and identically distributed across pixels and frames, was generated and added to the stimulus. The root-mean-square contrast of the noise was held constant at 0.2 throughout the experiment. The target was defined as the point-light stimulus without noise. The contrast of the target walker against the gray background was defined as the signal contrast, which was constant across frames.
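As an illustration of this stimulus construction (a sketch under our own assumptions about units, not the authors' display code), dynamic noise at a fixed RMS contrast can be generated in contrast units and added to each frame:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_dynamic_noise(target_contrast_movie, rms_contrast=0.2):
    """Add i.i.d. Gaussian noise (in contrast units) to every pixel of every frame.

    target_contrast_movie : array (n_frames, height, width) of signal contrast values
    rms_contrast          : standard deviation of the noise contrast distribution
    """
    noise = rng.normal(0.0, rms_contrast, size=target_contrast_movie.shape)
    return target_contrast_movie + noise, noise
```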
Prior to the experiment, a psychometric function was measured at five signal contrast levels: .08, .17, .29, .53, and 1.00 in Weber contrast, with 80 trials per level. The contrast threshold for 72% correct was used as the signal contrast in the first experimental block. Subsequently, a one-up, one-down, block-by-block staircase was used to maintain accuracy within the 70–75% correct range. The step size of the staircase was 0.02. One block consisted of 1,000 trials.
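The block-by-block staircase rule described above can be summarized as follows; this is a hedged sketch of one plausible implementation, and the authors' exact rule may differ in detail:

```python
def next_block_contrast(contrast, block_accuracy, step=0.02, low=0.70, high=0.75):
    """One-up, one-down adjustment applied after each 1,000-trial block:
    raise the signal contrast if block accuracy fell below the target range,
    lower it if accuracy exceeded the range, otherwise keep it unchanged."""
    if block_accuracy < low:
        return contrast + step
    if block_accuracy > high:
        return contrast - step
    return contrast
```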
Each trial began with presentation of the static first frame in maximal signal contrast (1.00 in Weber contrast), which lasted for 500 ms. This static “anchoring” frame was used to reduce spatial uncertainty without introducing any cues for the subsequent movie because the first frames of the two targets were identical. A walker movie embedded in dynamic noise was then presented either forward or backward. Participants pressed one of two keys to respond. No feedback was provided. 
Author H.L. and two naive observers, J.H. and J.R., participated in the experiment. For H.L. and J.R., the forward-walking direction was to the right. For J.H., it was to the left. All three participants ran 10,000 trials over 5 days. 
Results and discussion
The average accuracies of H.L., J.H., and J.R. were 71%, 72%, and 75%, respectively. All participants were biased toward the forward-walking response: H.L., β = .45; J.H., β = .64; and J.R., β = .87 (β = 1 indicates no bias). Three methods were used to calculate dynamic classification images: the standard method defined in Equation 1 (Ahumada & Lovell, 1971), the optimal weighting method defined in Equation A3 (Murray et al., 2002), and the correlation method defined in Equation A4. Figure 2 presents the results for observer J.R., with six frames from the resultant classification images. Observers H.L. and J.H. yielded similar results. (All classification movies are provided as supplementary materials.)
Figure 2
 
Frames 3, 6, 9, 12, 15, and 18 of the dynamic classification movies from observer J.R. All images have been normalized to the full range of contrast [0, 255]. Classification images obtained from (A) the standard method, (B) the optimal weighting method, and (C) the correlation-map method. White pixels indicate positive correlations (backward walker); black pixels indicate negative correlations (forward walker).
Both standard methods failed in that no discernible classification images could be found. In comparison, the correlation map yielded clear classification images. Figure 3 depicts the significantly nonzero correlation pixels (p < .01) in the six frames shown in Figure 2C.
Figure 3
 
Frames 3, 6, 9, 12, 15, and 18 in the classification movie depicting pixels with significant correlations ( p < .01), from Figure 2C. White pixels indicate positive correlations (backward walker); black pixels negative correlations (forward walker).
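The significance of each pixel's correlation can be assessed with the usual t test for a Pearson correlation coefficient. The following sketch illustrates one standard version of that test; the article does not specify the exact procedure used, so this is our assumption:

```python
import numpy as np
from scipy import stats

def correlation_p_values(r, n):
    """Two-tailed p values for sample correlations r computed from n trials,
    using the t statistic t = r * sqrt((n - 2) / (1 - r^2))."""
    r = np.asarray(r, dtype=float)
    t = r * np.sqrt((n - 2) / (1.0 - r ** 2))
    return 2.0 * stats.t.sf(np.abs(t), df=n - 2)

# Example: flag pixels of one correlation frame as significant at p < .01
# significant = correlation_p_values(corr_frame, n=10000) < 0.01
```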
An important substantive question in the study of biological motion is whether all point lights are attended to in perceiving biological motion in a global manner (Bertenthal & Pinto, 1994; Pinto & Shiffrar, 1999; Shiffrar, Lichtey, & Heptulla Chatterjee, 1997) or whether some point lights are more important than others. For example, there is evidence that hands and feet may carry more motion information than the other joints (Mather & Murdoch, 1994) and, hence, may be more important in the discrimination. A third possibility is that the perception of biological motion depends on some combination of both global and local processing (Thornton, Pinto, & Shiffrar, 1998). 
Classification images can address this issue by examining whether the noise pixels at the various point lights have comparable influences on responses. We performed an analysis using all point-light pixel locations in each frame that differentiated between the two targets (i.e., excluding pixel locations that overlapped between the forward and backward walkers). The absolute correlation values of the overlapping pixels were, as expected, close to 0 (less than 10^-6).
Figure 4 shows the correlation coefficients of the 10 point lights, averaged over the 20 frames and over the nonoverlapping pixels within each point light. For all observers, the mean correlations for every point light were reliably positive for the backward walker and negative for the forward walker, with no apparent systematic variation in correlations across the individual point lights. To assess the results statistically, we conducted a repeated measures analysis of variance (ANOVA) on the mean correlations of the point lights, with two factors: walking direction (forward vs. backward) and the 10 different point lights. This ANOVA yielded a significant main effect of walking direction, F(1,2) = 19.53, p = .048. Neither the main effect of point lights, F(9,18) = 1.18, p = .37, nor the two-way interaction, F(9,18) = 1.75, p = .15, was significant. The lack of any reliable differences in the correlations across individual point lights suggests that all point lights had comparable influences on the discrimination process. These results are consistent with the hypothesis that discrimination of biological motion in our task is based on global processing, rather than on characteristics of local features. Here, global processing does not necessarily mean that all available sources of information, namely, all the nonoverlapping point-light pixels, were used optimally. We also note that the above analysis would be expected to detect a significant difference between point lights only if participants consistently attended to a fixed subset of point lights throughout the experiment. It cannot rule out the possibility that a participant attended to only a subset of point lights but randomly switched from one subset to another between frames or between trials.
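The per-point-light analysis amounts to averaging pixel correlations within each point light's nonoverlapping pixel locations and then across frames. A minimal sketch of that aggregation step, in which the masks and array layout are our own assumptions:

```python
import numpy as np

def point_light_means(corr_movie, masks):
    """Average correlation per point light.

    corr_movie : array (n_frames, height, width) of pixelwise correlations
    masks      : dict mapping point-light name -> boolean array
                 (n_frames, height, width), True at that point light's
                 nonoverlapping pixel locations in each frame
    """
    return {name: corr_movie[mask].mean() for name, mask in masks.items()}
```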
Figure 4a, 4b, 4c
 
Point-light correlation results for three observers. The mean correlation in a forward walker and a backward walker as a function of 10 point lights, including points of head (Hd), shoulder (Sd), left elbow (LE), left hand (LH), right elbow (RE), right hand (RH), left knee (LK), left foot (LF), right knee (RK), and right foot (RF). Point-light correlations were averaged over nonoverlapping pixels and the 20 frames. Error bars represent standard error of the mean ( SEM).
General discussion
This study extended the technique of classification images to the discrimination of point-light biological motion stimuli. For these stimuli, the correlation method provided clearer classification images than the standard methods. Because the correlation method outperformed the standard methods in calculating classification images, we conjecture that substantial internal noise that is not additive Gaussian, or substantial nonlinearities in the decision process, were present in the task (see Appendix B for a demonstration of the advantage of the correlation method). We acknowledge, however, that we do not yet have a way to characterize the presumed noise and nonlinearities in biological motion perception.
The technique of classification images potentially provides a useful tool for the study of biological motion. This study applied the correlation method to address a controversy concerning the relative importance of global versus local processing in discriminating biological motion. The results of this correlation analysis revealed that noise positioned at each distinct point light had comparable influence on discrimination responses. This result implies that global processing of visual information dominates in the recognition of biological motion. The present experiment thus provides an example of how classification images can be used to address important questions concerning biological motion. 
Future work is needed to develop and refine the correlation method for computing classification images. For example, we do not yet know how bias influences the resultant classification images derived by the method, or when and to what extent the method is nonoptimal. Importantly, there is a natural way to extend the correlation analysis to the more general framework of multiple regression for calculating classification images. Specifically, the correlation map in this article is a zero-order correlation, which characterizes the linear relationship between responses and noise fields. In future work, semipartial correlations can be computed to reconstruct subtler influences of noise on responses using stepwise multiple-regression analysis. By integrating classification image computations with the general framework of multiple regression, more advanced statistical approaches can be applied to increase the power of template reconstruction. For example, logistic regression can be applied to model the relationship between binary responses and a set of predictor variables that may be continuous, discrete, dichotomous, or a mix thereof. Future research needs to address how classification images can be optimized without restrictive assumptions about internal noise and processes. As methods for calculating classification images become more robust and better optimized, the range of potential applications to perception and cognition will correspondingly broaden.
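As one illustration of the proposed extension (not a method used in this article), binary responses can be regressed on the noise pixels with logistic regression, and the fitted coefficient map then plays a role analogous to a classification image. A sketch assuming scikit-learn and flattened noise frames:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def logistic_classification_image(noise, response, frame_shape):
    """Fit P(response = A) as a logistic function of the noise pixels.

    noise       : array (n_trials, n_pixels), one flattened noise frame per trial
    response    : array of 0/1 responses (1 = response A)
    frame_shape : (height, width) used to reshape the coefficient vector
    """
    model = LogisticRegression(penalty='l2', C=1.0, max_iter=1000)
    model.fit(noise, response)
    # The fitted coefficients form a map over pixels, analogous to a classification image
    return model.coef_.reshape(frame_shape)
```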
Supplementary Materials
Movie 1. Classification movie of H.L. White pixels indicate positive correlations (backward walker); black pixels indicate negative correlations (forward walker).
Movie 2. Classification movies of H.L. with only positive correlation pixels (left) and with only negative correlation pixels (right).
Movie 3. Classification movie of J.H. White pixels indicate positive correlations (backward walker); black pixels indicate negative correlations (forward walker).
Movie 4. Classification movies of J.H. with only positive correlation pixels (left) and with only negative correlation pixels (right).
Movie 5. Classification movie of J.R. White pixels indicate positive correlations (backward walker); black pixels indicate negative correlations (forward walker).
Movie 6. Classification movies of J.R. with only positive correlation pixels (left) and with only negative correlation pixels (right).
Appendix A:
Different weightings of the standard and correlation methods
In a discrimination experiment, a dynamic stimulus g consists of two components: a noise field N, in which each noise pixel is independently sampled from a Gaussian distribution with mean 0 and variance \sigma^2, and one of the two signals {A, B} representing the two targets. The stimulus g can be described as

g_j^t = A_j^t + N_j^t, if signal A is presented,
g_j^t = B_j^t + N_j^t, if signal B is presented,
(A1)

where g_j^t, A_j^t, and N_j^t, respectively, represent the stimulus, the signal, and the noise value of the jth pixel in the tth frame.
The resulting classification movie (the dynamic analog of a classification image) is calculated frame by frame independently. The classification image in the tth frame can be calculated with the standard method and the optimal weighting method, respectively,  
C^t = (\bar{N}_{AA}^t + \bar{N}_{BA}^t) - (\bar{N}_{AB}^t + \bar{N}_{BB}^t),
(A2)

C^t = \left( g(G^{-1}(p_{AA}))\,\bar{N}_{AA}^t + g(G^{-1}(p_{BA}))\,\bar{N}_{BA}^t \right) - \left( g(G^{-1}(p_{AB}))\,\bar{N}_{AB}^t + g(G^{-1}(p_{BB}))\,\bar{N}_{BB}^t \right),
(A3)

where g(\cdot) denotes the Gaussian probability density function and G^{-1}(\cdot) the inverse of the Gaussian cumulative distribution function.
 
The correlation method is based upon the sample correlations between noise at each individual pixel and an observer's response across trials. The response in one trial is denoted as R ∈ {−1,1}, where R = 1 if the response is A and R = −1 otherwise.  
C_{corr}^t = \frac{\sum_i N_i^t R_i - \sum_i N_i^t \sum_i R_i / n}{\sqrt{\left[\sum_i (N_i^t)^2 - \left(\sum_i N_i^t\right)^2 / n\right]\left[\sum_i R_i^2 - \left(\sum_i R_i\right)^2 / n\right]}} = \frac{\sum_i N_i^t (R_i - \bar{R})}{\sqrt{\left[\sum_i (N_i^t)^2 - n(\bar{N}^t)^2\right]\left[\sum_i R_i^2 - n\bar{R}^2\right]}},
(A4)
where n is the total number of experimental trials, with targets A and B each presented on n/2 trials, \bar{N}^t = \sum_i N_i^t / n is the average noise field of the tth frame across all trials, R_i is the response on the ith trial, and \bar{R} = \sum_i R_i / n is the mean response. If the observer is unbiased, the mean response \bar{R} is 0.
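In code, Equation A4 is simply Pearson's correlation computed independently at every pixel between the noise values across trials and the vector of ±1 responses. A compact vectorized sketch (our illustration, assuming NumPy arrays for one frame):

```python
import numpy as np

def correlation_map(noise, response):
    """Pearson correlation (Equation A4) between each noise pixel and the responses.

    noise    : array (n_trials, height, width) of noise values for one frame
    response : array (n_trials,) of responses coded +1 (A) or -1 (B)
    """
    n = noise.reshape(noise.shape[0], -1)       # trials x pixels
    r = response.astype(float)
    n_c = n - n.mean(axis=0)                    # center the noise at each pixel
    r_c = r - r.mean()                          # center the responses
    num = n_c.T @ r_c                           # covariance numerator, one value per pixel
    den = np.sqrt((n_c ** 2).sum(axis=0) * (r_c ** 2).sum())
    return (num / den).reshape(noise.shape[1:])
```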
It is possible to reformulate the correlation method as a function of the four average noise fields. This formulation further clarifies how the correlation method is related to the standard methods. Given that p_{AA} = 2n_{AA}/n, where n_{AA} is the number of trials on which target A was presented and the response was A (and similarly for the other n_{SR}), Equation A4 can be rewritten as

C_{corr}^t = \frac{(p_{AA}\bar{N}_{AA}^t + p_{BA}\bar{N}_{BA}^t)(1 - \bar{R}) - (p_{AB}\bar{N}_{AB}^t + p_{BB}\bar{N}_{BB}^t)(1 + \bar{R})}{\sqrt{\left[\sum_i (N_i^t)^2 - n(\bar{N}^t)^2\right]\left[\sum_i R_i^2 - n\bar{R}^2\right]}},
(A5)

where \sum_i (N_i^t)^2 - n(\bar{N}^t)^2 = n\hat{\sigma}_N^2, in which \hat{\sigma}_N^2 is the estimator of the variance of the noise field. The term \sum_i R_i^2 - n\bar{R}^2 can be ignored because it is a constant. Given p_{AA} + p_{AB} = p_{BA} + p_{BB} = 1 and p_{AA} + p_{BB} = 2p_c, in which p_c denotes the overall accuracy, the mean response \bar{R} can be calculated as follows:

\bar{R} = \frac{(n_{AA} + n_{BA}) - (n_{AB} + n_{BB})}{n} = 2p_{BA} + 2p_c - 2.
(A6)
Substituting Equation A6 into Equation A5, we have  
C_{corr}^t \propto \frac{-2p_{AA}^2 + (2p_c + 1)p_{AA}}{\hat{\sigma}_N}\,\bar{N}_{AA}^t + \frac{-2p_{BA}^2 + (3 - 2p_c)p_{BA}}{\hat{\sigma}_N}\,\bar{N}_{BA}^t - \frac{-2p_{AB}^2 + (3 - 2p_c)p_{AB}}{\hat{\sigma}_N}\,\bar{N}_{AB}^t - \frac{-2p_{BB}^2 + (2p_c + 1)p_{BB}}{\hat{\sigma}_N}\,\bar{N}_{BB}^t.
(A7)
By comparing Equations A2, A3, and A7, we see that the methods differ in their weights, w_{SR}, on the four average noise fields.
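The weights w_{SR} implied by Equations A2, A3, and A7 can be computed side by side for a given set of response proportions. A sketch follows; the Gaussian pdf/inverse-cdf reading of g and G^{-1} in Equation A3 follows Murray et al. (2002), and constant positive factors are omitted because each classification image is normalized for display:

```python
import numpy as np
from scipy.stats import norm

def method_weights(p_aa, p_ba):
    """Weights on (N_AA, N_BA, N_AB, N_BB) for the three methods,
    each up to an overall positive scale factor."""
    p_ab, p_bb = 1.0 - p_aa, 1.0 - p_ba
    p_c = (p_aa + p_bb) / 2.0                         # overall accuracy

    standard = np.array([1.0, 1.0, -1.0, -1.0])       # Equation A2

    z = norm.ppf([p_aa, p_ba, p_ab, p_bb])            # G^{-1}(p_SR)
    optimal = norm.pdf(z) * np.array([1, 1, -1, -1])  # Equation A3

    corr = np.array([                                 # Equation A7 numerators
        -2 * p_aa**2 + (2 * p_c + 1) * p_aa,
        -2 * p_ba**2 + (3 - 2 * p_c) * p_ba,
        -(-2 * p_ab**2 + (3 - 2 * p_c) * p_ab),
        -(-2 * p_bb**2 + (2 * p_c + 1) * p_bb),
    ])
    return standard, optimal, corr

# Example: an unbiased observer at 75% correct
# print(method_weights(0.75, 0.25))
```

For an unbiased observer (p_AA = p_BB), the optimal weights reduce to a multiple of the standard weights, consistent with the statement in the Introduction, whereas the correlation weights remain a quadratic function of p_{SR}.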
Another way to describe the difference between the correlation and standard methods is as follows (we thank the anonymous reviewer for pointing this out). It is based on the observation that the correlation method does not treat A trials and B trials differently, whereas the standard method does. In the denominator of the second line of Equation A4, the left term is proportional to the standard error of the noise fields, which varies little from pixel to pixel when there are a large number of trials and does not depend on observer responses. The right term in the denominator depends only on the response bias and does not vary from pixel to pixel; accordingly, it is a scale factor. The numerator can be rewritten as \sum_i N_i^t R_i - \bar{R}\sum_i N_i^t. The second term is proportional to the mean noise contrast, which varies little from pixel to pixel and does not depend on trial-by-trial responses. It follows that \sum_i N_i^t R_i is the only term that depends on trial-by-trial responses. Given that R = ±1, this term amounts to adding up all the noise fields on trials where the response was A and subtracting all the noise fields on trials where the response was B. In the standard methods, by contrast, the trials are averaged within the four stimulus–response categories.
Appendix B:
A toy example demonstrating the advantage of the correlation method
We now provide simulation results for a toy discrimination task to demonstrate the advantage of the correlation method in a nonlinear case. Two targets (T), each composed of six pixels, were defined as [0 1 0 0 0 0] and [1 0 0 0 0 0], respectively. A model observer discriminated the target from a noisy stimulus input, that is, a target image I contaminated by a Gaussian white noise field N with mean 0 and variance 0.16. A nonlinear transformation was imposed on the noisy input, and the result was then multiplied by an internal noise field Z that followed the same Gaussian distribution as the external noise field N. The model observer computed a decision variable s as s = \|\langle Z_j \exp(I_j + N_j)\rangle - T_1\|^2 / \|\langle Z_j \exp(I_j + N_j)\rangle - T_2\|^2, where \langle\cdot\rangle denotes element-by-element multiplication and \|\cdot\| denotes the Euclidean distance. The value of the decision variable was compared with a threshold of 0.9, which introduced a response bias in the model observer's performance.
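A minimal simulation of one trial of this model observer might look as follows. This is a sketch of our reading of the decision rule; in particular, the mapping of s below the threshold to a "target 1" response is our assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
T1 = np.array([0, 1, 0, 0, 0, 0], dtype=float)
T2 = np.array([1, 0, 0, 0, 0, 0], dtype=float)
SIGMA = np.sqrt(0.16)   # std of both the external noise N and the internal noise Z
THRESHOLD = 0.9         # biased criterion on the decision variable s

def simulate_trial(target):
    """One trial of the toy task: nonlinear transduction of the noisy input,
    multiplicative internal noise, and a ratio-of-squared-distances decision."""
    I = T1 if target == 1 else T2
    N = rng.normal(0.0, SIGMA, size=6)            # external white noise
    Z = rng.normal(0.0, SIGMA, size=6)            # internal multiplicative noise
    internal = Z * np.exp(I + N)                  # <Z_j exp(I_j + N_j)>
    s = np.sum((internal - T1) ** 2) / np.sum((internal - T2) ** 2)
    response = 1 if s < THRESHOLD else 2          # closer to T1 -> respond "target 1"
    return N, response

# Example: external_noise, resp = simulate_trial(target=1)
```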
Classification images were calculated with the three methods above in a simulated experiment of 10,000 trials. We simulated the model observer in 1,000 repetitions of the experiment. In each repetition, randomly sampled noise fields were generated to compute a classification image. The sample mean and the sample variance of the classification images over the 1,000 repetitions were used to compute the SNR and thereby compare the quality of the classification images. The average accuracy of the model observer was .70, and the average bias was β = .77 (β = 1 indicates no bias). The SNR of the classification images was 7,334 for the standard method, 6,311 for the optimal weighting method, and 8,493 for the correlation map. The finding that the greatest SNR was obtained with the correlation method demonstrates, by example, that the correlation method can outperform the standard methods, at least in certain situations, when nonlinearity is introduced.
Simulations with the same qualitative results were also obtained using an additive internal noise with variance that depended on the input contrast. Specifically, the decision variable was s = \|\langle Z_j + \exp(I_j + N_j)\rangle - T_1\|^2 / \|\langle Z_j + \exp(I_j + N_j)\rangle - T_2\|^2, where the noise field Z followed a Gaussian distribution with a mean of 0 and a standard deviation of \exp(I + N).
Acknowledgments
This research was supported by an International Fellowship from the American Association of University Women (H.L.). We are grateful to the anonymous reviewer for the insightful and constructive comments. We thank Craig Abbey, John Hummel, Dario Ringach, James Thomas, Bosco Tjan, and Alan Yuille for valuable discussions and Jennifer Reynolds for excellent assistance in data collection. We especially thank Keith Holyoak for comments on earlier drafts. 
Commercial relationships: none. 
Corresponding author: Hongjing Lu. 
Email: hongjing@ucla.edu. 
Address: Department of Psychology, UCLA, Los Angeles, CA, USA. 
References
Abbey, C. K., & Eckstein, M. P. (2002). Classification image analysis: Estimation and statistical inference for two-alternative forced-choice experiments. Journal of Vision, 2(1), 66–78, http://journalofvision.org/2/1/5/, doi:10.1167/2.1.5.
Ahumada, A. J., Jr. (2002). Classification image weights and internal noise level estimation. Journal of Vision, 2(1), 121–131, http://journalofvision.org/2/1/8/, doi:10.1167/2.1.8.
Ahumada, A. J., Jr., & Lovell, J. (1971). Stimulus features in signal detection. Journal of the Acoustical Society of America, 49, 1751–1756.
Beard, B. L., & Ahumada, A. J., Jr. (1998). A technique to extract relevant image features for visual tasks. Proceedings of SPIE, 3299, 79–85.
Bertenthal, B. I., & Pinto, J. (1994). Global processing of biological motions. Psychological Science, 5, 221–225.
Chauvin, A., Worsley, K. J., Schyns, P. G., Arguin, M., & Gosselin, F. (2005). Accurate statistical tests for smooth classification images. Journal of Vision, 5(9), 659–667, http://journalofvision.org/5/9/1/, doi:10.1167/5.9.1.
Cohn, T. E., Thibos, L. N., & Kleinstein, R. N. (1974). Detectability of a luminance increment. Journal of the Optical Society of America, 64, 1321–1327.
Cutting, J. E., & Kozlowski, L. T. (1977). Recognizing friends by their walk: Gait perception without familiarity cues. Bulletin of the Psychonomic Society, 9, 353–356.
Eckstein, M. P., & Ahumada, A. J., Jr. (2002). Classification images: A tool to analyze visual strategies. Journal of Vision, 2(1), i–i, http://journalofvision.org/2/1/i/, doi:10.1167/2.1.i.
Fisher, R. A. (1915). Frequency distribution of the values of the correlation coefficient in samples from an indefinitely large population. Biometrika, 10, 507–521.
Gold, J. M., Murray, R. F., Bennett, P. J., & Sekuler, A. B. (2000). Deriving behavioural receptive fields for visually completed contours. Current Biology, 10, 663–666.
Johansson, G. (1973). Visual perception of biological motion and a model for its analysis. Perception and Psychophysics, 14, 210–211.
Jones, J. P., & Palmer, L. A. (1987). The two-dimensional spatial structure of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58, 1187–1211.
Legge, G. E., Kersten, D., & Burgess, A. E. (1987). Contrast discrimination in noise. Journal of the Optical Society of America A, Optics and Image Science, 4, 391–404.
Mather, G., & Murdoch, L. (1994). Gender discrimination in biological motion displays based on dynamic cues. Proceedings of the Royal Society of London: Series B, Biological Sciences, 258, 273–279.
Murray, R. F., Bennett, P. J., & Sekuler, A. B. (2002). Optimal methods for calculating classification images: Weighted sums. Journal of Vision, 2(1), 79–104, http://journalofvision.org/2/1/6/, doi:10.1167/2.1.6.
Neri, P. (2004). Estimation of nonlinear psychophysical kernels. Journal of Vision, 4(2), 82–91, http://journalofvision.org/4/2/2/, doi:10.1167/4.2.2.
Neri, P., & Heeger, D. J. (2002). Spatiotemporal mechanisms for detecting and identifying image features in human vision. Nature Neuroscience, 5, 812–816.
Neri, P., Parker, A. J., & Blakemore, C. (1999). Probing the human stereoscopic system with reverse correlation. Nature, 401, 695–698.
Nykamp, D. Q., & Ringach, D. L. (2002). Full identification of a linear–nonlinear system via cross-correlation analysis. Journal of Vision, 2(1), 1–11, http://journalofvision.org/2/1/1/, doi:10.1167/2.1.1.
Pelli, D. G. (1985). Uncertainty explains many aspects of visual contrast detection and discrimination. Journal of the Optical Society of America A, Optics and Image Science, 2, 1508–1532.
Pinto, J., & Shiffrar, M. (1999). Subconfigurations of the human form in the perception of biological motion displays. Acta Psychologica, 102, 293–318.
Richards, V. M., & Zhu, S. (1994). Relative estimates of combination weights, decision criteria, and internal noise based on correlation coefficients. Journal of the Acoustical Society of America, 95, 423–434.
Ringach, D. L., Hawken, M. J., & Shapley, R. (1997). Dynamics of orientation tuning in macaque primary visual cortex. Nature, 387, 281–284.
Shiffrar, M., Lichtey, L., & Heptulla Chatterjee, S. (1997). The perception of biological motion across apertures. Perception and Psychophysics, 59, 51–59.
Thomas, J. P., & Knoblauch, K. (2005). Frequency and phase contributions to the detection of temporal luminance modulation. Journal of the Optical Society of America A, Optics, Image Science, and Vision, 22, 2257–2261.
Thornton, I. M., Pinto, J., & Shiffrar, M. (1998). The visual perception of human locomotion. Cognitive Neuropsychology, 15, 535–552.
Watson, A. B., & Rosenholtz, R. (1997). A Rorschach test for visual classification strategies. Investigative Ophthalmology & Visual Science, 38, 2.
Zimmerman, D. W., Zumbo, B. D., & Williams, R. H. (2003). Bias in estimation and hypothesis testing of correlation. Psicologica, 24, 133–158.