Research Article  |   June 2009
Critical features for the perception of emotion from gait
Claire L. Roether, Lars Omlor, Andrea Christensen, Martin A. Giese
Journal of Vision, June 2009, Vol. 9(6):15. https://doi.org/10.1167/9.6.15
Abstract

Human observers readily recognize emotions expressed in body movement. Their perceptual judgments are based on simple movement features, such as overall speed, but also on more intricate posture and dynamic cues. The systematic analysis of such features is complicated due to the difficulty of considering the large number of potentially relevant kinematic and dynamic parameters. To identify emotion-specific features we motion-captured the neutral and emotionally expressive (anger, happiness, sadness, fear) gaits of 25 individuals. Body posture was characterized by average flexion angles, and a low-dimensional parameterization of the spatio-temporal structure of joint trajectories was obtained by approximation with a nonlinear mixture model. Applying sparse regression, we extracted critical emotion-specific posture and movement features, which typically depended only on a small number of joints. The features we extracted from the motor behavior closely resembled features that were critical for the perception of emotion from gait, determined by a statistical analysis of classification and rating judgments of 21 observers presented with avatars animated with the recorded movements. The perceptual relevance of these features was further supported by another experiment showing that artificial walkers containing only the critical features induced high-level after-effects matching those induced by adaptation with natural emotional walkers.

Introduction
Emotion expression plays a central role in regulating human social interactions, and making reliable judgments about other people's feelings is a highly important skill in everyday life. It has been shown, for example, that smiles guide us in whom we choose to cooperate with (Schmidt & Cohn, 2001), and that emotion-recognition aptitude predicts success in negotiation situations (Elfenbein, Foo, White, Tan, & Aik, 2007). One of the most important emotional signaling channels is facial expression: humans are able to express at least six different emotional states (anger, happiness, sadness, fear, surprise and disgust) with remarkable cross-cultural stability (Ekman & Friesen, 1971; Ekman, Sorenson, & Friesen, 1969; Izard, 1977). 
But which are the relevant visual features supporting the recognition of the different emotions? By restricting stimulus images to parts of the face, it has been shown that the emotions differ in which facial regions are most important for recognition (Bassili, 1979): the eye region is very important for perceiving anger and fear, whereas the mouth is very informative for the expression of happiness (Gosselin & Schyns, 2001; Schyns, Petro, & Smith, 2007). Raising or lowering the eyebrows or the corners of the mouth are examples of important features of facial emotion expression (Ekman & Friesen, 1978; Ellison & Massaro, 1997). Such ideas were formulated most prominently in the facial action coding system, which describes the production of distinct emotional expressions based on local components originally derived from patterns of muscle contraction (Ekman & Friesen, 1978). More recently, unsupervised-learning techniques, such as principal component analysis (PCA) or independent component analysis (ICA), have been applied to determine components of facial expressions for dimension reduction and to identify features that are critical for the recognition of faces (Bartlett, Movellan, & Sejnowski, 2002; Hancock, Burton, & Bruce, 1996; Turk & Pentland, 1991; Valentin, Abdi, Edelman, & O'Toole, 1997) and of facial emotion expression (Calder, Burton, Miller, Young, & Akamatsu, 2001). Last but not least, it has also been shown that dynamic cues contribute to the recognition of facial expressions (Bassili, 1978; O'Toole, Roark, & Abdi, 2002). For example, recognition performance is influenced by the speed at which an expression unfolds (Kamachi et al., 2001). 
While most research on the expression of emotions has focused on the human face as a signaling channel, there has recently been growing interest in studying emotionally expressive body movement and body posture. Human observers readily recognize emotions expressed in body movement (Atkinson, Dittrich, Gemmell, & Young, 2004; Atkinson, Tunstall, & Dittrich, 2007; de Gelder, 2006; Dittrich, Troscianko, Lea, & Morgan, 1996; Pollick, Paterson, Bruderlin, & Sanford, 2001; Wallbott, 1998; Wallbott & Scherer, 1986). To identify relevant physical stimulus attributes that support these perceptual capabilities, researchers have attempted to correlate observers' classification performance or expressiveness ratings of videotaped expressive movements, excluding culture-dependent emblems (Ekman, 1969), with movement characteristics, which were obtained either directly from movement trajectories or from observers' ratings of predefined kinematic features. These previous studies provide some insight into the influence of different physical characteristics of body movements on the perception of emotions. First, studies employing static pictures show that emotion recognition is influenced by body posture. A systematic analysis of this effect was performed in experiments requiring observers to classify emotions from static images of puppets whose joint angles covered a range of possible values (Coulson, 2004). Examples of important posture features include head inclination, which is typical for sadness, and elbow flexion, which observers associate with the expression of anger. Second, the perception of emotions from body expressions is influenced by movement kinematics. Typically, velocity, acceleration and jerk have been considered as interesting parameters, and all three have been shown to affect emotional classifications of expressive movements, as well as to account for a substantial part of the variance in the classification of expressive arm movements (Pollick et al., 2001; Sawada, Suda, & Ishii, 2003). Since the speed with which a movement is executed has such a profound effect on the perception of emotional style, and since the posture and kinematics of movements are affected by velocity (Donker, Beek, Wagenaar, & Mulder, 2001; Kirtley, 2006), it seems crucial to evaluate emotional body expressions also against a baseline of neutral body movements with speeds comparable to those of emotionally expressive movements. 
An important difficulty in uncovering relationships between physical aspects of body movement and emotional style is the fact that the moving human body represents a complex, high-dimensional dynamic visual stimulus. While some studies have investigated pre-selected, heuristically chosen features, others have tried to make this high dimensionality more tractable by applying data-reduction methods. As for images of faces, PCA has been applied to learn lower-dimensional representations of human movements (Santello, Flanders, & Soechting, 2002; Troje, 2002; Yacoob & Black, 1999). Other studies have exploited motion morphing in order to define low-dimensional parameterizations of motion styles (e.g. Giese & Lappe, 2002; Vangeneugden, Pollick, & Vogels, 2008). Studies in motor control have applied ICA, factor analysis, and non-negative matrix factorization (NMF) for the analysis of body movements (Ivanenko, Cappellini, Dominici, Poppele, & Lacquaniti, 2005; Ivanenko, Poppele, & Lacquaniti, 2004; Santello & Soechting, 1997; Tresch, Cheung, & d'Avella, 2006). Our own group has developed a novel blind source separation algorithm that approximates trajectories by linear mixtures of source signals with variable delays (known as an ‘anechoic mixture’ in acoustics). The advantage of this model is that it typically results in more compact representations of movement trajectories, requiring fewer source terms or ‘parameters’ than PCA and ICA for a given level of accuracy (Omlor & Giese, 2007a, 2007b). It seems plausible that such highly compact models, by minimizing redundancies in the parameterization of body movements, are particularly suited for the identification of movement features carrying information about emotional style. 
An interesting question in this context is whether it is possible to identify spatially localized features that carry information about emotion. The extraction of informative spatially localized features has been demonstrated successfully for static pictures of faces (Gosselin & Schyns, 2001). In addition, it has been shown that applying PCA separately to different local face regions (e.g. centered on the eye and the mouth) improves classification performance, presumably because the individual extracted features are more informative (Padgett & Cottrell, 1995). Previous studies suggest an influence of heuristically chosen local features, such as head inclination or arm swing, on the perception of emotion from gait (Montepare, Goldstein, & Clausen, 1987). The problem with such approaches is that, in principle, a very large number of such local features can be defined. This raises the question of whether it is possible to extract limited sets of highly informative features in a more systematic way. 
The existence of such informative, spatially local features is consistent with the hypothesis that the processing of biological motion, and potentially also of body shape, is based on ‘holistic’ templates (Bertenthal & Pinto, 1994; Dittrich, 1993). ‘Holistic processing’ refers to the observation that the perception of biological-motion stimuli is strongly degraded if only parts of the stimulus are presented (Mather, Radford, & West, 1992; Pinto & Shiffrar, 1999). Similar findings are also well-established for face stimuli (Carey & Diamond, 1994; Tanaka & Farah, 1993). While the spatial integration might well be based on holistic mechanisms, the integrated local information could still be defined in terms of a limited number of highly informative local features. 
The goal of the current study was to identify postural and kinematic features that are important for the perception of emotion expressed in human gait. To accomplish this goal, we conducted three experiments:
  1. First, applying machine-learning methods, we extracted informative features from the joint-angle trajectories of emotional gaits, recorded by motion capture from participants who expressed different emotions during walking.
  2. In a second step, we analyzed how the features extracted from the motor behavior relate to features that determine the perception of emotional gaits. For this purpose, we conducted a perception experiment in which human observers classified and rated the emotional expressiveness of computer-generated characters animated with the recorded trajectories of emotional gaits. The perceptual judgments were then subjected to statistical analysis in order to identify the most important posture and dynamic features influencing these judgments.
  3. Since we found a high degree of overlap between the informative features extracted from the motor behavior and those that determine perceptual judgments, in a third experiment we exploited high-level after-effects to test whether the extracted feature set truly corresponded to critical features driving the perception of the individual emotions.
High-level after-effects were first described in the context of face perception. Adaptation with face stimuli of a particular identity can bias the perception of subsequent faces toward identities that correspond to points on the opposite side of the average face in face space (Leopold, O'Toole, Vetter, & Blanz, 2001; Webster, Kaping, Mizokami, & Duhamel, 2004), and control experiments show that these after-effects for faces are not simply a consequence of previously known low-level adaptation processes, e.g. for orientation or local contrast (Xu, Dayan, Lipkin, & Qian, 2008). Instead, at least partially, these after-effects seem to result from adaptive changes in higher-level face-selective representations. More recently, similar after-effects have also been reported for the perception of biological motion (Jordan, Fallah, & Stoner, 2006; Troje, Sadr, Geyer, & Nakayama, 2006): adaptation with a male walker, for example, biases the perception of a subsequent gender-neutral walker toward the opposite gender (female). In the third part of our study, we exploited such high-level after-effects as a tool for testing whether the extracted emotion-specific features really capture a significant amount of the perceptually relevant emotion-specific information. To this end, we used artificial emotional walkers containing only the postulated critical features as adapting stimuli and compared the size of the resulting adaptation effects with that induced by natural emotional walking patterns. Comparable sizes of the induced after-effects would suggest that the extracted feature set comprises the major part of the perceptually relevant emotion-specific information. 
Experiment 1: Movement analysis
Methods
Actors
Altogether 25 individuals were recorded, thirteen of whom were right-handed (five male, eight female, mean age 27 years 3 months) and twelve left-handed (six male, six female, mean age 25 years 8 months; ages ranging from 22 years 8 months to 30 years 4 months in both groups). None of them had suffered from injuries or other conditions resulting in motor impairments. The criterion for inclusion in the handedness groups was a laterality quotient above 0.5 for right-handers and below −0.5 for left-handers on the Edinburgh Handedness Inventory (Oldfield, 1971). All participants were Caucasian and students at the University of Tübingen. 
The individuals in the left-handed sample had no specific acting experience, while the right-handers all had between six months' and two years' experience performing in lay theatre groups; none had received any formal acting training. Although the group with lay-acting experience reported less inhibition during the recording of emotional movements than did the novices, we combined their data for analysis since there were no statistically significant differences in recognizability between the movements executed by the two groups. 
Motion capture of emotional gaits
The recording area was approximately five meters in length, allowing the recording of around six complete step cycles. We recorded walking in a straight line, with each condition repeated three times. First, emotionally neutral gaits were recorded to serve as a baseline; then four emotionally expressive gaits were recorded. The actors were instructed to avoid gestures that would interrupt their rhythmic walking pattern. For the recordings of neutral gait, the actors were first recorded walking at their customary walking speed. For a subset of actors, we then recorded neutral gait at four additional velocities, two higher and two lower than their usual walking speed, with actors instructed to walk slightly faster or slower than normal, or very fast or very slow. Eleven actors, all of them left-handed, participated in these recordings. The slow and fast gaits were matched on a trial-by-trial basis to the emotionally expressive trial with the most similar velocity. Such a match was possible for a substantial number of trials (anger: 24; fear: 18; happiness: 30; sadness: 20), with overall velocity differences below 15%. Emotionally expressive gait (anger, fear, happiness and sadness, in an order counterbalanced across actors) was recorded afterwards. 
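As an illustration of this trial-by-trial matching, the sketch below (Python; the original processing used MATLAB, and all variable names are hypothetical) selects, for each emotional trial, the neutral trial with the nearest gait velocity and keeps the pair only if the relative velocity difference stays below 15%.

```python
import numpy as np

def match_speed(emotional_speeds, neutral_speeds, tolerance=0.15):
    """For each emotional trial, find the neutral trial with the most
    similar gait velocity; keep the pair only if the relative velocity
    difference is below the tolerance (15%)."""
    matches = []
    for i, v_emo in enumerate(emotional_speeds):
        j = int(np.argmin(np.abs(neutral_speeds - v_emo)))
        rel_diff = abs(neutral_speeds[j] - v_emo) / v_emo
        if rel_diff < tolerance:
            matches.append((i, j, rel_diff))
    return matches

# Hypothetical gait velocities (m/s) for one actor
emotional = np.array([1.9, 0.7, 1.6])       # e.g. angry, sad, happy trials
neutral = np.array([0.8, 1.1, 1.5, 2.0])    # neutral trials at several speeds
print(match_speed(emotional, neutral))
```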
To ensure behavior closely resembling the spontaneous expression of emotion, the actors were instructed to recall a past situation associated with the relevant affect. In addition, during an induction phase the emotion was expressed by gestures, vocalization and facial expression. This mood-induction procedure has been shown to be highly effective in modulating mood states (Westermann, Spies, Stahl, & Hesse, 1996). The actors were instructed to continue with the induction procedure until they started to experience the relevant mood state, which they indicated by walking to a particular point in the recording area. The recording for that affect was then started without further intervention by the experimenter. For fear, we only considered expressions associated with movements slower than normal gait in our analysis; if an actor first spontaneously chose fast movements, we further instructed him or her to induce a mood that matched slow movements. 
Recordings and psychophysical experiments were performed with informed consent of participants. All experimental procedures had been approved by the responsible local ethics board of the University of Tübingen (Germany). 
Motion capture was performed using a Vicon 612 system (Vicon, Oxford, UK) with eight cameras. The system has a sampling frequency of 120 Hz and determines the three-dimensional positions of reflective markers (2.5 cm diameter) with spatial error below 1.5 mm. The markers were attached to skin or tight clothing with double-sided adhesive tape, according to the positions of Vicon's PlugInGait marker set. Commercial Vicon software was used to reconstruct and label the markers, and to interpolate short missing parts of the trajectories. 
Processing of trajectory data and computation of joint angles
All further data processing was done using MATLAB 7.0 (The MathWorks, Natick, USA). From each trial a single complete gait cycle was selected, defined as the interval between two successive heel strikes of the same foot. In cases where the trajectory contained more than one full cycle, the second complete cycle was usually chosen. If the actor did not execute a rhythmic walking movement, or if any accompanying gestures were evident, this cycle was replaced by a later cycle from the same trial. For the computation of joint angles, all trajectories were re-sampled with 100 time steps per gait cycle and smoothed by spline interpolation. The joint angles were then extracted by approximating the marker positions with a hierarchical kinematic body model (skeleton) with 17 joints (head, neck, spine, and right and left clavicle, shoulder, elbow, wrist, hip, knee and ankle). Coordinate systems were attached to each rigid segment of this skeleton. Since anatomical landmarks and marker positions were used to define these coordinate systems, it was in some cases necessary to correct small deviations from orthogonality between the basis vectors by singular-value decomposition. The rotations between adjacent coordinate systems along this skeleton were characterized by Euler angles, as indicated in Figure 1, with the pelvis coordinate system serving as the origin of the kinematic chain. All resulting angle trajectories were inspected for the ambiguities inherent in the computation of Euler angles, and jumps caused by such ambiguities were removed by unwrapping. Differences between the start and end points of the trajectories were corrected by spline interpolation between the first five and last five frames of each trajectory, and trajectories were additionally smoothed by fitting them with a third-order Fourier series. 
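The sequence of preprocessing operations for a single joint-angle trajectory (time normalization to 100 samples, unwrapping of Euler-angle jumps, and smoothing with a third-order Fourier series) can be sketched as follows. The original pipeline was implemented in MATLAB; this Python sketch with hypothetical names only illustrates the order of the steps.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def preprocess_angle(theta, n_samples=100, n_harmonics=3):
    """Time-normalize one joint-angle trajectory (one gait cycle), remove
    Euler-angle wrap-around jumps, and smooth it by fitting a truncated
    (third-order) Fourier series over the normalized cycle."""
    theta = np.unwrap(np.asarray(theta, dtype=float))   # remove 2*pi jumps
    t_old = np.linspace(0.0, 1.0, len(theta))
    t_new = np.linspace(0.0, 1.0, n_samples)
    theta_rs = CubicSpline(t_old, theta)(t_new)         # spline resampling

    # Design matrix of a truncated Fourier series: 1, cos(2*pi*k*t), sin(2*pi*k*t)
    columns = [np.ones(n_samples)]
    for k in range(1, n_harmonics + 1):
        columns.append(np.cos(2 * np.pi * k * t_new))
        columns.append(np.sin(2 * np.pi * k * t_new))
    basis = np.column_stack(columns)
    coeffs, *_ = np.linalg.lstsq(basis, theta_rs, rcond=None)
    return basis @ coeffs                                # smoothed trajectory
```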
Figure 1. Joint angles and stimulus design. The joint angles describe rotation around the three axes defining flexion (red, marked flex), abduction (blue, marked abd) and rotation angles (green, marked rot).
Computation of simple parameters characterizing body posture and kinematics
It has been shown that emotionally expressive gaits vary in postural and kinematic characteristics (Atkinson et al., 2007; Wallbott, 1998) as well as in gait velocity (Montepare et al., 1987). To determine which features are most important for expressing the different affects, we extracted a multitude of physical movement features. Gait velocity was computed as the ratio of traveled distance and duration. As a measure of body posture extracted from the joint-angle trajectories, we considered mean flexion, averaged over the entire step cycle. As shown in Figure 1, the flexion angles corresponded to rotations about the main joint axes; for straight walking the resulting rotation axes were approximately orthogonal to the walking direction. We considered the angles of eleven major joints (head, spine, pelvis, and left and right shoulder, elbow, hip and knee joints). 
For the analysis of movement features, we considered the flexion-angle trajectories independently of differences in posture and velocity. For this purpose, the mean flexion angles were subtracted. The resulting signals were further analyzed by approximating them with a blind source separation model, as described in the following section. 
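These simple parameters can be written down compactly. The following Python sketch (hypothetical array layouts, not the authors' MATLAB code) computes gait velocity as traveled distance over duration and the mean flexion angles, and returns the mean-subtracted flexion trajectories that are passed on to the source-separation step.

```python
import numpy as np

def gait_velocity(pelvis_xyz, dt):
    """Average gait velocity: traveled distance divided by duration.
    pelvis_xyz: (T, 3) pelvis positions over one gait cycle; dt: sample step (s)."""
    distance = np.linalg.norm(pelvis_xyz[-1] - pelvis_xyz[0])
    duration = dt * (len(pelvis_xyz) - 1)
    return distance / duration

def posture_and_movement_features(flexion):
    """flexion: (T, n_joints) flexion-angle trajectories over one cycle.
    Returns the mean flexion angles (posture features) and the
    mean-subtracted trajectories used for the blind source separation."""
    posture = flexion.mean(axis=0)
    return posture, flexion - posture
```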
Blind source separation
The joint-angle trajectories define complex spatio-temporal patterns, and a myriad of possible features could be analyzed in order to investigate how these trajectories change with emotion. A more systematic approach to dealing with this high dimensionality is to apply unsupervised learning techniques in order to obtain a parameterized generative model for the measured joint-angle trajectories. The variations of the parameters of this model can then be analyzed in order to characterize the emotion-specific spatio-temporal changes. A description in terms of maximally informative features is obtained if the generative model is highly compact and contains only a minimal set of estimated parameters. In the following we present a blind source separation method that results in such compact generative models. 
The joint trajectories were modeled by applying a blind source separation algorithm that approximates the trajectories by linear superpositions of independent components with joint-specific time delays (Omlor & Giese, 2007b). Several previous studies have applied dimension reduction techniques for the modeling of gait, including PCA (Ivanenko et al., 2004; Santello & Soechting, 1997; Troje, 2002), factor analysis (Davis & Vaughan, 1993; Merkle, Layne, Bloomberg, & Zhang, 1998; Olree & Vaughan, 1995), Fourier analysis (Unuma, Anjyo, & Takeuchi, 1995), and standard ICA (Ivanenko et al., 2005; Tresch et al., 2006). The new algorithm applied in our study is based on ICA, which approximates sets of time signals by the weighted linear superposition of source signals that are (approximately) statistically independent. This linear superposition is usually computed separately for each point in time, resulting in an instantaneous mixture model of the form 
$$x_i(t) = \sum_{j=1}^{n} \alpha_{ij}\, s_j(t), \tag{1}$$
where the joint-angle trajectories x_i(t) are approximated by linear superpositions of the source signals or basis components s_j, weighted by the mixing weights α_{ij}. The same mixing model also underlies PCA, where the source signals are orthogonal rather than statistically independent (Cichocki & Amari, 2002; Jolliffe, 2002). 
Both ICA and PCA accomplish dimension reduction by approximating the data with superpositions of a small number of basis components or source functions. Often a limited number of components is sufficient to explain a large fraction of the variance in the data. For gait data, however, the instantaneous mixture model ( Equation 1) fails to model phase differences between different limbs in an explicit manner. Such phase relationships characterize many stable motor coordination patterns between different limbs; in gait, for example, there is an anti-phase relationship between homologous joints on opposite sides, e.g. right and left leg (Golubitsky, Stewart, Buono, & Collins, 1999). An efficient model for this type of data would make use of these regularities in the data, allowing for related signals appearing with temporal delays. Explicit modeling of time delays has classically been applied in acoustics in order to model travelling times between sound sources and microphones at different positions in space. Mathematically, these dependencies can be taken into account by an anechoic mixing model of the form 
$$x_i(t) = \sum_{j=1}^{n} \alpha_{ij}\, s_j(t - \tau_{ij}), \tag{2}$$
where the constants τ_{ij} describe joint-specific time delays between source signals and joint angles. The introduction of such time delays has previously been shown to be beneficial for modeling electromyographically recorded patterns of muscle activation during coordinated leg movements (d'Avella & Bizzi, 2005). Previous work from our group shows that for different types of body movements, including gaits, the model described by Equation 2 results in more compact representations with fewer parameters than the model defined by Equation 1. Quantitative comparisons for gait data show, for example, that for the same level of accuracy PCA requires more than twice as many source terms as the proposed model (Omlor & Giese, 2007b), implying that the anechoic mixing model results in more compact representations than traditional approaches such as PCA or regular ICA. Our analysis was based on the hypothesis that more compact models, with fewer parameters, might also yield more interpretable emotion-specific features than models with many redundant parameters. 
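Equation 2 can be read as a generative (forward) model: each joint-angle trajectory is a weighted sum of a few source signals, each shifted by a joint-specific delay. The sketch below only illustrates this forward model and the resulting reconstruction error; it does not implement the blind estimation algorithm of Omlor and Giese (2007b), and the variable names and the circular-shift treatment of delays are assumptions made for illustration.

```python
import numpy as np

def reconstruct_anechoic(sources, weights, delays, dt):
    """Forward model of Equation 2: x_i(t) = sum_j alpha_ij * s_j(t - tau_ij).

    sources: (n_sources, T) source signals sampled with time step dt
    weights: (n_joints, n_sources) mixing weights alpha_ij
    delays:  (n_joints, n_sources) time delays tau_ij in seconds
    Delays are applied as circular shifts, which is reasonable for the
    periodic, time-normalized gait cycles considered here."""
    n_joints, n_sources = weights.shape
    X = np.zeros((n_joints, sources.shape[1]))
    for i in range(n_joints):
        for j in range(n_sources):
            shift = int(round(delays[i, j] / dt))
            X[i] += weights[i, j] * np.roll(sources[j], shift)
    return X

def unexplained_variance(X_true, X_model):
    """Fraction of trajectory variance not explained by the mixture model."""
    return np.sum((X_true - X_model) ** 2) / np.sum((X_true - X_true.mean()) ** 2)
```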
The original version of our blind source separation algorithm (Omlor & Giese, 2007a, 2007b) estimates one mixing weight α_{ij} and one time delay τ_{ij} per joint and source. This approach already resulted in the extraction of well-interpretable emotion-specific features from the movement trajectories. However, comparing across emotions, we noticed that the time delays estimated for a given joint were often largely independent of the emotion. This made it possible to further reduce the number of parameters in the model by constraining the delays for each individual joint to be equal across all emotions. Mathematically, this constraint can easily be embedded in the original blind source separation algorithm (see Appendix 1 for details). The constraint improved the robustness of the results and the interpretability of the parameters. With the additional constraint, the explained variance for a model with three sources was 92% (as opposed to 99% without it). The final model contained three source functions, 51 time delays τ_{ij} (one per joint and source, constrained to be equal across all emotions), and 51 mixing weights per emotion (one per joint and source, estimated separately for the different emotions). 
Sparse regression
Facial and body expressions can be characterized by a large number of potentially relevant features. Even the application of unsupervised learning techniques, such as the one discussed in the last section, results in models with relatively many parameters. To obtain clearer insights into which features are critical for the expression of emotion in gait, it is important to extract the most informative features, i.e. those that capture the most important emotion-specific variations in the data. The automatic extraction of such features is possible with machine-learning algorithms. In our study we applied sparse regression for two purposes: to extract emotion-specific postural and dynamic features from gait trajectories, and to identify features that are critical for the perception of emotions from gait, as discussed in Experiment 2. 
Linear regression normally models a dependent variable Y (e.g. a rating of emotional expressiveness) as a linear function of the predictors X (representing, for instance, different relevant postural or kinematic features), where the elements of the vector β are the estimated regression coefficients, and where ɛ is a noise vector:  
$$Y = X\beta + \varepsilon. \tag{3}$$
The regression coefficients can be estimated, for instance, by minimizing the least-squares error:  
$$\hat{\beta} = \arg\min_{\beta} \|Y - X\beta\|_2^2, \tag{4}$$
where the ℓ2-norm of a vector u is defined as
$$\|u\|_2 = \sqrt{\sum_{k=1}^{n} |u_k|^2}. \tag{5}$$
For problems with many predictor variables, typically a large number of these contribute to the solution, resulting in many small non-zero coefficients β_k and unstable estimates of the individual parameter values. Such regression models are usually difficult to interpret. Ideally, one would try to explain the data variance with a minimum number of free model parameters, corresponding to a solution in which many of the regression parameters β_k are zero. Such a sparse solution, which automatically selects the most important features, can be computed by forcing small terms to adopt the value zero, leaving only those predictors in the model that carry the highest proportion of the variance. It is well known that regression models can be sparsified by including an additional regularizing term (for example, an ℓ1-norm regularizer) in the cost function (Equation 4). The corresponding error function is given by  
$$\hat{\beta} = \arg\min_{\beta} \|Y - X\beta\|_2^2 + \lambda \|\beta\|_1, \tag{6}$$
where the parameter λ ≥ 0 controls the degree of sparseness. This method is known in statistics as the ‘Lasso Method’ (Meinshausen, Rocha, & Yu, 2007; Tibshirani, 1996); the ℓ1-norm is defined as
$$\|u\|_1 = \sum_{k=1}^{n} |u_k|. \tag{7}$$
The corresponding (convex) minimization problem has only a single solution, which can be determined by quadratic programming (Nocedal & Wright, 2006). The parameter λ specifies the degree to which small weights are penalized, determining the sparseness of the solution. For λ = 0, the algorithm coincides with normal least-squares regression. With increasing values of λ, less important contributions to the solution are progressively forced to zero, resulting in models with fewer and fewer active variables (Tibshirani, 1996). 
An optimal value of the parameter λ, which determines the sparseness level of the model and thus the number of active features, can be calculated by different statistical techniques. For our analysis, we applied generalized cross-validation (GCV) (Fu, 1998; Tibshirani, 1996), which minimizes a combined error measure that depends on the approximation error and on an estimate of the effective number of model parameters (see Appendix 2 for details). In the following, λ_opt denotes the resulting optimal value of the sparseness parameter. 
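The combination of the Lasso (Equation 6) with GCV-based selection of λ can be sketched with scikit-learn. Note two simplifications: scikit-learn's Lasso scales the squared-error term by 1/(2n), so its alpha parameter is proportional to, but not identical with, λ in Equation 6, and the effective number of parameters is approximated here by the number of non-zero coefficients. This is a sketch of the procedure, not the implementation used in the study.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_gcv(X, y, alphas):
    """Fit Lasso models over a grid of regularization strengths and select
    the one minimizing a generalized cross-validation score,
    GCV(alpha) = (RSS / n) / (1 - df/n)^2, with df approximated by the
    number of non-zero coefficients (Tibshirani, 1996)."""
    n = len(y)
    best = None
    for alpha in alphas:
        model = Lasso(alpha=alpha, max_iter=10000).fit(X, y)
        rss = np.sum((y - model.predict(X)) ** 2)
        df = np.count_nonzero(model.coef_)
        if df >= n:                      # degenerate case: skip
            continue
        gcv = (rss / n) / (1.0 - df / n) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, alpha, model)
    return best  # (gcv score, selected alpha, fitted sparse model)

# Hypothetical usage: rows = trials, columns = candidate features
# score, alpha_opt, sparse_model = lasso_gcv(X, y, np.logspace(-3, 0, 30))
```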
We applied sparse regression to automatically select the most informative trajectory features of emotional compared with neutral walking. For this purpose, we defined vectors a_j and a_0 that specify the movement and posture features for emotion j and for neutral gait, respectively, so that the emotion-specific feature changes are given by Y_j = a_j − a_0. With β_j signifying the vector of the corresponding regression coefficients (Equation 4), which characterize the importance of the individual features for the changes of emotion j compared to neutral walking, one can define the trivial regression problem Y_j = X_j β_j + ɛ_j; here, the non-square matrix X_j contains only ones as entries, so that the estimated β_j without regularization correspond to the joint-specific means of the entries of Y_j across trials. 
Concatenating the β_j, ɛ_j, X_j, and Y_j into matrices, i.e., B = [β_1, …, β_4]^T, X = [X_1, …, X_4]^T, Y = [Y_1, …, Y_4]^T, and E = [ɛ_1, …, ɛ_4]^T, one can approximate the emotion-specific changes of all features across emotions by the regression problem Y = XB + E. For this matrix regression problem an error function equivalent to the one in Equation 6 can be defined by replacing the vector norms with matrix norms: the ℓ2-norm is replaced by the (matrix) Frobenius norm, and the ℓ1-term by the sum of the absolute values of all matrix elements. After sparsification, the non-zero coefficients in the matrix B specify the important features (for individual joints and emotions) that are necessary to approximate the emotion-specific changes relative to neutral gait. Again the optimal sparsification parameter λ can be determined by GCV. 
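Because each design matrix X_j is a single column of ones and the ℓ1 penalty acts element-wise, the unregularized solution is simply the mean feature change across trials, and the penalty then shrinks uninformative entries exactly to zero. The sketch below illustrates this reading for one emotion; it is an illustration under these assumptions, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_feature_changes(Y_j, alpha):
    """Y_j: (n_trials, n_features) emotion-specific feature changes
    (emotional minus neutral) for one emotion.  With a design matrix
    consisting of a single column of ones, the least-squares solution per
    feature is the mean change across trials; the l1 penalty then sets the
    coefficients of uninformative features exactly to zero."""
    n_trials, n_features = Y_j.shape
    ones = np.ones((n_trials, 1))
    coefs = np.empty(n_features)
    for k in range(n_features):
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        model.fit(ones, Y_j[:, k])
        coefs[k] = model.coef_[0]
    return coefs  # non-zero entries mark the selected joint/emotion features
```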
Results
There were characteristic differences in body movement and body posture between the gaits expressing different emotions. Example gaits are shown in the accompanying movies: Movie 1 shows gaits expressing fear (on the left) and sadness (on the right), and examples of angry and happy gaits are shown in Movies 2 and 3. 
 
Movie 1. Fearful and sad gait. The avatar on the left side of the movie shows an example of fearful gait; the avatar on the right side shows an example of sad gait.

Movie 2. Angry gait and speed-matched neutral gait. Angry gait (avatar on the left side) and speed-matched neutral gait (avatar on the right side) of one individual, shown side by side to allow comparison.

Movie 3. Happy gait and speed-matched neutral gait. Happy gait (avatar on the left side) and speed-matched neutral gait (avatar on the right side) of one individual, shown side by side to allow comparison.
In the following, we present the results of the statistical analysis of the movement trajectories, with the goal of extracting critical posture and movement features that characterize gaits expressing different emotions. In addition, by comparing the emotional gaits with neutral gaits that were matched in walking speed, we describe postural and kinematic changes in emotional gaits that cannot be explained by changes in gait velocity. 
Posture features
The postural effects of emotional expression were analyzed by comparing the respective average posture angles for the emotional and the neutral gaits on an actor-by-actor basis. The results of this analysis are shown in Figure 2. 
Figure 2. Emotion-specific posture effects. (A) Regression weights from the sparse regression analysis for the posture changes for emotional relative to neutral gait for six different joints (averaging the data for corresponding bilateral joints), for head (He), spine (Sp), shoulder (Sh), elbow (El), hip (Hi) and knee (Kn) joints. Color code as in flanking color bar. Signs (+ and −) indicate critical posture features reported in previous psychophysics experiments on the perception of emotional body expressions. (B) Mean ± SEM posture change (in rad), for emotional relative to neutral gait for six different joints (averaging the values for bilateral joints). Emotions are indicated by different colors of the bars. For head and spine, negative values indicate increased inclination; upper-arm retraction is indicated by negative shoulder flexion. For elbow, hip and knee positive values indicate increased flexion. Asterisks mark significant posture changes (p < 0.05).
Figure 2A shows the weights β_k from the sparse regression analysis as a color-coded plot. Since the weights for the right and left body sides were usually very similar, we collapsed the results over both sides of the body for each individual joint. The sparsification parameter for this analysis was chosen according to GCV (see Methods), defining the set of significant features. The analysis reveals a clear pattern of emotion-specific posture changes (defined by the average joint angles) relative to neutral walking. The most prominent findings were the strongly reduced head angle (corresponding to increased head inclination) for sad walking, and increases of the elbow angles for fearful and angry walking. 
To further validate the obtained results, and in order to relate the features obtained by our new type of analysis to the previous literature on emotional body expressions, Figure 2A also contains a summary of the results from related previous studies (Coulson, 2004; de Meijer, 1989, 1991; Montepare, Koff, Zaitchik, & Albert, 1999; Schouwstra & Hoogstraten, 1995; Sogon & Masutani, 1989; Wallbott, 1998). The signs in the figure summarize the results of these studies: ‘+’ signs indicate cases in which increased emotional expressiveness was associated with increased flexion or greater perceived movement in the corresponding joints, while ‘−’ signs indicate reductions in perceived joint flexion associated with increased expressiveness. 
We found an interesting correspondence between the features automatically extracted by our algorithm and those derived from published psychophysical experiments, especially for our most prominent features. However, some features with small regression weights were not consistent with published findings: for example, in previous studies changes of spine and shoulder angles were not detected as significant features for happiness and fear expressions, and neither was a decrease in elbow angle described as a feature for expressing sadness. In total, 67% of the detected features coincided with those extracted from psychophysical data in these previous studies. Our sparse regression analysis missed 21% of the features described as significant in the previous psychophysical literature. 
One might ask whether the same results could have been obtained by a more classical analysis, without sparsification. Emotion-specific movement effects can also be statistically analyzed using a multivariate GLM to assess the effect of the factor Emotion (four levels: anger, fear, happiness and sadness) on the movement of the ten joints (head, spine, and left and right shoulder, elbow, hip and knee). With this type of analysis, we obtained significant differences between the emotions for all entered joints (F_{3,296} ranging from 13.7 to 64.2, all p < 0.001), and significant differences between neutral and emotional gait for several joints by applying a t-test (t_{74} > 2.95, uncorrected p < 0.006). The means and standard errors underlying this analysis are presented as a conventional bar diagram in Figure 2B. A post-hoc Scheffé test revealed significant similarities between the different emotions. The most prominent posture features in this analysis coincided with the ones extracted by sparse regression. In total, ten features were significantly different from neutral walking, of which 90% matched features derived from psychophysical data. The GLM analysis missed 29% of the features found in previous psychophysical studies. 
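The classical analysis can be approximated joint by joint with a one-way ANOVA over the four emotions and paired t-tests of emotional versus neutral gait. The scipy-based sketch below is a univariate simplification of the multivariate GLM and Scheffé post-hoc tests reported above, with hypothetical data structures.

```python
import numpy as np
from scipy import stats

def classical_posture_analysis(angles, neutral):
    """angles: dict mapping emotion label -> (n_trials,) mean flexion angles
    of one joint; neutral: (n_trials,) matched neutral values (pairing across
    trials is a simplifying assumption).  Returns the one-way ANOVA over
    emotions and per-emotion paired t-tests against neutral gait
    (p-values uncorrected)."""
    F, p_anova = stats.f_oneway(*angles.values())
    t_tests = {emo: stats.ttest_rel(vals, neutral) for emo, vals in angles.items()}
    return (F, p_anova), t_tests
```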
Summarizing, with two different types of analysis of the average joint angles we found a clear pattern of emotion-specific posture changes, which strongly overlaps with features that published psychophysical studies have shown to be critical for the perception of emotion from gait. 
Movement features
The effects of emotional expression on movement were characterized by the linear mixing weights extracted using the blind source separation algorithm described in Methods. Feature extraction was based on the differences in mixing weights between emotional and neutral walking, compared on an actor-by-actor basis. The trajectories were approximated by three source signals, one of which accounted for only a very small amount of variance (<5%). We thus restricted the feature analysis to the weight differences for the two sources that explained the greatest part of the trajectory variance (see Methods). The analyzed weight differences corresponded directly to amplitude changes in the movement of the corresponding joints (Omlor & Giese, 2007b), as corroborated by a substantial correlation (r = 0.86; p < 0.001) between weight differences and joint-angle amplitudes computed over the entire data set. 
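The reported relationship between weight differences and joint-angle amplitude changes amounts to a simple pooled correlation; a short sketch with hypothetical arrays:

```python
import numpy as np

def weight_amplitude_correlation(weight_diffs, amplitude_diffs):
    """weight_diffs, amplitude_diffs: (n_trials, n_joints) arrays of
    emotional-minus-neutral differences in first-source mixing weights and
    in joint-angle amplitudes (max minus min over the gait cycle).
    Returns the Pearson correlation pooled over trials and joints."""
    return np.corrcoef(weight_diffs.ravel(), amplitude_diffs.ravel())[0, 1]
```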
Figure 3A shows, as a color-coded plot, the sparsified mixing weights for the first source s_1, which explained the largest amount of variance in the trajectories. This plot immediately reveals the well-known result that happiness and anger were associated with increased joint amplitudes (indicated in red), while sadness and fear tended to be associated with a reduction in joint-angle amplitudes (indicated in blue) compared to normal walking, consistent with the intuition that more energetic emotions were characterized by ‘larger movements’, while ‘reduced movements’ were typical for sadness and fear. For expressions of fear we also observed reduced linear weights for knee movement, likely caused by a slinking gait adopted by the actors when expressing this emotion. 
Figure 3. Emotion-specific dynamic features. (A) Regression coefficients from sparse regression based on the weight differences between emotional and neutral walking for the first source that explains the maximum of the variance (see Methods). The signs indicate corresponding features derived from psychophysical experiments (see text) for left (L) or right (R) side of body. Joint abbreviations as in Figure 2A. (B) Mean ± SEM of differences in mixing weights between emotional and neutral gait extracted by the novel algorithm, for the first source function (s_1). Emotions are color-coded, and asterisks mark significant weight changes (p < 0.05). (C) Features extracted by PCA, plotted in the same way as in (A); significant features do not match results from psychophysics.
Motion features found to be important for the perception of emotion from gait in previous psychophysical experiments (de Meijer, 1989; Montepare et al., 1987; Wallbott, 1998), which investigated correlations between observed kinematic features and perceived emotional expressiveness, are indicated by the signs in Figure 3A. In almost all cases, positive correlations (‘+’ signs) and negative correlations (‘−’ signs) coincide with the sign of the emotion-specific weight change relative to neutral walking. The only exception to this rule was the reduction in hip- and knee-angle movement for fearful walking, which was not observed in the previous psychophysical studies. Interestingly, in a previous perception experiment with stimuli generated from movement trajectories taken from the same database, we had already found that leg-movement kinematics strongly influence the perception of fear expressed in gait (Roether, Omlor, & Giese, in press), implying a one-to-one correspondence between the automatically extracted features and the features determined in perception studies. 
As a further validation, we once more ran a multivariate GLM analysis with the factor Emotion (four levels: anger, fear, happiness and sadness) on the weights for the first two source functions, s_1 and s_2, with the paired shoulder, elbow, hip and knee joints as dependent variables. The means and standard errors for the weight differences of the first source compared to normal walking are shown as conventional bar plots in Figure 3B. We found emotion-specific effects in all joints and for both source functions (all F_{3,296} > 6.35, p < 0.001; for s_1 alone: all F_{3,296} > 16.98, p < 0.001), except for the left and right knee on s_1 (F_{3,296} < 2.5, p > 0.063). Homogeneous subsets determined post hoc using the Scheffé test revealed (obvious) commonalities according to emotion activation: especially for the arm joints, similar weight changes relative to neutral occurred during expressions of anger and happiness on the one hand, and of sadness and fear on the other (see also Figure 3A). Comparing the weight changes for the first source against neutral walking with conventional tests, we identified 14 significant features, all of which were also reported in previous perception studies. In addition, the GLM analysis detected significant changes in the knee movement of anger and sadness expressions, which have not been described in previous studies. 
We also extracted informative features for the second source function, s_2, which explains the second-largest amount of variance in the data. In this analysis step, the coefficients from the sparse regression were increased for the movement of the left shoulder and elbow joints during expressions of anger and happiness, and decreased for knee movement during expressions of fear and sadness (data not shown). Since this source oscillates at double the gait frequency, and since the corresponding weights can be considered a measure of the high-frequency components in the joint-angle trajectories, these results are again consistent with the intuition of larger, and potentially less smooth, movements during happy and angry walks, and with a reduction of amplitude in fearful and sad walking. 
Performance comparison between algorithms
One might object that the above results depended critically on the chosen unsupervised-learning method, or that we should have used simpler, classical algorithms such as PCA or Fourier analysis for trajectory representation. To counter such objections, we validated the performance of the novel algorithm by reanalyzing the data in exactly the same way as before, now using a different type of trajectory model but matching the number of extracted trajectory features. Specifically, we compared the novel algorithm with standard PCA and ICA, and with Fourier PCA, a technique that combines frame-by-frame PCA on the postures with modeling of the eigenvector weights by truncated Fourier series (Troje, 2002). The results obtained by applying sparse regression to the PCA components are shown in Figure 3C. The features extracted with this technique did not match well with the psychophysical results from the previous literature: a number of relevant features were not detected, while in other cases the signs of the weight changes did not match the signs of the perceived joint-amplitude changes (‘+’ and ‘−’ signs). This contrasts with the new algorithm, which reached a perfect match (100%) with previous results from perception studies (see above); PCA resulted in only 25% matching features and Fourier PCA in only 15%. 
The above comparison strongly supports the hypothesis that highly compact trajectory models that avoid redundant terms are advantageous for identifying informative features since they avoid distributing the variance over a large number of parameters. This finding reflects a fundamental principle of statistical learning theory: the stability of statistical inference (in our case about informative features) is increased when the capacity (complexity) of the underlying model is restricted or minimized (Vapnik, 1999). 
Influence of average gait velocity
The preceding analysis compared emotional gaits to neutral gaits at normal speed. Yet it is known that walking speed varies substantially with emotion (Montepare et al., 1987), and the gaits in our study covered a speed range from 0.5 to more than 2 m/s. It is thus possible that the observed postural and kinematic features were simply an indirect consequence of these changes in gait speed. Alternatively, there may be additional emotion-specific features, especially in terms of movement kinematics, that cannot be explained by variations in average speed alone. To address this question, we compared the emotional gaits with neutral gaits whose speeds were matched, for each actor and on a trial-by-trial basis, to the speed of the emotional gaits (with overall velocity differences ≤15%). Movies 2 and 3 show the emotional gaits (angry and happy) on the left side and the corresponding speed-matched neutral gaits on the right side. 
Figure 4 shows the weight differences between emotional and velocity-matched neutral gaits, for both sources, separately for the four emotions and for the individual joints. These results were similar to those for the comparison against neutral gaits that were not matched with respect to speed (Figure 3). Weights were increased for the activated affects anger and happiness, especially for the shoulder and elbow joints, and decreased for expressions of sadness and fear. To test for significant effects of emotional expression on movement dynamics, we conducted a one-way multivariate GLM (Bortz, 1993) with the factor Trial (two levels: emotional gait; speed-matched neutral gait) for each of the four emotions. All eight tested features (four joints times two sources, taking the average weight of the joints on the left and right sides of the body) served as dependent variables in this analysis. Overall, we found significant differences for all emotions (F_{1,60} > 6.5, p < 0.05), as marked in Figure 4, confirming the existence of emotion-specific dynamic features that are independent of changes in overall gait speed. 
Figure 4. Emotion-specific kinematic effects, relative to velocity-matched neutral gait. Mean ± SEM difference in linear weights for the first (four darker gray bars) and second (lighter gray bars) source function. Joint abbreviations as in Figure 2; asterisks mark significant differences between emotional and speed-matched neutral walking (*p < 0.05; **p < 0.01). (A) Anger, (B) Fear, (C) Happiness, (D) Sadness.
A similar analysis was conducted for the average posture angles. Again, we found overall significant differences between emotional gait and speed-matched neutral gaits for all emotions (F_{1,60} > 5.4, p < 0.05). Since posture is not generally strongly affected by gait speed, the results of this analysis were not very different from those shown in Figure 2B. As for the comparison with neutral walks not speed-matched to emotional gaits, significantly increased head inclination was observed during expression of sadness, and angry and fearful gaits were characterized by increased elbow flexion. For fear, upper-arm retraction and knee flexion were increased, consistent with widespread postural tension. 
Discussion
Summarizing our detailed analysis of posture and kinematic features derived from the joint angle trajectories, we found pronounced emotion-specific changes in body posture and movement kinematics. During the expression of activated affects such as anger and happiness, movements are faster than normal, and larger than even speed-matched neutral movements; the deactivated affects fear and sadness are associated with small, slow movements, smaller than even speed-matched neutral gait. In the set of affects we tested, limb flexion and head inclination are the most prominent body-posture factors that differentiate between pairs of affects sharing a similar speed. 
By comparison with published studies, we found that the features that are critical for the accurate approximation of the motor behavior closely match features that in previous studies have been shown to be important for perceiving emotional body expressions. Another important observation in our analysis was that for the automatic extraction of meaningful features it was critical to approximate the trajectories with a highly compact model that minimizes the number of redundant parameters. 
These findings imply that the visual perception of emotionally expressive body movements efficiently extracts the features that best characterize the emotion-specific movement changes. In contrast to many previous studies on the perception of emotions in gait, Experiment 1 provided us with an exact characterization of the emotion-specific physical changes. This made it possible to study in detail how individual postural and kinematic features influence the perception of emotional gait. 
Experiment 2: Critical features for emotion perception
The statistical analysis of the movement trajectories recorded in Experiment 1 revealed the existence of emotion-specific postural and kinematic features in gait. Comparison with published studies suggested that the features which were critical for an accurate reconstruction of the joint angles and their trajectories in motor behavior corresponded to features that observers report basing their emotion judgments on. However, since most of these studies were based on other types of movements than gait, and since the physical changes in the emotional expressions were usually not quantified objectively, we aimed at establishing the relationship between the movement features extracted in Experiment 1 and emotion perception. 
We thus investigated the role that the postural and kinematic features we had extracted play in emotion perception by conducting a two-part perception experiment consisting of classification and emotional-expressiveness ratings. For the classification part, we analyzed the patterns of confusions between the different emotions and the influence of gait velocity on classification. In addition, by applying discriminant analysis, we tried to determine movement and posture features that are diagnostic for the classification of the different emotions. In the rating experiment, participants judged the perceived emotional expressiveness of the trajectories. Applying sparse regression analysis, we identified the postural and kinematic features that were critical for determining emotional expressiveness. 
Methods
Stimuli: Avatar model and animation
A custom-built volumetric puppet model rendered in MATLAB was used as the avatar model for all stimuli in Experiments 2 and 3. The model (see Figure 1) was composed of three-dimensional geometric shape primitives that were defined by polygons. Ellipsoids represented the trunk, hip, upper arms, forearms, hands, thighs, shanks and feet. All limb joints and the head were modeled as spheres, and cylinders represented the puppet's trunk and neck. Additional shapes were added to model the clavicular regions and the hip to improve the overall appearance of the figure. A convex sheath covered the hip, the clavicle regions, and each hand and foot from the wrist or ankle down. The positions of all shapes were determined by the positions of the motion-capture markers or by additional markers such as computed joint-center positions. The puppet's anatomy was actor-specific, but scaled to a common height. 
The puppet model was animated by specifying the joint trajectories derived from one typical step cycle taken from each of the recorded trials; the animated step cycles corresponded to those subjected to quantitative analysis ( Experiment 1). Since the movement was repeated periodically for presentation in the perception experiments, all animations were checked for unusual head or trunk movements due to imperfect periodicity of the original gait trajectories, causing jumps during repeated presentation. Where such irregularities occurred, they were replaced by another gait cycle taken from the same trial. 
In order to eliminate translational movement of the stimulus, the center of the hip joint was first set to zero in each frame of the resulting movie. On its own, however, this manipulation removes all vertical translation of the hip, yielding unnatural-looking movements with the hip fixated in space and the feet losing ground contact. We therefore added a synthetic vertical hip translation, generated by determining the distance between the pelvis center and the lowest marker point, which corresponds to the figure's foot touching the ground. The vertical component of this difference trajectory was low-pass filtered and added to the position of the pelvis center, resulting in a natural-looking movement. 
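As an illustration, the translation-removal and vertical-correction step could be implemented along the following lines; this is a minimal sketch assuming marker data in a NumPy array with the vertical axis as the third coordinate, and the sampling rate and filter cutoff are placeholders that the text does not specify.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def restore_vertical_hip_motion(markers, pelvis_idx, fs=120.0, cutoff=6.0):
    """Remove overall translation, then re-introduce a smooth vertical hip excursion.

    markers: array of shape (n_frames, n_markers, 3) in world coordinates.
    pelvis_idx: index of the pelvis-center point; fs and cutoff are assumptions.
    """
    pelvis = markers[:, pelvis_idx, :]                       # (n_frames, 3)
    centered = (markers - pelvis[:, None, :]).astype(float)  # hip fixed at origin

    # Vertical distance between the pelvis center and the lowest marker
    # (approximately the foot touching the ground) in every frame.
    hip_height = pelvis[:, 2] - markers[:, :, 2].min(axis=1)

    # Low-pass filter this excursion and add it back to all points.
    b, a = butter(2, cutoff / (fs / 2.0), btype="low")
    centered[:, :, 2] += filtfilt(b, a, hip_height)[:, None]
    return centered
```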
Participants
Twenty-one participants (9 male, 12 female; mean age 23 years 4 months, ages ranging from 19 years 6 months to 27 years 10 months) were tested in Experiment 2. All were students at the University of Tübingen, had normal or corrected-to-normal vision, were tested individually, and were paid for their participation. 
Apparatus
Testing took place in a small, dimly-lit room. Stimuli were displayed and participants' responses were recorded using the Psychophysics Toolbox (Brainard, 1997) on a PowerBook G4 (60 Hz frame rate; 1280 × 854 pixel resolution), viewed from a distance of 50 cm. The stimuli subtended approximately 4 × 8.6 degrees of visual angle, and they were presented on a uniform gray background. On each trial one stimulus was presented, moving continuously until the participant responded by pressing a key on the computer's keyboard. Animations were presented facing to the observer's left, turned 20 degrees from the frontal view, since this view maximized the expressiveness of the gait patterns. 
Experimental paradigm
The experiment consisted of two blocks: a classification task followed by a rating task. In both blocks, a total of 388 animations were shown. This set included the animations generated from each of the three repetitions of the four emotions, executed by 25 actors (N = 300). The remaining 88 animations constituted the two neutral walks best matching the gait velocity of the emotional walks, for eleven left-handed actors. Inter-stimulus intervals randomly varied between 500 and 800 ms. 
For the classification task, the animations were presented in random order. Participants were instructed to classify them as expressing anger, happiness, sadness or fear by a key press. The set of four response keys was kept constant throughout, but the assignment of responses to keys was counterbalanced. 
For the emotional-expressiveness rating, the stimuli were presented in four emotion blocks, each containing all 75 animations of the respective emotion and the 22 best velocity-matched neutral walks, presented in random order. The order of emotions was counterbalanced across participants. The name of the target emotion was displayed on the screen at the beginning of each block. Participants were instructed to rate the intensity of emotional expression of each stimulus on a five-point scale (from ‘not expressing the emotion’ to ‘expressing the emotion very strongly’), responding by pressing the number keys 1 to 5. 
Each block was completed within less than 35 minutes. To avoid fatigue, participants took a short break after seeing the first 194 trials within a block, and a break between blocks, if desired. All procedures of the psychophysical experiments had been approved by the ethics board of the University of Tübingen. 
Results
Our analysis of the classification results comprises three parts. First, we investigated the probability of confusions between different emotional gaits, demonstrating the recognizability of the emotional expressions in the database and revealing typical confusions between different emotional gait patterns. Second, we tested how observers classified neutral gaits whose speed was matched to that of emotional gaits; since gait speed is known to influence emotion judgments (Wallbott & Scherer, 1986), this analysis provides insight into the amount of emotional information conveyed by gait speed alone. Third, we studied in detail how the different posture and kinematic features contributed to the classification of emotional gaits by conducting discriminant analyses based on the previously discussed posture and motion-specific features. 
Classification: Emotion confusion patterns
First, we assessed the probability of correct classification of the individual emotions for the presented animations. Overall, classification accuracy was high: stimuli were categorized as expressing the intended affect in 78% of cases (Table 1). The target affect was attributed least often to movements intended to express anger (70.3%), and most frequently to movements intended to express sadness (89.8%). This relationship between intended and perceived emotional expression is highly significant, as shown by a contingency-table analysis testing the null hypothesis that the two variables are independent (χ² > 1800, d.f. = 3, p < 0.001); a schematic illustration of this type of test is given below Table 1. It shows that the lay actors were able to produce emotional expressions that were easily recognized, at rates comparable to those found in many previous studies, some of which were based on the movements of professional actors (Atkinson et al., 2007; Grèzes, Pichon, & de Gelder, 2007; Wallbott, 1998). 
Table 1
 
Classification of emotional gaits (N = 20 subjects). Expression intended in the stimuli is shown in columns (75 trials per affect), and mean (± SD) percentages of subjects' responses in rows. Diagonal entries (in bold) mark the percentage of trials in which the movement was classified as expressing the intended emotion.
Response	Anger	Happiness	Fear	Sadness
Anger 70.3 ± 21.4 15.6 ± 11.3 3.2 ± 5.2 1.0 ± 1.4
Happiness 23.2 ± 19.2 75.1 ± 23.0 1.9 ± 4.1 1.2 ± 1.4
Fear 4.7 ± 8.4 6.6 ± 8.6 77.1 ± 14.1 8.0 ± 5.5
Sadness 1.8 ± 3.1 2.7 ± 1.5 17.9 ± 5.7 89.8 ± 5.7
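The contingency-table analyses reported in this section can be illustrated with a generic chi-square test of independence on a response-by-intended-emotion table; in the following minimal sketch, the rounded percentages of Table 1 stand in for the raw trial counts used in the actual analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: observers' responses; columns: intended emotion.
# Rounded percentages from Table 1, used as stand-in counts for illustration only.
counts = np.array([
    [70, 16,  3,  1],   # classified as anger
    [23, 75,  2,  1],   # classified as happiness
    [ 5,  7, 77,  8],   # classified as fear
    [ 2,  3, 18, 90],   # classified as sadness
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```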
In line with the kinematic differences between expressions of different emotions ( Figure 3A), confusions tended to occur between emotions sharing a similar level of movement speed: anger stimuli were second-most often classified as expressing happiness, and vice versa. Likewise, there was a tendency for confusing fear and sadness. Yet we also observed characteristic asymmetries in the confusions between these two pairs of emotions: anger stimuli were classified as happy more frequently than happiness stimuli were classified as expressing anger, and fear stimuli were more often classified as expressing sadness than sadness stimuli were classified as expressing fear. 
A more detailed analysis of classification performance revealed that there was no evidence of performance decrements over the course of the classification block: the differences in classification accuracy between stimuli shown in the first and second half of the block were non-significant ( t 63 = 1.46, two-tailed p = 0.15). We also found a highly significant influence of actor gender on the recognition of fear expressions ( χ 2 = 201.05, d. f. = 3, p < 0.001): expressed by female actors, fear was correctly recognized at just over 90%, whereas males' fear expressions were only recognized in 60.5% of trials. Conversely, males' expressions of sadness were recognized more often than females' were (93% vs. 87.3%), again highly significant ( χ 2 = 15.15, d. f. = 3, p = 0.005). However, there was no significant difference in the recognition rates of gaits executed by individuals with or without experience in lay-theatre groups (all t 74 < 1.1, p > 0.27). 
Based on these classification rates we selected the trials for further analysis, limiting the data set to those expressions for which the intended emotion was recognized correctly by at least 70% of observers. The following analyses of the differences between the emotions were thus obtained from this subset of well-recognized emotional gaits (44 anger, 58 fear, 54 happiness, and 70 sadness trials). 
Classification: Speed-matched neutral gaits
To address the influence of gait speed on emotion classification we presented neutral gaits that were speed-matched to the different emotional gaits. Is gait speed alone, carried by an otherwise neutral movement pattern, sufficient to transmit information about emotion? To address this question, participants assigned the speed-matched neutral gait patterns to one of the four tested emotions (anger, happiness, fear and sadness). 
For the emotional gaits, average velocity was strongly affected by the expressed emotion: anger, for instance, was associated with gait velocities more than twice as high as fear (mean ± SEM for anger: 1.82 ± 0.22 m/s; for fear: 0.83 ± 0.31 m/s). This effect was reflected in a highly significant main effect of the factor Emotion (levels: angry, happy, sad and fearful) on average speed in a repeated-measures ANOVA ( F 3,39 = 242.84, p < 0.001). As further statistical validation of the speed matching between neutral and emotional gaits on a trial-by-trial basis for each actor, we performed a two-way repeated-measures ANOVA with the factors Trial (velocity-matched neutral gait vs. emotional gait) and Emotion (angry, happy, sad and fearful), finding no significant influence of Trial ( F 1, 13 = 0.14, p = 0.71) and no significant interaction ( F 3, 39 = 1.42, p = 0.25). 
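The two-way repeated-measures ANOVA used here to validate the speed matching can be sketched as follows; the data frame is synthetic and its column names (actor, trial_type, emotion, speed) are placeholders, not the names used in the original analysis.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic long-format data: one mean gait speed per actor, trial type and emotion.
rng = np.random.default_rng(0)
base = {"angry": 1.8, "happy": 1.3, "sad": 0.7, "fearful": 0.8}
rows = [
    {"actor": a, "trial_type": tt, "emotion": e,
     "speed": base[e] + rng.normal(scale=0.15)}
    for a in range(1, 15)
    for tt in ("emotional", "neutral_matched")
    for e in base
]
df = pd.DataFrame(rows)

# Within-subject factors Trial (trial_type) and Emotion, as in the text.
aov = AnovaRM(df, depvar="speed", subject="actor",
              within=["trial_type", "emotion"]).fit()
print(aov)
```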
Consistent with previous studies indicating a strong influence of gait velocity on emotion judgements, we found a remarkable consistency in observers' emotional classification of the speed-matched neutral gaits. As shown in Table 2, observers classified up to 54.1% of the neutral gaits as expressing the emotion to which they were matched in speed (chance level 25%). Speed matches for fear were the only exception, observers classifying these gaits as expressing fear in only 28% of cases. This relationship was statistically confirmed by a contingency-table analysis, which found a highly significant relationship between emotion-specific speed and perceived emotion ( χ 2 > 38, d.f. = 3, p < 0.001). 
Table 2
 
Classification of velocity-matched neutral gaits (N = 20 subjects). Columns show the emotion for which the gait velocity was matched (22 trials per affect); mean (± SD) percentages of subjects' responses in rows.
Response	Anger	Happiness	Fear	Sadness
Anger 48.8 ± 13.8 20.7 ± 12.2 2.5 ± 6.3 3.7 ± 7.8
Happiness 39.3 ± 10.3 42.6 ± 15.8 8.3 ± 10.9 8.3 ± 13.4
Fear 7.0 ± 2.5 19.0 ± 5.4 28.1 ± 9.4 33.9 ± 10.7
Sadness 5.0 ± 3.6 17.8 ± 6.1 61.2 ± 11.4 54.1 ± 14.1
As for the emotionally expressive gaits, we found high confusion probabilities between emotions typically associated with high gait speeds (anger and happiness), and between those associated with low speeds (sadness and fear). 
Interestingly, in almost 60% of cases sadness was attributed to neutral gaits walked at the velocity of fearful walking. Since the gait velocities observed for fearful and sad walking largely overlap, this finding points to factors other than gait velocity (e.g. posture) as critical for the perception of fear from bodily expressions. 
In summary, the comparison between Tables 1 and 2 shows that although participants assigned speed-matched neutral gaits to the corresponding emotions with remarkable consistency, correct classification rates were substantially higher for the emotionally expressive gaits. Consistent with the results of analyzing emotion-specific features in the movement trajectories, this finding indicates that while speed has a strong influence on emotion classification, emotional gait patterns convey substantial additional information that is independent of movement speed. This interpretation was also confirmed by comparing the emotional-expressiveness ratings (see below): real emotional gaits were rated as significantly more expressive than speed-matched neutral gaits (confirmed by a repeated-measures ANOVA on the mean expressiveness ratings; F 1, 17 = 88.1, p < 0.001). 
What about the classification of neutral gait at normal walking speed? If observers can distinguish between neutral gait and emotionally expressive gait, this underscores the specificity of the emotional expressions. We therefore ran a control experiment in which we repeated the classification task exactly as above, but with neutral included as both a stimulus category (neutral gaits at normal speed) and a response category, testing five observers (three female and two male, mean age 26 years 3 months). The results of this experiment are shown in Table 3: as in the four-choice classification, observers gave highly consistent responses for all five stimulus types (neutral, happy, sad, angry and fearful). The modal response was always the emotion that the actor was attempting to express. For fear and sadness, classification performance was hardly affected by including the neutral condition; there were only very few confusions between neutral and these two affects. However, there was a tendency for angry and happy gaits to be confused with neutral, and vice versa, especially for happy gait, for which the second most frequent classification was in fact neutral. Neutral gait itself was classified as neutral in more than 70% of trials, demonstrating that emotionally expressive gait contains specific emotional cues that distinguish it from neutral gait. 
Table 3
 
Classification of emotional gait including neutral (N = 5 subjects). Columns show stimulus affect (75 trials per affect); mean (± SD) percentages of subjects' responses in rows.
Response	Anger	Happiness	Neutral	Fear	Sadness
Anger 76.0 ± 2.8 14.9 ± 4.2 8.5 ± 2.4 1.9 ± 1.8 0.5 ± 0.7
Happiness 15.5 ± 3.2 65.1 ± 6.5 12.3 ± 3.5 2.9 ± 3.8 1.9 ± 1.8
Neutral 5.3 ± 4.9 18.4 ± 6.2 71.5 ± 3.1 5.1 ± 3.5 3.5 ± 2.2
Fear 1.6 ± 1.7 1.1 ± 0.6 4.0 ± 1.9 80.0 ± 10.0 2.1 ± 0.7
Sadness 1.6 ± 1.5 0.5 ± 0.7 3.7 ± 2.9 10.1 ± 5.4 92.0 ± 3.1
Classification: Influence of movement and posture features
To investigate the relationship between the features identified by the analysis in Experiment 1 and the perceptual classification results in Experiment 2, we performed two discriminant analyses, separately for the posture and movement features. 
For body posture, the discriminant analysis determined one strong discriminant function (eigenvalue 1.37), which loaded highly on head inclination (0.56), and which separated sadness expressions from those of the other affects, most strongly happiness. The second discriminant function loaded moderately highly on limb flexion. It provided a coarse separation of expressions of anger and fear from expressions of happiness and sadness. However, since the eigenvalues for the second and third discriminant functions were rather small (0.51 and 0.18, respectively), we decided to refrain from further analyzing these discriminant functions. 
The discriminant analysis for the dynamic features was based on the weights of the source functions of the trajectory models defined by Equation 2. Since the third source function explained only approximately 5% of the variance in the data, we restricted our feature analysis to the first two sources. Again, we restricted the discriminant analysis to the left side of the body, which has been shown to be more emotionally expressive than the right (Roether, Omlor, & Giese, 2008) and included average gait velocity as an additional predictor. Our analysis revealed only one strong discriminant function: the first, which explained 90.0% of the variance (eigenvalue 4.11). It loaded highly on gait velocity and was strongly correlated with the weights of the first source for the shoulder and elbow joint. It separated the emotions according to gait velocity, the highest values associated with angry walking, followed by happy walking (mean ± SD gait velocity 1.82 ± 0.22 m/s for anger; 1.31 ± 0.36 m/s for happiness). Negative values were obtained for fearful and sad gaits (fear: 0.83 ± 0.31 m/s; sadness: 0.68 ± 0.21 m/s). The other two discriminant functions accounted only for a small amount of variance and were therefore not considered for further analysis. 
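A discriminant analysis of this kind can be sketched with scikit-learn's linear discriminant analysis, used here as a stand-in for the specific procedure reported above; the feature matrix and emotion labels below are random placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: trials x features (e.g. average flexion angles, or source weights plus
# gait velocity); y: emotion label per trial. Both are placeholders here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = rng.choice(["anger", "happiness", "fear", "sadness"], size=200)

lda = LinearDiscriminantAnalysis()
scores = lda.fit(X, y).transform(X)   # discriminant scores per trial

# Proportion of between-class variance explained by each discriminant function,
# and the loadings of the original features on the first function.
print(lda.explained_variance_ratio_)
print(lda.scalings_[:, 0])
```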
In summary, the discriminant analysis confirmed the strong influence of gait velocity, arm swing and head inclination on emotion classification observed in the analysis of the motor patterns in Experiment 1. Yet discriminant analysis of classification data is a rather insensitive tool for feature extraction; the following analysis of the rating data provides more information about the features that are critical for perception. 
Expressiveness ratings: Influence of posture features
In the following, we describe the relationship between posture features and observers' ratings of perceived intensity of emotional expression. In order to identify the posture features that were most strongly predictive of emotional expressiveness, we modeled the expressiveness ratings by a linear regression model according to Equation 3. In this model the dependent variable Y is given by the expressiveness ratings, and the predictors X are given by the posture features (average joint angles over one gait cycle). In order to determine the relative importance of the different features for predicting expressiveness, we estimated the regression coefficients β by sparse regression, minimizing the error function defined by Equation 6 for different values of the sparseness parameter λ, where the case λ = 0 corresponds to a standard linear regression without sparsification. For increasing values of the sparseness parameter the resulting model contains fewer and fewer active features, i.e. features for which the corresponding regression weight β k is different from zero. Such regression models have reduced complexity at the cost of less accurate approximation of the data, and only the most important features will still be active for large values of the sparseness parameter. Thus, sparse regression provides an elegant way of defining a rank ordering for the importance of the different features. 
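As an illustration of this procedure, a sparse regression with an L1 (lasso-type) penalty, which is one common way of implementing the sparseness term, can be computed for a grid of λ values; the feature matrix, ratings and λ grid below are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

# X: trials x posture features (average joint angles); y: expressiveness ratings.
# Both are synthetic placeholders for illustration.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=60)

for lam in [0.0, 0.01, 0.05, 0.1, 0.2, 0.5]:
    if lam == 0.0:
        beta = LinearRegression().fit(X, y).coef_   # standard linear regression
    else:
        beta = Lasso(alpha=lam).fit(X, y).coef_     # sparsified solution
    active = np.flatnonzero(np.abs(beta) > 1e-8)
    print(f"lambda = {lam:<4}  active features: {active.tolist()}")
```

Increasing λ deactivates more and more features, yielding the ranking described above; the optimal λ in our analysis was estimated by generalized cross-validation, for which cross-validation via sklearn.linear_model.LassoCV would be a comparable, though not identical, stand-in.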
Bilateral features were very similar on the left and the right body side, as indicated by high correlation coefficients (smallest r > 0.52, p < 0.001). During stimulus presentation the avatar's (anatomically) left side was always shown facing the observer, making the left side of the body more visible than the right. In addition, we have previously demonstrated an emotional-expressiveness advantage for the movement of the left side of the body (Roether et al., 2008). For these reasons, for bilateral features, we constrained the feature analysis to the left joints. 
Figure 5 shows the regression weights of the different posture features for different levels of sparsification (determined by the sparseness parameter λ). Red and blue indicate positive and negative values of the coefficients β k respectively. As expected, without sparsification (sparseness parameter λ = 0) the models typically contain all features with often small non-zero weights, which makes an interpretation of the importance of such features rather difficult. Increasing the sparseness parameter λ resulted in models with fewer and fewer active features (non-zero regression coefficients), providing a ranking of models with different numbers of features. 
Figure 5
 
Relationship between posture features and perceived intensity of emotional expression. Weights β k of the posture features derived by sparse regression, where emotional-expression intensity was predicted by a sparsified linear combination of the posture angles. Weights are color-coded and plotted as a function of the sparseness parameter λ. Increasing values of this parameter along the vertical axis indicate increasingly sparse models, which are based on fewer and fewer features. Black horizontal lines mark optimal value of the sparseness parameter λ opt estimated by GCV (see 2). Mean joint flexion served as measure of posture; joint abbreviations as in Figure 2A. (A) Anger, (B) Fear, (C) Happiness, (D) Sadness.
With respect to body posture, we found good agreement between the prominent features directly extracted from the motor behavior ( Experiment 1) and those features (with non-zero weights even for strong sparsification) most strongly related to ratings of emotional expressiveness: for sad walking the most important feature that determined the expressiveness rating ( Figure 5D) was increased head inclination (indicated by a negative regression weight in the chosen parameterization). The most important predictor for the expressiveness ratings for angry and fearful walking ( Figures 5A and 5B) was the elbow-flexion angle, corresponding to the important role of this feature in the analysis described in Experiment 1. For happy walking the most important features predicting the perceived emotional expressiveness were an increased elbow angle and a decrease of the shoulder angle ( Figure 5C). The former feature corresponds to one of the less prominent features in the analysis of the motor patterns, while no change of the shoulder angle was observed in this analysis. 
In summary, the posture features that most prominently influenced the perceptual expressiveness ratings matched those that were directly extracted from the trajectory in Experiment 1. For the less prominent features we found some mismatches with the features extracted directly from the motor patterns. 
Expressiveness ratings: Influence of dynamic features
Since movement speed proved to be an important cue for emotion classification ( Tables 1 and 2), we also correlated gait velocity with the ratings of emotional expressiveness. Expressions of both anger ( r 41 = 0.76, p < 0.001) and happiness ( r 41 = 0.36, p = 0.002) were rated as more intense the higher the gait velocity. For expressions of fear and sadness, gait velocity was inversely related to expressiveness ratings, significantly so only for sadness ( r 67 = −0.59, p < 0.001). The non-significant correlation between gait velocity and expressiveness for fear ( r 52 = −0.19, p = 0.19) is consistent with the dominance of postural over kinematic cues in fear perception. This result parallels the strong influence of speed on emotion classification that we also found in the discriminant analysis. 
The analysis of dynamic features that were strongly related to perceived expressiveness followed exactly the same procedures as the analysis of important posture cues discussed in the last section. Sparse regression was applied to predict the emotional expressiveness ratings from the linear weights of the first and second source of the model as defined by Equation 2. Again the analysis was restricted to the left side of the body for bilateral joints. 
Consistent with the features derived from the trajectory analysis ( Figure 3A), high expressiveness of anger and happiness expressions was mostly associated with increases of the linear weights, and high expressiveness of sad and fearful gaits with weight decreases ( Figure 6). For anger, the features most strongly related to the expressiveness ratings were the weights of the shoulder and hip joints on the first source, matching the prominent features extracted from the motor patterns in Experiment 1. Additional important features for expressiveness were the weights of the elbow and knee joints on the second source. The expressiveness of happy gaits was most strongly influenced by the elbow joint on both source functions, but also by the shoulder joint on the first source and by both leg joints on the second, again matching the results from the analysis of the motor patterns. Deviating from these results, the weight of the knee joint on the first source was negatively related to expressiveness. Consistent with the feature analysis in Experiment 1, the expressiveness of angry and happy walks was positively influenced by the weights of the elbow and shoulder (first and second source); also consistent is that a main difference between angry and happy walking was the relevance of the hip movement (first source). In addition, the expressiveness ratings showed a negative influence of the weight of the knee angle (first source) that had no equivalent in Experiment 1. As in the analysis of the motor patterns, the expressiveness of both fearful and sad gaits was strongly negatively related to the shoulder-movement amplitudes, accompanied by an influence of leg-movement amplitude, particularly for fearful gaits. For sadness and fear, the weight of the knee movement on the second source function was the dominant factor influencing expressiveness, indicating that the expressiveness of sad or fearful gaits is increased by a reduction of higher-frequency components of the knee movement (the second source function oscillates at double the gait frequency). 
Figure 6
 
Relationship between dynamic features and perceived intensity of emotional expression. Weights β i of the dynamic features derived by sparse regression, where emotional-expression intensity was predicted from the dynamic features (weights for the first and second source function). Four emotions (anger, fear, happiness and sadness) shown in rows, as marked. Joint abbreviations as in Figure 2. (A) Weights for first source function, (B) weight for second source function. Conventions as in Figure 5.
Discussion
The results of Experiment 2 confirm our initial hypothesis that features that are critical for perceiving emotions from gait closely mirror the features that we extracted in Experiment 1 as being critical for accurately approximating the movements' trajectories. Thus, we found that individual emotion-specific features were strongly related to observers' affect judgements, both for classifying the movements and for rating the intensity of their emotional expression. 
One particularly good example is head inclination, which turned out to be the single most important feature both in the discriminant analysis of the classification of emotional expressions and in the sparse regression relating expressiveness judgements to body-posture features. Similarly, elbow flexion dominated the perception of the emotional intensity of anger and fear expressions, playing an even larger role there than in emotion classification. In terms of kinematics, increases and decreases in the size and speed of movements were among the most important features driving the perception of emotional gaits, being strongly related to both emotion classification and emotional-intensity judgments. Yet the role of body posture must not be underestimated, since kinematic information alone would not have been sufficient for differentiating between, for example, the expressions of fear and sadness in our database. 
The strong influence of arm movement and posture is in line with findings that human observers can identify emotional expressions from isolated arm movements, either abstract movements (Sawada et al., 2003) or point-light renderings of drinking or knocking movements (Pollick et al., 2001). Our findings demonstrate that arm movements play a dominant role in observers' perception of emotional expressions even when movements of the entire body are presented, here for the example of emotionally expressive gait. The role of movement activation has been demonstrated in published studies by statistical analysis of the relationship between, e.g., movement velocity or acceleration and observers' emotion judgements, a relationship that holds even for phase-scrambled point-light stimuli (Pollick et al., 2001). The somewhat stronger differentiation of movements according to emotional valence that that study found for original compared to phase-scrambled versions suggests a role of postural or phase relationships between limb segments for emotion judgements, similar to the findings reported for the different stimulus manipulations of full-body expressions (Atkinson et al., 2007). Our study goes beyond these studies in presenting quantitative data on the roles of individual features in full-body emotional expressions, allowing a precise description of both postural and kinematic influences on the perception of emotional body expressions. 
A crucial way to underscore the significance of the emotion-specific movement and posture features extracted in Experiment 1, and further investigated in Experiment 2, is a causal test of the importance of the extracted features for emotion perception. Such a study would involve manipulating the intensity with which a candidate feature is present in the moving figure, rather than relying on the natural variation present in the recorded data set, and testing the consequences of this manipulation for emotion perception. This approach was taken in Experiment 3. 
Experiment 3: Completeness of the extracted feature set
In Experiment 1 we applied a statistical analysis to extract a minimum set of maximally informative features from the trajectories of emotionally expressive gait; Experiment 2 demonstrated that this distinct set of postural and kinematic cues was highly informative about the expression of emotions in human gait. In Experiment 3 we went one step further toward testing whether the extracted set of emotion-specific features captures the crucial aspects of the emotion-specific movement information, by comparing the effects on emotion perception of natural emotional gait patterns with those of patterns containing only the features showing the largest emotion-specific changes. This experiment was possible because the model given by Equation 2 is generative and thus allowed us to synthesize gait patterns that include only the features exhibiting the largest emotion-related changes in Experiment 1. 
To measure the emotion-specific perceptual effects induced by these stimuli we exploited a paradigm that is based on high-level after-effects in motion recognition. High-level after-effects were first described for the perception of static face pictures (Leopold et al., 2001; Webster et al., 2004), but they have recently been reported to apply to the perception of point-light biological motion stimuli too (Jordan et al., 2006; Troje et al., 2006): extended observation of a male gait pattern results in a bias for judging a gender-neutral motion morph between male and female gait as female, while adaptation with a female gait pattern results in the opposite bias, i.e., a tendency to judge the neutral pattern as male. Such high-level after-effects might also arise for emotional gait stimuli, providing a tool for comparing emotion-specific perceptual effects induced by emotional gait stimuli that contain different types of features. 
In Experiment 3 we compared the adaptation effect induced by natural emotional gait patterns with the adaptation following presentation of artificial emotional walkers that exhibited only the largest critical features extracted in Experiment 1 ( Figures 2 and 3). In order to limit the duration of the experiment, we constrained our analysis to the emotions happy and sad, providing an example of a positive emotion associated with high activation and an example of a negative emotion associated with low activation. By testing with motion morphs between happy and sad walking and comparing the sizes of the induced after-effects, we were able to investigate whether the extracted feature set adequately captures the critical information driving the perception of emotional style. The reasoning behind this experiment is that an artificial walker presenting an efficient set of emotion-specific features should induce high-level after-effects comparable in size to those induced by adaptation with a real emotional walker. 
Methods
In the adaptation experiment we compared the influence of different types of happy and sad adapting stimuli on the discrimination of stimuli falling along a continuum between sad and happy gait. The adaptor preceding the test stimulus was either a natural sad or happy walk, or an artificial stimulus exhibiting only a small number of critical features of these emotions. The test stimuli were generated by morphing between a natural happy and a natural sad walk, providing different intermediate levels of emotional style between these two emotions. In order to rule out simple low-level motion after-effects as a major source of the observed adaptation, all presented gait patterns were resampled so as to guarantee that the average speed of the feet was 1.35 m/s (the average of the speeds for sad and happy walking). We verified that the average speed of the other markers did not differ significantly between the different gait styles. This ensured that the motion-energy spectra of the different adaptors were very similar. 
Natural and artificial adaptor stimuli
All stimuli in Experiment 3 were derived from one example each of sad, happy and neutral gait executed by the same individual. The selected emotional gaits had been correctly classified by more than 85% of the observers in Experiment 2, and they had received expressiveness ratings in the highest quartile. As natural adaptors we used the happy and sad walk (one cycle) of this actor. In order to minimize the influence of low-level motion adaptation we rendered all stimuli (adaptation and test stimuli) to have the same gait-cycle duration as the neutral prototype. 
The artificial adaptors were based on the neutral gait of the same person. To this pattern we added the two largest postural and kinematic changes for sad and happy walking as they had been extracted in Experiment 1. For generating the artificial sad-gait stimulus, we approximated the trajectories of neutral walking by Equation 2, and then modified the weights by adding the population average of the weight difference between sad and neutral walking for the shoulder and the elbow joints. These two joints had shown the maximum differences between sad and neutral walking (shoulder joints: −0.67, elbow joints: −0.79; opposite joints were treated symmetrically). Likewise, for the joints with the largest posture changes between sad and neutral walking, we added the population average of the posture-angle differences between sad and neutral walking (−18.9 deg for the head, and −16.6 deg for the elbow joints). Correspondingly, the artificial happy gait was generated by adding the weight changes between happy and neutral walking to the weights of the shoulder and elbow (shoulder: +0.42 and elbow +0.61), the two joints showing the largest emotion-specific change relative to neutral walking. In this case, elbow and head showed the largest changes of the posture angles compared to neutral walking, and we added 2.5 deg to the elbow flexion angle and 6.3 deg to the head inclination. As an example, the artificial happy adaptor stimulus is shown in Movie 4; a schematic sketch of this construction is given below the movie caption. 
 
Movie 4
 
Artificial happy adaptor. Example of adapting stimulus used in the adaptation experiment. To generate this stimulus, the two largest posture and movement changes in happy gait relative to neutral were added to the trajectories of a neutral-gait trial. See text for details.
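Equation 2 expresses each joint-angle trajectory as a weighted combination of a small number of time-shifted source functions. The following sketch assumes that general form, with made-up source functions, baseline weights and delays, and illustrates how population-average weight and posture changes like those quoted above (+0.61 for the elbow weight, +2.5 deg elbow flexion, relative to neutral) could be added to a neutral trajectory; which source function each weight change applies to is assumed here for illustration.

```python
import numpy as np

def synthesize_joint_angle(t, sources, weights, delays, posture_offset):
    """Weighted sum of time-shifted source functions plus a constant posture term."""
    angle = np.full_like(t, posture_offset, dtype=float)
    for s, w, tau in zip(sources, weights, delays):
        angle += w * s(t - tau)
    return angle

# Hypothetical source functions at the fundamental and double gait frequency.
gait_period = 1.2   # s, placeholder
s1 = lambda t: np.sin(2 * np.pi * t / gait_period)
s2 = lambda t: np.sin(4 * np.pi * t / gait_period)
t = np.linspace(0.0, gait_period, 120)

# 'Neutral' elbow trajectory; baseline weights, delays and offset are arbitrary,
# so the relative size of the added changes is not meaningful here.
neutral = synthesize_joint_angle(t, [s1, s2], [10.0, 3.0], [0.0, 0.05], 15.0)

# Artificial 'happy' version: add the average elbow weight change (applied here
# to the first source for illustration) and the average elbow posture change.
happy = synthesize_joint_angle(t, [s1, s2], [10.0 + 0.61, 3.0], [0.0, 0.05],
                               15.0 + 2.5)
```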
Test stimuli: Happy-sad morphs
The test stimuli were motion morphs between the selected sad and happy gait described in the last section. By morphing we created a continuum of expressions between happy and sad walking. Morphing was based on spatio-temporal morphable models (Giese & Poggio, 2000), a method which generates morphs by linearly combining prototypical movements exploiting a spatio-temporal correspondence algorithm. The method has previously been shown to produce morphs with high degrees of realism for rather dissimilar movements (Giese & Lappe, 2002) and even for complex movements such as martial-arts techniques (Mezger, Ilg, & Giese, 2005). 
For the generation of the stimuli we used only the two emotional gaits as prototypes. In the different test conditions the weight of the sad prototype was set to the values 0.2, 0.3, 0.38, 0.42, 0.46, 0.5, 0.54, 0.58, 0.62, 0.7, and 0.8; in a pilot experiment, these values had been determined as optimal for sampling the relevant region of the response curves. The weights of the happy prototype were always chosen such that the sum of the morphing weights was equal to one. 
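The linear combination step of the morphing can be sketched as follows, assuming the prototypes have already been brought into spatio-temporal correspondence (the correspondence algorithm itself is not shown); the prototype arrays are placeholders.

```python
import numpy as np

def morph(happy, sad, w_sad):
    """Linearly combine two time-aligned prototype trajectories.

    happy, sad: arrays of shape (n_frames, n_joints) in corresponding frames;
    w_sad is the morphing weight of the sad prototype, and the weights sum to one.
    """
    return (1.0 - w_sad) * happy + w_sad * sad

# Morphing weights of the sad prototype used for the test stimuli (from the text).
sad_weights = [0.2, 0.3, 0.38, 0.42, 0.46, 0.5, 0.54, 0.58, 0.62, 0.7, 0.8]

# Placeholder prototypes standing in for the time-aligned joint-angle trajectories.
happy_proto = np.zeros((120, 17))
sad_proto = np.ones((120, 17))
test_stimuli = [morph(happy_proto, sad_proto, w) for w in sad_weights]
```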
Participants
In Experiment 3, eight participants (five female, three male; mean age 22 years 11 months, ages ranging from 20 years 8 months to 27 years 11 months) were tested individually. They completed each of the five blocks within a maximum of approximately 12 to 15 minutes and were allowed to take up to five minutes' break between blocks. All had normal or corrected-to-normal vision, were students at the University of Tübingen, and were paid for their participation. 
Task and procedure
Experiment 3 was based on a discrimination task. On individual trials, the test stimulus was presented, which participants had to classify as ‘sad’ or ‘happy’ by pressing one of two keys. Except in the no-adaptation block, each presentation of the test stimulus was preceded by the block's adapting stimulus (Natural Happy, Artificial Happy, Natural Sad, or Artificial Sad), presented for 8 s and followed immediately by a noise mask presented for 260 ms. The mask comprised 49 darker gray dots on the uniform gray background, moving along a planar projection of the trajectories of human arm movements. Each dot moved about a randomly chosen position, and with random phase. Fully extended, the mask had an approximate size of 5 × 9.5 degrees of visual angle. Following the mask, the test stimulus was presented for a maximum of 2 s, followed by a gray screen with a response prompt; stimulus presentation was interrupted immediately after the subject's response. 
The experiment comprised five blocks altogether, each consisting of 55 trials in random order, corresponding to five presentations of each test stimulus. The no-adaptation block always came first, followed by the four adaptation blocks in random order: two with artificial adaptors (happy or sad) and two with natural adaptors (happy or sad). 
Results
Figure 7 shows the response curves (proportions of ‘sad’ responses as a function of the morphing weight of the sad prototype in the test stimulus) for the five adaptation conditions. All curves were well described by sigmoid functions fitted separately for the individual subjects. Based on these fits we determined the ‘ambiguity points’ (AP), i.e. the morph levels at which subjects gave sad and happy responses equally often (Jordan et al., 2006). These values formed the basis of our statistical analysis; a schematic sketch of the fitting procedure is given below Figure 7. 
Figure 7
 
Adaptation of emotion perception. High-level after-effects induced by artificial emotion stimuli, containing only the two largest critical posture and kinematic features, in comparison with after-effect following presentation of natural emotionally expressive gait. Mean proportions of ‘sad' responses are shown as a function of the linear weight of the sad prototype in the test stimulus. Responses for the condition without presentation of adaptors shown in black. Green (blue): responses following presentation of happy (sad) adaptor; solid lines and filled circles represent results for adaptation with the artificial adaptors, while dashed lines and open circles represent responses for natural adaptors.
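The fitting of the individual response curves and the extraction of the ambiguity point can be sketched as follows, assuming a simple two-parameter logistic function (the exact sigmoid used is not specified in the text) and placeholder response proportions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(w, w0, slope):
    """Proportion of 'sad' responses as a function of the sad morphing weight."""
    return 1.0 / (1.0 + np.exp(-(w - w0) / slope))

# Morphing weights of the sad prototype and hypothetical response proportions
# for one subject in one adaptation condition (placeholders, not real data).
w = np.array([0.2, 0.3, 0.38, 0.42, 0.46, 0.5, 0.54, 0.58, 0.62, 0.7, 0.8])
p_sad = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 0.9, 1.0, 1.0])

(w0, slope), _ = curve_fit(logistic, w, p_sad, p0=[0.5, 0.1])

# For this parameterization, the ambiguity point (50% 'sad' responses)
# coincides with the fitted midpoint w0.
print(f"ambiguity point: {w0:.3f}")
```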
The different adaptation conditions clearly influenced the measured psychometric functions. First, evidently, the curves obtained for the ‘happy’ and ‘sad’ adaptors were shifted in opposite directions away from the baseline curve (black) obtained for the no-adaptation blocks; the statistical significance of this effect was confirmed by separate repeated-measures ANOVAs for both the happy ( F 2, 14 = 5.64, p = 0.016) and the sad adaptor ( F 2, 14 = 12.60, p = 0.009) with the three-level factor Adaptor (levels: no-adaptor, artificial-adaptor and natural-adaptor). Crucially, the shifts induced by both artificial adaptors were significantly different from baseline: presenting the artificial sad adaptor shifted the AP to the right (mean ± SEM 0.62 ± 0.043 compared to 0.57 ± 0.040 for no adaptation; t 7 = −2.28, p = 0.029), while presenting the artificial happy adaptor shifted it to the left (0.48 ± 0.041, t 7 = 2.95, one-tailed p = 0.011). 
The shifts associated with presentation of the natural adaptors (happy: 0.42 ± 0.023, sad: 0.65 ± 0.029) were larger for both emotions than those induced by the artificial adaptors, but neither of these differences reached statistical significance, as reflected in the non-significant effect of adaptor naturalness in a repeated-measures ANOVA on all APs for stimuli preceded by an adaptor ( F 1, 7 = 0.063, p = 0.45). There was thus no significant difference between the high-level after-effects induced by natural and artificial adapting stimuli. 
Discussion
Using test stimuli generated by motion morphing, Experiment 3 compared the high-level after-effects induced by natural emotional (sad and happy) walks with those induced by artificial stimuli that contained only the most prominent posture and dynamic features extracted from the analysis of the motor patterns of emotionally expressive gait in Experiment 1. First and foremost, our results demonstrate emotion-specific high-level after-effects in the perception of emotions expressed in human full-body movement. More specifically, they show that the dominant features extracted by analyzing motor behavior ( Experiment 1) had a powerful effect on perceivers' sensitivity to emotional expression, shifting it in the same direction as the natural adaptors did. The extent of these shifts was not significantly smaller than that of the shifts induced by the natural adaptors, which implies that the extracted feature sets are complete in the sense that they capture the aspects crucial for determining observers' perception of emotional body expressions. 
Morphed stimuli might have been expected to yield rating curves following a strictly sigmoidal shape, as has been demonstrated for morphs of facial expressions of emotion (Etcoff & Magee, 1992). Further experiments could be designed to test whether the ratings given by participants were relatively close to following a linear relationship with morphing weight due to any fundamental differences between the expressiveness of morphed facial and bodily expressions, or whether this was due to the fact that only a limited range of morphing weights was tested. 
General discussion
We investigated the influence of posture and dynamic cues on the expression and recognition of emotions in gait. Our study combined an analysis of the motor behavior, based on the angle trajectories of the major joints, with an analysis of the perception of emotions from gait, including discriminative judgments between different emotions and ratings of the intensity of emotional expression. Beyond classical statistical techniques, our analysis exploited advanced methods from machine learning in order to derive an easily interpretable parameterization of dynamic trajectory features and to select sets of highly informative features. In addition, we studied the influence of average gait velocity on the emotional expressiveness of gaits by comparing the trajectories and perceptual judgements of emotional gaits with those of speed-matched neutral gaits. This analysis confirmed the strong influence of movement speed on the perception of emotions from gait, but it also provided evidence of additional emotion-specific posture and dynamic features that cannot be explained by variations in gait speed alone. Altogether, we found a well-defined set of movement and posture features that were critical both for the expression and for the perception of emotional gaits. In addition, exploiting high-level motion after-effects, we showed that artificial stimuli containing only these critical features produce emotion-specific after-effects similar to those of natural emotionally expressive gaits, which confirms the high relevance of the extracted feature set. 
The majority of recorded movements were categorized as expressing the emotion that the actor intended to express, indicating a high validity of the recorded expressions. Expressions of anger were recognized at the lowest rate (around 70%), and the highest recognition rate was achieved for sadness expressions (around 90%). These rates are comparable to those reported in studies based on expressions executed by professional actors (e.g. Grèzes et al., 2007), countering the possible criticism that the emotional expressions of non-professional actors might not be sufficiently expressive or convincing. 
The recognizability of emotions in the classification experiment depended critically on actor gender: gaits intended to express fear were recognized at a rate roughly 30 percentage points higher when the actor was female than when the actor was male. We assume that this effect arose at the encoding stage, since the female actors showed stronger fear-related modifications of their body postures than did the male actors. Other studies considering the gender of the encoder have observed similar effects for aggression, which is attributed more readily to male actors than to female actors (de Meijer, 1991; Pollick, Lestou, Ryu, & Cho, 2002). This effect might be related to stereotypes about (Henley & Harmon, 1985; Spiegel & Machotka, 1974) or actual (Brescoll & Uhlmann, 2008) gender differences in the frequency of aggressive behaviors. Alternatively, females' body movements might be less compatible with anger-related movement qualities, such as force (Pollick et al., 2001; Wallbott, 1998). For facial expressions, females are generally more expressive than males, across emotions (Zuckerman, Lipets, Koivumaki, & Rosenthal, 1975). Our present finding of a female advantage for fear expression could also be related to social context: the presence of the female experimenter might have differentially influenced male and female participants. However, effects of social context have only rarely been demonstrated for emotionally expressive behavior in adults (Fridlund, 1990; Fridlund, Kenworthy, & Jaffey, 1992). While the present study does not provide a conclusive answer about the causes of this gender difference, the sheer size of the effect makes it worthy of further investigation. 
Influence of gait speed
Movement speed had a strong influence on the perception and expression of emotions in gait. First, in the classification experiment confusions preferentially occurred between emotions that shared a similar level of movement activation: angry gaits tended to be confused with happy gaits, and sad gaits with fearful ones. This pattern of confusions also matched the results of our discriminant analysis, which revealed a strong influence of gait speed on the classification results, and it is consistent with the classical hypothesis that the general level of movement activity is an important variable for the perception of emotions from movements (Ekman, 1965; Montepare et al., 1987; Pollick et al., 2001; Sawada et al., 2003; Wallbott, 1998). The importance of gait speed for emotion classification is also supported by our results on the classification of speed-matched neutral gaits: they were usually classified as expressing the affect associated with a similar average gait speed. Compared to facial expressions of emotion, bodily expressions therefore seem to bear a stronger relationship between visual cues and the dimension of emotional activation or arousal (Osgood, 1966; Schlosberg, 1954; Wundt, 2004), with some of these cues (e.g. retraction as a cue for fear) potentially even revealing underlying ‘action tendencies’ (Frijda, 1988). In this respect, emotional body expressions more closely resemble the expression of emotions in vocal prosody than in the face: in prosody, anger and happiness are associated with intensity increases relative to neutral speech. In contrast to facial expressions of emotion (Scherer, Johnstone, & Klasmeyer, 2003), our results revealed for emotional body expressions a stronger tendency to confuse affects associated with similar levels of activation. 
Beyond the influence of gait speed, our study provides strong evidence for additional emotion-specific dynamic and postural features. Emotion classification was substantially more consistent for emotional gaits than for speed-matched neutral gaits. In addition, we observed characteristic asymmetries in the emotion attribution to the speed-matched neutral gaits: neutral gaits velocity-matched to angry gaits were more often classified as expressing happiness (nearly 40%) than neutral gaits speed-matched to happiness were classified as angry (20%). This asymmetry was observed even though the velocity of angry gaits (1.80 ± 0.25 m/s) substantially exceeded the speed of happy gaits (1.41 ± 0.19 m/s). Likewise, nearly 60% of the slow neutral gaits were classified as expressing sadness rather than fear, an asymmetry observed even though head inclination, which provides an additional diagnostic cue for the expression of sadness (Wallbott, 1998), was never observed in slow neutral walking. Such asymmetries in the confusion patterns suggest that neutral gait lacks emotion-specific postural and/or dynamic features that are not induced by changes in gait speed, in particular limb flexion, which appears to be a necessary prerequisite for the attribution of fear. Interestingly, previous studies have also indicated a stronger influence of postural cues compared to dynamic cues for expressing fear (Atkinson et al., 2007; Dittrich et al., 1996) and anger (Aronoff, Woike, & Hyman, 1992) than for expressing sadness or happiness. 
Previous studies have demonstrated an effect of velocity on different parameters, even in neutral gait: arm-swing amplitude, for instance, increases with walking speed (Donker et al., 2001; Donker, Mulder, Nienhuis, & Duysens, 2002), and so does the duration of knee flexion during the stance phase (Kirtley, 2006). In order to characterize emotion-specific posture and movement features that go beyond such effects induced by gait speed, we compared the values of posture and movement parameters for emotionally expressive gaits with the same parameters extracted from speed-matched neutral gait. This quantitative analysis demonstrates that for affects associated with high gait velocities, there was even higher movement activity (indicated by the weights of sources in particular for the arm joints) compared to speed-matched neutral gait. Likewise, for affects associated with low gait velocities, the weights of several source functions were decreased relative to velocity-matched neutral gaits, indicating lower movement activity than for speed-matched neutral gaits. In summary, these observations entail that the emotion-specific dynamic features in some sense ‘exaggerate’ the effects induced by emotion-specific speed changes. 
Contrasting with the dynamic features, body posture was hardly affected by gait velocity, and the same posture features were identified for emotions associated with slow speed as for emotions with high speed. Emotion-specific body posture is thus modulated largely independently of gait velocity. 
Role of posture and movement features for emotion perception
The mathematical parameterization we developed allowed us to separate posture and dynamic features of emotional gaits. For all tested emotions we found a significant influence of both types of features. For example, as previously discussed, gait speed and other dynamic features strongly influenced the perception of emotion. At the same time, limb flexion represented an important feature for the perception of anger and fear, while the perception of sadness was influenced most strongly by head inclination. In general, the critical features extracted from the rating and from the classification experiments were in good agreement with one another. Moreover, we found a surprising concordance between the most prominent movement and posture features extracted from the analysis of the movement trajectories and the features derived from the perception experiments, a concordance further confirmed by the third experiment, which showed that the critical features extracted from motor behavior induced strong high-level after-effects in the perception of emotional gaits. These results imply that the perceptual system efficiently extracts the dominant trajectory features from body motion stimuli, consistent with other experiments demonstrating that the visual perception of body motion, and the underlying neural representations, are ‘veridical’ in the sense that they reflect the metrics of physical differences between joint-motion trajectories (Giese & Lappe, 2002; Giese, Thornton, & Edelman, 2008; Vangeneugden et al., 2008). 
Our findings are in line with previous experiments showing correlations between observed kinematic features and perceived emotional expressiveness (Coulson, 2004; de Meijer, 1989, 1991; Montepare et al., 1999; Schouwstra & Hoogstraten, 1995; Sogon & Masutani, 1989; Wallbott, 1998). 
The finding that posture cues are especially important for the recognition of anger is consistent with the observation that the presence of angular arrangements between body segments influences observers' attribution of anger to body poses (Aronoff et al., 1992), while vertical body extension appears to encode valence (de Meijer, 1989). A greater reliance on posture cues for the perception of anger and fear as compared to expressions of sadness or happiness is also consistent with the observation that full-light presentation improved the recognition of anger and fear expressions more than of happiness and sadness expressions, compared with the recognition of the same stimuli under point-light conditions (Atkinson et al., 2004). 
However, our study also reveals a number of features that have not been reported in the published literature, for instance, the widespread posture changes observed during fear expressions, and the leg-movement changes especially during expressions of anger and fear. Another surprising finding was the observation of lateral asymmetries, studied more systematically in a recent study using the same set of trajectories, and showing that the left body side is more emotionally expressive than the right (Roether et al., 2008). 
Together, these observations show that systematic approaches for the automatic identification of relevant features have the advantage of identifying all informative posture and dynamic features within the given parameterization. Previous approaches based on sets of heuristically pre-selected features and on subjective ratings are prone to miss important information, especially if features are not obvious or intuitive, so that observers do not expect prominent emotion-specific changes to occur. 
Implications for the relationship between different affects
Expressions of both fear and anger were characterized by increased limb flexion (in particular of the elbow and hip joints) compared to happiness and sadness. Emotions related to danger thus appear to be associated with increased postural tension. Increased muscle tension has been reported during the experience of angry and fearful states: the experience of anger is associated with feelings of tension and bodily strength (Bartlett & Izard, 1972), and the intensity of the subjective experience of fear correlates with the degree of contraction of different facial muscles (Izard, 1977). In addition, muscle contraction and body rigidity represent prominent characteristics of the subjective experience of hypnosis-induced fear (Bull, 1951). Such increases in muscle tension seem appropriate given that anger and fear are states related to the organism's preparation for attack or flight. The posture adopted by an individual might thus be part of a preparatory response of the organism, potentially associated with an activation of the sympathetic nervous system (Gellhorn, 1964). It remains speculative whether the importance of posture cues for the perception of fear and anger expressions has evolutionary origins (Atkinson et al., 2007), e.g. being advantageous for fast processing of emotions that are relevant in dangerous situations ('alarm hypothesis'; Walk & Homan, 1984). 
Interestingly, the postural similarities discussed in the preceding paragraph contrast with the similarities in movement between the affects: happiness and anger are similar in terms of movement, and so are fear and sadness. The observers' particularly strong reliance on limb posture for the perception of anger and fear suggests that the analysis of posture cues helps to disambiguate emotions that are associated with similar dynamic cues or, more generally, similar movement activation. Compared with facial emotion expression, where confusions tend to occur between emotions of the same valence, the importance of postural tension for distinguishing between emotions seems to contrast with previous accounts, in which it has been assumed that (for non-facial expressions) posture carries information about the gross affective state, while body movement reveals information about the nature of the emotion (Ekman & Friesen, 1967). 
Outlook and limitations
The methodology we developed for assessing critical features in motion stimuli can be applied to a broad range of studies on body movement, far beyond the domain of emotional body expressions. The method we developed for parameterizing the dynamic features of body-movement trajectories has the advantage, compared with methods such as Fourier series, of not being restricted to periodic movements. It has been successfully applied to the modeling of highly complex non-periodic movements, including even martial-arts techniques (Mukovskiy, Park, Omlor, Slotine, & Giese, 2008). Our methodology could also profitably be applied to other questions within the domain of body motion, such as the attractiveness of body movements or communicative body expressions. Beyond body movement, the same method can also be applied to the study of dynamic facial expressions (O'Toole et al., 2002). As for the statistical methods we propose, sparse regression could also be applied to extract features influencing observers' ratings of completely different types of stimuli. 
Beyond the transfer of methodology, several future studies are motivated by limitations of the present study. First, we tested only a small class of movements: gait expressing four emotions. The investigation of more general principles of emotional body expression, allowing the establishment of potential 'universals' of dynamic body expressions, similar to previous work on facial expressions of emotion (Ekman, 1992), critically requires the study of much broader movement classes, including in particular non-periodic movements, communicative gestures and goal-directed actions (Grezes et al., 2007), as well as the study of expressions in different cultures and in varying contexts, specifically including spontaneous emotional expressions recorded in naturalistic scenarios (Kleinsmith, De Silva, & Bianchi-Berthouze, 2006). Likewise, the study in Experiment 3 had to be limited to two emotions. In the context of a more extended experimental study, the same approach could easily be applied to a detailed comparison of many different emotion pairs, including multiple sets of critical features for each emotion. 
Second, the relationship between the movements we studied and emotional body expressions is not necessarily bidirectional. Although all actors underwent mood induction prior to recording, and although observers could usually recognize the intended affect, neither of these results unequivocally proves that these movements represent universally valid bodily expressions of emotion. A first indication of the specificity of the expressions our study is based on is provided by our finding that emotion classification was not strongly affected by including neutral as a stimulus and response category. The vast majority of neutral gaits were classified as neutral, which shows that there are characteristic differences between neutral and emotionally expressive body movements that observers can use to distinguish between them. This is an important result in light of the finding that observers in a forced-choice situation attribute emotional states even to simple, static geometric shapes with above-chance consistency (Pavlova, Sokolov, & Sokolov, 2005). However, a conclusive answer to the question of the relationship between expressive body movements and emotions would require monitoring the emotional experience of the actors more closely, perhaps by parallel assessment of psychophysiological measures (Cacioppo, Berntson, Larsen, Poehlmann, & Ito, 2000). In the absence of such data, one might also consider subjective mood ratings as a possible method for assessing the affective changes experienced by the actors. However, we chose not to collect such ratings, since we feared that this additional introspective step might disturb the immediacy of the actors' emotional experience. In addition, subjective reports of mood states are subject to strong demand effects, actors being inclined to report stronger effects, presumably in order to conform with the experimenter's intentions (Westermann et al., 1996). 
Third, the nature of the stimulus material is a key issue in any study of emotional expression, since it strongly influences the validity of the possible conclusions. In order to record expressions as close as possible to spontaneous expressions of emotion, we chose not to record the movements of professional actors. Although skilled at producing highly expressive body movements, professionally trained actors potentially use overlearned, standardized ways of emotional expression that they know will evoke the intended reactions in observers. Individuals from the general population, without any acting experience, represent the other extreme. One half of our data set was recorded with such novices; the other half was performed by university students who had at most two years' experience of acting in lay theatre groups but no formal acting training. Although the latter group reported less inhibition during the recording of emotional movements than did the novices, we combined the data of both groups for analysis throughout this study, since there were no statistically significant differences in recognizability between the movements executed by the two groups. It remains an interesting question for future research to work out in detail the possible differences between emotional expressions recorded with naïve and professionally trained actors. 
Fourth, the majority of our analyses were based on correlations between observed natural differences between trajectories and emotion perception. An exception was Experiment 3 in which we tested the causal relationship between selected emotion-specific features and the induced emotion-specific adaptation effects. The methodology we developed, which at the same time provides a generative model for emotional body expressions, opens up a whole range of possibilities for detailed studies investigating causal relationships between movement features and perceptual judgments. Such studies could investigate the relationship between parametric variations of individual features and their perceptual effects. Finding out how such features are integrated seems an especially interesting question, and ideal-observer models exploiting Bayesian approaches for cue fusion (Ernst & Banks, 2002; Knill & Richards, 2008; Maloney, 2002) provide a powerful approach for such questions (Roether et al., in press). 
Finally, a potential limitation of our study lies in the fact that our features were defined on the basis of the three-dimensional joint trajectories. One might validly object that features relevant for visual perception should be defined in the domain of two-dimensional images and that they are not necessarily related to the structure of the joint trajectories, at least not in a simple way. It thus seems interesting to compare the results of our and related studies (Pollick et al., 2001; Sawada et al., 2003) with future studies that try to extract informative features directly from image sequences, e.g. by extending techniques such as classification images (Eckstein & Ahumada, 2002) or 'bubbles' (Gosselin & Schyns, 2001) to space-time (Lu & Liu, 2006; Thurman & Grossman, 2008). 
Appendix A
As described above, we modeled the trajectories by applying a blind source separation algorithm that learns independent components that are linearly combined with joint-specific time delays (Omlor & Giese, 2007a, 2007b). The joint-angle trajectories x_i(t) were thus approximated by linear superpositions of the statistically independent source signals (basis functions) s_j(t), weighted by the mixing weights α_ij (Equation A1). The model incorporates phase differences between different limbs by allowing for time delays τ_ij between source signals and angle trajectories: 
\[ x_i(t) = \sum_{j=1}^{n} \alpha_{ij}\, s_j(t - \tau_{ij}). \]
(A1)
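To illustrate the structure of Equation A1, the following minimal Python sketch synthesizes joint-angle trajectories from given source functions, mixing weights and joint-specific delays. The function name, the array shapes and the FFT-based (circular) implementation of the time shifts are assumptions made for this sketch, not the implementation used in the study; the circular shift is only appropriate for approximately periodic signals such as gait.

```python
import numpy as np

def synthesize_trajectories(sources, weights, delays, dt):
    """Illustrative sketch of the mixture model in Equation A1.

    sources : array (n_sources, T)        source signals s_j(t), sampled with step dt
    weights : array (n_joints, n_sources) mixing weights alpha_ij
    delays  : array (n_joints, n_sources) time delays tau_ij, in seconds
    Returns an array (n_joints, T) of approximated joint-angle trajectories x_i(t).
    """
    n_sources, T = sources.shape
    n_joints = weights.shape[0]
    freqs = np.fft.rfftfreq(T, d=dt)            # frequency axis of the real FFT
    S = np.fft.rfft(sources, axis=1)            # Fourier transforms of the sources
    X = np.zeros((n_joints, T))
    for i in range(n_joints):
        spectrum = np.zeros(S.shape[1], dtype=complex)
        for j in range(n_sources):
            # a delay by tau_ij corresponds to multiplication with exp(-2*pi*i*f*tau_ij);
            # for periodic gait trajectories the circular shift is a reasonable approximation
            spectrum += weights[i, j] * S[j] * np.exp(-2j * np.pi * freqs * delays[i, j])
        X[i] = np.fft.irfft(spectrum, n=T)
    return X
```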
Exploiting the framework of time-frequency analysis (Wigner-Ville transformation) and, critically, the fact that the sources are mutually uncorrelated, this relationship can be transformed into the following identities for the Fourier transforms of the trajectories and the source signals (Omlor & Giese, 2007a): 
\[ |F x_i(\omega)|^2 = \sum_j \alpha_{ij}^2\, |F s_j(\omega)|^2, \]
(A2)
\[ |F x_i(\omega)|^2\, \frac{\partial}{\partial\omega}\arg\!\left(F x_i(\omega)\right) = \sum_j \alpha_{ij}^2\, |F s_j(\omega)|^2 \left(\frac{\partial}{\partial\omega}\arg\!\left(F s_j(\omega)\right) + \tau_{ij}\right). \]
(A3)
These two equations can be solved by iterating the following two steps until convergence is achieved:
1. Solve Equation A2 by applying source separation methods with an additional positivity constraint, such as non-negative PCA (Oja & Plumbley, 2003), positive ICA (Hojen-Sorensen, Winther, & Hansen, 2002) or non-negative matrix factorization (NMF) (Lee & Seung, 1999). This is justified because the only difference between Equation A2 and the standard instantaneous mixing model of PCA or ICA is that all variables are non-negative. (A minimal sketch of this step is given after this list.)
2. Solve Equation A3 numerically, given the results of the preceding step. The solution provides the unknown delays τ_ij and the phases of the Fourier transforms of the source signals, arg(F s_j). To separate these two variables, we estimate τ_ij in a separate step, which is then iterated with the solution of Equation A3.
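As an illustration of step 1, the following Python sketch factorizes the power spectra of the joint-angle trajectories with an off-the-shelf non-negative matrix factorization. The function and variable names, the use of sklearn's NMF (rather than non-negative PCA or positive ICA) and the parameter settings are assumptions of the sketch, not the procedure used in the study; step 2, the recovery of delays and source phases, is sketched separately below.

```python
import numpy as np
from sklearn.decomposition import NMF

def step1_amplitudes(trajectories, n_sources):
    """Illustrative sketch of step 1 (Equation A2): factorize the power spectra
    |F x_i(omega)|^2 into non-negative weights alpha_ij^2 and source power
    spectra |F s_j(omega)|^2.  All names are hypothetical."""
    X = np.fft.rfft(trajectories, axis=1)        # Fourier transforms of the x_i(t)
    power = np.abs(X) ** 2                       # |F x_i(omega)|^2, non-negative
    nmf = NMF(n_components=n_sources, init="nndsvda", max_iter=2000)
    W = nmf.fit_transform(power)                 # approximates alpha_ij^2
    H = nmf.components_                          # approximates |F s_j(omega)|^2
    return np.sqrt(W), np.sqrt(H)                # amplitude weights and source spectra
```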
This separate step for delay estimation exploits the phase information in the Fourier domain. The Fourier transform of a delayed signal simply corresponds to the original Fourier transform multiplied by a complex exponential that depends on the time shift. Assuming the signal x_2(t) is a scaled and time-shifted copy of the signal x_1(t), such that x_2(t) = α x_1(t − τ), the following relationship holds in the Fourier domain (with \(\overline{z}\) denoting the complex conjugate of z): 
\[ F x_1(\omega)\cdot\overline{F x_2(\omega)} = \alpha\, |F x_1(\omega)|^2\, e^{2\pi i \omega \tau}. \]
(A4)
Equation A4 implies that \(\arg\!\left(F x_1(\omega)\cdot\overline{F x_2(\omega)}\right) = 2\pi\omega\tau\), which has to hold for all frequencies. The delay can thus be estimated by linear regression, concatenating the equations for a set of different frequencies, with τ specifying the slope of the regression line. Equation A4 shows how the complex phase of the cross-spectrum is connected with the unknown delay τ_ij.
If the two signals x_1 and x_2 are influenced by additive Gaussian noise, it can be shown that the delay can be estimated by linear regression using the equation 
\[ \arg\!\left(F x_1(\omega)\cdot\overline{F x_2(\omega)}\right) = 2\pi\omega\tau + \varepsilon(\omega), \]
(A5)
where ε(ω) is a composite noise term. Under appropriate assumptions, the estimated slope 2πτ of this regression line is the best unbiased linear estimator (Chan, Hattin, & Plant, 1978). 
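For illustration, the following Python sketch estimates the delay between two signals by regressing the unwrapped cross-spectrum phase onto frequency, following Equations A4 and A5. The names, the optional frequency cutoff and the plain least-squares fit are assumptions of the sketch rather than the estimator used in the study.

```python
import numpy as np

def estimate_delay(x1, x2, dt, n_freqs=None):
    """Illustrative sketch of Equations A4/A5: estimate the delay of x2 relative
    to x1 from the slope of the cross-spectrum phase.  Names, the optional
    frequency cutoff and the plain least-squares fit are assumptions."""
    F1, F2 = np.fft.rfft(x1), np.fft.rfft(x2)
    freqs = np.fft.rfftfreq(len(x1), d=dt)
    cross = F1 * np.conj(F2)                    # F x1(omega) * conj(F x2(omega))
    phase = np.unwrap(np.angle(cross))          # arg(...), unwrapped over frequency
    if n_freqs is not None:                     # optionally use only the lowest frequencies,
        freqs, phase = freqs[:n_freqs], phase[:n_freqs]   # where gait signals carry most power
    slope = np.polyfit(freqs, phase, deg=1)[0]  # regression slope = 2*pi*tau
    return slope / (2.0 * np.pi)
```

Under these conventions, a circularly shifted copy of a smooth periodic signal, e.g. x2 = np.roll(x1, k), is recovered with a delay of approximately k·dt.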
Since the time delays for the individual joints varied only weakly between the different emotions (Omlor & Giese, 2007b), we constrained all delays belonging to the same joint to the same value across all emotions (i.e., τ_ij = τ_kj if i and k specify the same joint and source, but different emotions). This constraint resulted in a higher interpretability of the mixing weights. Assuming we want to estimate a common delay from the time shifts between a reference signal x_0(t) and the signals x_l(t), 1 ≤ l ≤ L, we can concatenate all regression equations belonging to the same joint into the vector relationship 
\[ \mathbf{c} = \left(\arg\!\left(F x_l(\omega)\cdot\overline{F x_0(\omega)}\right)\right)_{1\le l\le L} = \omega\tau\cdot 2\pi\,\mathbf{u} + \boldsymbol{\varepsilon}(\omega), \]
(A6)
where the vector c contains the cross-spectrum phases of the different signals, and u is a vector of ones. Concatenating these equations over different values of the frequency ω results in a regression problem from which the joint delay can be estimated in the same way as from Equation A5. 
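A corresponding sketch of the pooled estimation in Equation A6 simply concatenates the cross-spectrum phases of all signals that share the delay before fitting a single regression slope. Names and the pooling strategy are again illustrative assumptions, and the sketch keeps the sign convention of the per-pair estimator above (reference signal times complex conjugate of the delayed signal).

```python
import numpy as np

def estimate_common_delay(signals, x0, dt):
    """Illustrative sketch of the pooled regression (Equation A6): a single delay
    shared by several signals x_l(t) relative to the reference x0(t).
    Names and the pooling strategy are assumptions of the sketch."""
    freqs = np.fft.rfftfreq(len(x0), d=dt)
    F0 = np.fft.rfft(x0)
    stacked_freqs, stacked_phases = [], []
    for x in signals:
        # same convention as the per-pair estimator above: F x0 * conj(F x_l)
        cross = F0 * np.conj(np.fft.rfft(x))
        stacked_freqs.append(freqs)
        stacked_phases.append(np.unwrap(np.angle(cross)))
    f = np.concatenate(stacked_freqs)            # concatenate over signals and frequencies
    p = np.concatenate(stacked_phases)
    slope = np.polyfit(f, p, deg=1)[0]           # pooled slope = 2*pi*tau
    return slope / (2.0 * np.pi)
```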
Appendix B
The sparseness parameter λ in Equation 6 is a free parameter of our analysis method. Large values of this parameter result in highly compact models with few features but limited approximation quality, while small values lead to better-fitting models with more features. One might ask whether there is an optimal value of this parameter that yields the best trade-off between prediction error and model complexity. 
In statistics, several methods for optimal sparsification have been developed. One such method (Tibshirani, 1996) is based on minimizing the generalized cross-validation (GCV) error of the sparsified model. It can be shown that the GCV error is given by: 
\[ \mathrm{GCV}(\lambda) = \frac{\|Y - X\beta\|_2^2}{n\left(1 - \frac{p(\lambda)}{n}\right)^2}, \]
(B1)
where p(λ) signifies the number of active parameters of the model and n is the number of variables (dimensionality of β). It can be shown that the number of active parameters is given by the relationship 
\[ p(\lambda) = \mathrm{trace}\!\left(X\left(X^{T}X + \lambda W^{-1}\right)^{-1}X^{T}\right) - n_0 \]
(B2)
with W^{-1} being the generalized inverse of the matrix W = diag(2|β_j|) and n_0 signifying the number of zero entries in the vector of regression coefficients (i.e., β_j = 0). This number is determined after solving the constrained regression problem described in Equation 6 for all values of the sparseness parameter λ. An optimal estimate for the sparseness parameter λ_opt can thus be determined by solving the minimization problem 
\[ \lambda_{\mathrm{opt}} = \arg\min_{\lambda}\, \mathrm{GCV}(\lambda). \]
(B3)
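To make the selection procedure concrete, the following Python sketch scans a grid of candidate values of λ and returns the value minimizing the GCV criterion of Equations B1 and B2. The use of sklearn's Lasso as a stand-in for the constrained regression of Equation 6, the mapping between its regularization parameter and λ, the numerical threshold for zero coefficients and all names are assumptions of the sketch, not the procedure used in the study.

```python
import numpy as np
from sklearn.linear_model import Lasso

def gcv_select_lambda(X, Y, lambdas):
    """Illustrative sketch of Equations B1-B3: choose the sparseness parameter
    by minimizing the generalized cross-validation error over a grid of
    candidate values.  All names and settings are hypothetical."""
    n = X.shape[1]                                    # dimensionality of beta, as in the text
    best_lam, best_gcv = None, np.inf
    for lam in lambdas:
        beta = Lasso(alpha=lam, fit_intercept=False, max_iter=10000).fit(X, Y).coef_
        n_zero = int(np.sum(np.abs(beta) < 1e-10))    # n_0: number of zeroed coefficients
        W_inv = np.linalg.pinv(np.diag(2.0 * np.abs(beta)))  # generalized inverse of W
        hat = X @ np.linalg.pinv(X.T @ X + lam * W_inv) @ X.T
        p_lam = np.trace(hat) - n_zero                # Equation B2
        rss = np.sum((Y - X @ beta) ** 2)
        gcv = rss / (n * (1.0 - p_lam / n) ** 2)      # Equation B1
        if gcv < best_gcv:
            best_lam, best_gcv = lam, gcv
    return best_lam
```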
 
Acknowledgments
We thank W. Ilg for assistance in the motion-capture lab, J. Scharm for help with data collection, and T. Flash, T. Hendler, A. Berthoz and M. Pavlova for interesting discussions. We are grateful to two anonymous reviewers for constructive comments on the manuscript. The work was supported by the Volkswagenstiftung, DFG SFB 550 and the Forschergruppe Perceptual Graphics, HFSP, and the EC FP6 project COBOL. Additional support was provided by the Hermann and Lilly Schilling Foundation and the fortüne program of the University Clinic Tübingen. 
Commercial relationships: none. 
Corresponding author: Martin A. Giese. 
Email: martin.giese@uni-tuebingen.de. 
Address: Section for Computational Sensomotorics, Hertie Institute for Clinical Brain Research & Center for Integrative Neuroscience, Frondsbergstr. 23, 72070 Tübingen, Germany. 
References
Aronoff, J. Woike, B. A. Hyman, L. M. (1992). Which are the stimuli in facial displays of anger and happiness? Configurational bases of emotion recognition. Journal of Personality and Social Psychology, 62, 1050–1066. [CrossRef]
Atkinson, A. P. Dittrich, W. H. Gemmell, A. J. Young, A. W. (2004). Emotion perception from dynamic and static body expressions in point-light and full-light displays. Perception, 33, 717–746. [PubMed] [CrossRef] [PubMed]
Atkinson, A. P. Tunstall, M. L. Dittrich, W. H. (2007). Evidence for distinct contributions of form and motion information to the recognition of emotions from body gestures. Cognition, 104, 59–72. [PubMed] [CrossRef] [PubMed]
Bartlett, E. S. Izard, C. E. Izard, C. E. (1972). A dimensional and discrete emotions investigation of the subjective experience of emotion. Patterns of emotions: A new analysis of anxiety and depression. New York: Academic Press.
Bartlett, M. S. Movellan, J. R. Sejnowski, T. J. (2002). Face recognition by independent component analysis. IEEE Transactions on Neural Networks, 13, 1450–1464. [PubMed] [CrossRef] [PubMed]
Bassili, J. N. (1978). Facial motion in the perception of faces and of emotional expression. Journal of Experimental Psychology: Human Perception and Performance, 4, 373–379. [PubMed] [CrossRef] [PubMed]
Bassili, J. N. (1979). Emotion recognition: The role of facial motion and the relative importance of upper and lower areas of the face. Journal of Personality and Social Psychology, 37, 2049–2059. [PubMed] [CrossRef] [PubMed]
Bertenthal, B. I. Pinto, J. (1994). Global processing of biological motions. Psychological Science, 5, 221–225. [CrossRef]
Bortz, J. (1993). Statistik für Sozialwissenschaftler. (6 ed). Heidelberg: Springer-Verlag.
Brainard, D. H. (1997). The psychophysics toolbox. Spatial Vision, 10, 433–436. [PubMed] [CrossRef] [PubMed]
Brescoll, V. L. Uhlmann, E. L. (2008). Can an angry woman get ahead? Status conferral, gender, and expression of emotion in the workplace. Psychological Science, 19, 268–275. [PubMed] [CrossRef] [PubMed]
Bull, N. (1951). The attitude theory of emotion (vol. 81). New York: Johnson Reprint Corporation.
Cacioppo, J. T. Berntson, G. G. Larsen, J. T. Poehlmann, K. M. Ito, T. A. Lewis, R. Haviland-Jones, J. M. (2000). The psychophysiology of emotion. The handbook of emotion. (pp. 173–191). New York: Guilford Press.
Calder, A. J. Burton, A. M. Miller, P. Young, A. W. Akamatsu, S. (2001). A principal component analysis of facial expressions. Vision Research, 41, 1179–1208. [PubMed] [CrossRef] [PubMed]
Carey, S. Diamond, R. (1994). Are faces perceived as configurations more by adults than by children? Visual Cognition, 1, 253–274. [CrossRef]
Chan, Y. T. Hattin, R. V. Plant, J. B. (1978). The least squares estimation of time delay and its use in signal detection. IEEE Transactions on Acoustics, Speech, and Signal Processing, 26, 217–222. [CrossRef]
Cichocki, A. Amari, S. (2002). Adaptive blind signal and image processing: Learning algorithms and applications. Chichester, UK: John Wiley & Sons.
Coulson, M. (2004). Attributing emotion to static body postures: Recognition accuracy, confusions, and viewpoint dependence. Journal of Nonverbal Behavior, 28, 117–139. [CrossRef]
d'Avella, A. Bizzi, E. (2005). Shared and specific muscle synergies in natural motor behaviors. Proceedings of the National Academy of Sciences of the United States of America, 102, 3076–3081. [PubMed] [Article] [CrossRef] [PubMed]
Davis, B. L. Vaughan, C. L. (1993). Phasic behavior of EMG signals during gait: Use of multivariate statistics. Journal of Electromyography and Kinesiology, 3, 51–60. [CrossRef] [PubMed]
de Gelder, B. (2006). Towards the neurobiology of emotional body language. Nature Reviews Neuroscience, 7, 242–249. [PubMed] [CrossRef] [PubMed]
de Meijer, M. (1989). The contribution of general features of body movement to the attribution of emotions. Journal of Nonverbal Behavior, 13, 247–268. [CrossRef]
de Meijer, M. (1991). The attribution of aggression and grief to body movements: The effect of sex-stereotypes. European Journal of Social Psychology, 21, 249–259. [CrossRef]
Dittrich, W. H. (1993). Action categories and the perception of biological motion. Perception, 22, 15–22. [PubMed] [CrossRef] [PubMed]
Dittrich, W. H. Troscianko, T. Lea, S. E. Morgan, D. (1996). Perception of emotion from dynamic point-light displays represented in dance. Perception, 25, 727–738. [PubMed] [CrossRef] [PubMed]
Donker, S. F. Beek, P. J. Wagenaar, R. C. Mulder, T. (2001). Coordination between arm and leg movements during locomotion. Journal of Motor Behavior, 33, 86–102. [PubMed] [CrossRef] [PubMed]
Donker, S. F. Mulder, T. Nienhuis, B. Duysens, J. (2002). Adaptations in arm movements for added mass to wrist or ankle during walking. Experimental Brain Research, 146, 26–31. [PubMed] [CrossRef] [PubMed]
Eckstein, M. P. Ahumada, A. J.Jr. (2002). Classification images: A tool to analyze visual strategies. Journal of Vision, 2,1: i, http://journalofvision.org/2/1/i/, doi:10.1167/2.1.i. [PubMed] [Article] [CrossRef]
Ekman, P. (1965). Differential communication of affect by head and body cues. Journal of Personality and Social Psychology, 2, 726–735. [PubMed] [CrossRef] [PubMed]
Ekman, P. (1969). The repertoire of nonverbal behavior: Categories, origins, usage and coding. Semiotica, 1, 49–98. [CrossRef]
Ekman, P. (1992). Are there basic emotions? Psychological Review, 99, 550–553. [PubMed] [CrossRef] [PubMed]
Ekman, P. Friesen, W. V. (1967). Head and body cues in the judgment of emotion: A reformulation. Perceptual and Motor Skills, 24, 711–724. [PubMed] [CrossRef] [PubMed]
Ekman, P. Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17, 124–129. [PubMed] [CrossRef] [PubMed]
Ekman, P. Friesen, W. V. (1978). Facial action coding system. Palo Alto: Consulting Psychologists Press.
Ekman, P. Sorenson, E. R. Friesen, W. V. (1969). Pan-cultural elements in facial displays of emotion. Science, 164, 86–88. [PubMed] [CrossRef] [PubMed]
Elfenbein, H. A. Foo, M. D. White, J. B. Tan, H. H. Aik, V. C. (2007). Reading your counterpart: The benefit of emotion recognition accuracy for effectiveness in negotiation. Journal of Nonverbal Behavior, 31, 205–223. [CrossRef]
Ellison, J. W. Massaro, D. W. (1997). Featural evaluation, integration, and judgment of facial affect. Journal of Experimental Psychology: Human Perception and Performance, 23, 213–226. [PubMed] [CrossRef] [PubMed]
Ernst, M. O. Banks, M. S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415, 429–433. [PubMed] [CrossRef] [PubMed]
Etcoff, N. L. Magee, J. J. (1992). Categorical perception of facial expressions. Cognition, 44, 227–240. [PubMed] [CrossRef] [PubMed]
Fridlund, A. J. (1990). Sociality of solitary smiling: Potentiation by an implicit audience. Journal of Personality and Social Psychology, 60, 229–240. [CrossRef]
Fridlund, A. J. Kenworthy, K. G. Jaffey, A. K. (1992). Audience effects in affective imagery: Replication and extension to dysphoric memory. Journal of Nonverbal Behavior, 16, 191–212. [CrossRef]
Frijda, N. H. (1988). The laws of emotion. American Psychologist, 43, 349–358. [PubMed] [CrossRef] [PubMed]
Fu, W. J. (1998). Penalized regressions: The bridge versus the lasso. Journal of Computational and Graphical Statistics, 7, 397–416.
Gellhorn, E. (1964). Motion and emotion: The role of proprioception in the physiology and pathology of the emotions. Psychological Review, 71, 457–472. [PubMed] [CrossRef] [PubMed]
Giese, M. A. Lappe, M. (2002). Measurement of generalization fields for the recognition of biological motion. Vision Research, 42, 1847–1858. [PubMed] [CrossRef] [PubMed]
Giese, M. A. Poggio, T. (2000). Morphable models for the analysis and synthesis of complex motion patterns. International Journal of Computer Vision, 38, 59–73. [CrossRef]
Giese, M. A. Thornton, I. Edelman, S. (2008). Metrics of the perception of body movement. Journal of Vision, 8(9):13, 1–18, http://journalofvision.org/8/9/13/, doi:10.1167/8.9.13. [PubMed] [Article] [CrossRef] [PubMed]
Golubitsky, M. Stewart, I. Buono, P. L. Collins, J. J. (1999). Symmetry in locomotor central pattern generators and animal gaits. Nature, 401, 693–695. [PubMed] [CrossRef] [PubMed]
Gosselin, F. Schyns, P. G. (2001). Bubbles: A technique to reveal the use of information in recognition tasks. Vision Research, 41, 2261–2271. [PubMed] [CrossRef] [PubMed]
Grèzes, J. Pichon, S. de Gelder, B. (2007). Perceiving fear in dynamic body expressions. Neuroimage, 35, 959–967. [PubMed] [CrossRef] [PubMed]
Hancock, P. J. Burton, A. M. Bruce, V. (1996). Face processing: Human perception and principal components analysis. Memory and Cognition, 24, 21–40. [PubMed] [CrossRef] [PubMed]
Henley, N. M. Harmon, S. Ellyson, S. L. Dovidio, J. F. (1985). The nonverbal semantics of power and gender. Power, dominance, and nonverbal behavior. New York: Springer.
Hojen-Sorensen, P. A. d. F. R. Winther, O. Hansen, L. K. (2002). Mean-field approaches to independent component analysis. Neural Computation, 14, 889–918. [PubMed] [CrossRef] [PubMed]
Ivanenko, Y. P. Cappellini, G. Dominici, N. Poppele, R. E. Lacquaniti, F. (2005). Coordination of locomotion with voluntary movements in humans. Journal of Neuroscience, 25, 7238–7253. [PubMed] [Article] [CrossRef] [PubMed]
Ivanenko, Y. P. Poppele, R. E. Lacquaniti, F. (2004). Five basic muscle activation patterns account for muscle activity during human locomotion. Journal of Physiology, 556, 267–282. [PubMed] [Article] [CrossRef] [PubMed]
Izard, C. E. (1977). Human emotions. New York: Plenum Press.
Jolliffe, I. T. (2002). Principal component analysis (2 ed.). New York: Springer.
Jordan, H. Fallah, M. Stoner, G. R. (2006). Adaptation of gender derived from biological motion. Nature Neuroscience, 9, 738–739. [PubMed] [CrossRef] [PubMed]
Kamachi, M. Bruce, V. Mukaida, S. Gyoba, J. Yoshikawa, S. Akamatsu, S. (2001). Dynamic properties influence the perception of facial expressions. Perception, 30, 875–887. [PubMed] [CrossRef] [PubMed]
Kirtley, C. (2006). Clinical gait analysis. (1 ed.) Oxford: Churchill Livingstone.
Kleinsmith, A. De Silva, R. Bianchi-Berthouze, N. (2006). Cross-cultural differences in recognizing affect from body posture. Interacting with Computers, 18, 1371–1389. [CrossRef]
Knill, D. C. Richards, W. (2008). Perception as Bayesian inference. (1 ed.). Cambridge: Cambridge University Press.
Lee, D. D. Seung, H. S. (1999). Learning the parts of objects by non-negative matrix factorization. Nature, 401, 788–791. [PubMed] [CrossRef] [PubMed]
Leopold, D. A. O'Toole, A. J. Vetter, T. Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89–94. [PubMed] [CrossRef] [PubMed]
Lu, H. Liu, Z. (2006). Computing dynamic classification images from correlation maps. Journal of Vision, 6(4):12, 475–483, http://journalofvision.org/6/4/12/, doi:10.1167/6.4.12. [PubMed] [Article] [CrossRef]
Maloney, L. T. (2002). Illuminant estimation as cue combination. Journal of Vision, 2(6):6, 493–504, http://journalofvision.org/2/6/6/, doi:10.1167/2.6.6. [PubMed] [Article] [CrossRef]
Mather, G. Radford, K. West, S. (1992). Low-level visual processing of biological motion. Proceedings in Biological Sciences, 249, 149–155. [PubMed] [CrossRef]
Meinshausen, N. Rocha, G. Yu, B. (2007). A tale of three cousins: Lasso, L2Boosting and Dantzig. Annals of Statistics, 35, 2373–2384. [CrossRef]
Merkle, L. A. Layne, C. S. Bloomberg, J. J. Zhang, J. J. (1998). Using factor analysis to identify neuromuscular synergies during treadmill walking. Journal of Neuroscience Methods, 82, 207–214. [PubMed] [CrossRef] [PubMed]
Mezger, J. Ilg, W. Giese, M. A. (2005). Trajectory synthesis by hierarchical spatio-temporal correspondence: Comparison of different methods. Paper presented at the ACM SIGGRAPH Symposium on Applied Perception in Graphics and Visualization, A Coruna, Spain.
Montepare, J. M. Goldstein, S. B. Clausen, A. (1987). The identification of emotions from gait information. Journal of Nonverbal Behavior, 11, 33–42. [CrossRef]
Montepare, J. M. Koff, E. Zaitchik, D. Albert, M. (1999). The use of body movements and gestures as cues to emotions in younger and older adults. Journal of Nonverbal Behavior, 23, 133–152. [CrossRef]
Mukovskiy, A. Park, A.-N Omlor, L. Slotine, J. J. Giese, M. A. (2008). Selforganization of character behavior by mixing of learned movement primitives. Konstanz, Germany: The 13th International FallWorkshop of Vision, Modeling, and Visualization (VMV).
Nocedal, J. Wright, S. (2006). Numerical optimization. (2nd ed.) Berlin: Springer.
O'Toole, A. J. Roark, D. A. Abdi, H. (2002). Recognizing moving faces: A psychological and neural synthesis. Trends in Cognitive Sciences, 6, 261–266. [PubMed] [CrossRef] [PubMed]
Oja, E. Plumbley, M. (2003). Blind separation of positive sources using non-negative PCA. Paper presented at the 4th International Symposium on Independent Component Analysis and Blind Signal Separation (ICA), Nara, Japan.
Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9, 97–113. [PubMed] [CrossRef] [PubMed]
Olree, K. S. Vaughan, C. L. (1995). Fundamental patterns of bilateral muscle activity in human locomotion. Biological Cybernetics, 73, 409–414. [PubMed] [CrossRef] [PubMed]
Omlor, L. Giese, M. A. Schölkopf, B. Platt, J. Hoffman, T. (2007a). Blind source separation for over-determined delayed mixtures. Advances in neural information processing systems. (vol. 19, pp. 1049–1056). Cambridge, MA: MIT Press.
Omlor, L. Giese, M. A. (2007b). Extraction of spatio-temporal primitives of emotional body expressions. Neurocomputing, 70, 1938–1942. [CrossRef]
Osgood, C. E. (1966). Dimensionality of the semantic space for communication via facial expressions. Scandinavian Journal of Psychology, 7, 1–30. [PubMed] [CrossRef] [PubMed]
Padgett, C. Cottrell, G. (1995). Identifying emotion in static face images. Paper presented at the 2nd Joint Symposium on Neural Computation. La Jolla, CA, University of California, San Diego.
Pavlova, M. Sokolov, A. Sokolov, A. A. (2005). Perceived dynamics of static images enables emotional attribution. Perception, 34, 1107–1116. [PubMed] [CrossRef] [PubMed]
Pinto, J. Shiffrar, M. (1999). Subconfigurations of the human form in the perception of biological motion displays. Acta Psychologica (Amsterdam), 102, 293–318. [PubMed] [CrossRef]
Pollick, F. E. Lestou, V. Ryu, J. Cho, S. B. (2002). Estimating the efficiency of recognizing gender and affect from biological motion. Vision Research, 42, 2345–2355. [PubMed] [CrossRef] [PubMed]
Pollick, F. E. Paterson, H. M. Bruderlin, A. Sanford, A. J. (2001). Perceiving affect from arm movement. Cognition, 82, B51–61. [PubMed] [CrossRef] [PubMed]
Roether, C. L. Omlor, L. Giese, M. A. (2008). Lateral asymmetry of bodily emotion expression. Current Biology, 18, R329–330. [PubMed] [CrossRef] [PubMed]
Roether, C. L. Omlor, L. Giese, M. A. Masson, G. Ilg, U. (in press). Features in the recognition of emotions from dynamic bodily expression. Dynamics of visual motion processing: Neuronal, behavioral and computational approaches. Berlin, Heidelberg: Springer.
Santello, M. Flanders, M. Soechting, J. F. (2002). Patterns of hand motion during grasping and the influence of sensory guidance. Journal of Neuroscience, 22, 1426–1435. [PubMed] [Article] [PubMed]
Santello, M. Soechting, J. F. (1997). Matching object size by controlling finger span and hand shape. Somatosensory and Motor Research, 14, 203–212. [PubMed] [CrossRef] [PubMed]
Sawada, M. Suda, K. Ishii, M. (2003). Expression of emotions in dance: Relation between arm movement characteristics and emotion. Perceptual and Motor Skills, 97, 697–708. [PubMed] [CrossRef] [PubMed]
Scherer, K. R. Johnstone, T. Klasmeyer, G. Davidson, R. J. (2003). Vocal expression of emotion. Handbook of affective sciences. (pp. 433–456). Oxford: Oxford University Press.
Schlosberg, H. (1954). Three dimensions of emotion. Psychological Review, 61, 81–88. [PubMed] [CrossRef] [PubMed]
Schmidt, K. L. Cohn, J. F. (2001). Human facial expressions as adaptations: Evolutionary questions in facial expression research. Yearbook of Physical Anthropology, 44, 3–24. [PubMed] [Article] [CrossRef]
Schouwstra, S. J. Hoogstraten, J. (1995). Head position and spinal position as determinants of perceived emotional state. Perceptual and Motor Skills, 81, 673–674. [PubMed] [CrossRef] [PubMed]
Schyns, P. G. Petro, L. S. Smith, M. L. (2007). Dynamics of visual information integration in the brain for categorizing facial expressions. Current Biology, 17, 1580–1585. [PubMed] [CrossRef] [PubMed]
Sogon, S. Masutani, M. (1989). Identification of emotion from body movements: A cross-cultural study of Americans and Japanese. Psychological Reports, 65, 35–46. [CrossRef]
Spiegel, J. Machotka, P. (1974). Messages of the body. (1 ed.) New York: Free Press.
Tanaka, J. W. Farah, M. J. (1993). Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology A: Human Perceptual Psychology, 46, 225–245. [PubMed] [CrossRef]
Thurman, S. M. Grossman, E. D. (2008). Temporal “Bubbles” reveal key features for point-light biological motion perception. Journal of Vision, 8(3):28, 1–11, http://journalofvision.org/8/3/28/, doi:10.1167/8.3.28. [PubMed] [Article] [CrossRef] [PubMed]
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B, 58, 267–288.
Tresch, M. C. Cheung, V. C. d'Avella, A. (2006). Matrix factorization algorithms for the identification of muscle synergies: Evaluation on simulated and experimental data sets. Journal of Neurophysiology, 95, 2199–2212. [PubMed] [Article] [CrossRef] [PubMed]
Troje, N. F. (2002). Decomposing biological motion: A framework for analysis and synthesis of human gait patterns. Journal of Vision, 2(5), 371–387, http://journalofvision.org/2/5/2/, doi:10.1167/2.5.2. [PubMed] [Article] [CrossRef] [PubMed]
Troje, N. F. Sadr, J. Geyer, H. Nakayama, K. (2006). Adaptation aftereffects in the perception of gender from biological motion. Journal of Vision, 6(8):7, 850–857, http://journalofvision.org/6/8/7/, doi:10.1167/6.8.7. [PubMed] [Article] [CrossRef]
Turk, M. Pentland, A. (1991). Eigen faces for recognition. Journal of Cognitive Neuroscience, 3, 71–86. [CrossRef] [PubMed]
Unuma, M. Anjyo, K. Takeuchi, R. (1995). Fourier principles for emotion-based human figure animation. Motion signal processing. Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques (pp. 91–96). New York: ACM Press.
Valentin, D. Abdi, H. Edelman, B. O'Toole, A. J. (1997). Principal component and neural network analyses of face images: What can be generalized in gender classification? Journal of Mathematical Psychology, 41, 398–413. [PubMed] [CrossRef] [PubMed]
Vangeneugden, J. Pollick, F. Vogels, R. (2008). Cerebral Cortex.
Vapnik, V. N. (1999). An overview of statistical learning theory. IEEE Transactions on Neural Networks, 10, 988–999. [CrossRef] [PubMed]
Walk, R. D. Homan, C. P. (1984). Emotion and dance in dynamic light displays. Bulletin of the Psychonomic Society, 22, 437–440. [CrossRef]
Wallbott, H. G. (1998). Bodily expression of emotion. European Journal of Social Psychology, 28, 879–896. [CrossRef]
Wallbott, H. G. Scherer, K. R. (1986). Cues and channels in emotion recognition. Journal of Personality and Social Psychology, 51, 690–699. [CrossRef]
Webster, M. A. Kaping, D. Mizokami, Y. Duhamel, P. (2004). Adaptation to natural facial categories. Nature, 428, 557–561. [PubMed] [CrossRef] [PubMed]
Westermann, R. Spies, K. Stahl, G. Hesse, F. W. (1996). Relative effectiveness and validity of mood induction procedures: A meta-analysis. European Journal of Social Psychology, 26, 557–580. [CrossRef]
Wundt, W. (2004). Grundriss der Psychologie. (1 ed.) Saarbrücken: Vdm Verlag Dr Müller.
Xu, H. Dayan, P. Lipkin, R. M. Qian, N. (2008). Adaptation across the cortical hierarchy: Low-level curve adaptation affects high-level facial-expression judgments. Journal of Neuroscience, 28, 3374–3383. [PubMed] [Article] [CrossRef] [PubMed]
Yacoob, Y. Black, M. J. (1999). Parameterized modeling and recognition of activities. Computer Vision and Image Understanding, 73, 232–247. [CrossRef]
Zuckerman, M. Lipets, M. S. Koivumaki, J. H. Rosenthal, R. (1975). Encoding and decoding nonverbal cues of emotion. Journal of Personality and Social Psychology, 32, 1068–1076. [CrossRef] [PubMed]
Figure 1
 
Joint angles and stimulus design. The joint angles describe rotation around the three axes defining flexion (red, marked flex), abduction (blue, marked abd) and rotation angles (green, marked rot).
Figure 2
 
Emotion-specific posture effects. (A) Regression weights from the sparse regression analysis for the posture changes for emotional relative to neutral gait for six different joints (averaging the data for corresponding bilateral joints), for head (He), spine (Sp), shoulder (Sh), elbow (El), hip (Hi) and knee (Kn) joints. Color code as in flanking color bar. Signs (+ and −) indicate critical posture features reported in previous psychophysics experiments on the perception of emotional body expressions. (B) Mean ± SEM posture change (in rad), for emotional relative to neutral gait for six different joints (averaging the values for bilateral joints). Emotions are indicated by different colors of the bars. For head and spine, negative values indicate increased inclination; upper-arm retraction is indicated by negative shoulder flexion. For elbow, hip and knee positive values indicate increased flexion. Asterisks mark significant posture changes (p < 0.05).
Figure 3
 
Emotion-specific dynamic features. (A) Regression coefficients from sparse regression based on the weight differences between emotional and neutral walking for the first source that explains the maximum of the variance (see Methods). The signs indicate corresponding features derived from psychophysical experiments (see text) for left (L) or right (R) side of body. Joint abbreviations as in Figure 2A. (B) Mean ± SEM of differences in mixing weights between emotional and neutral gait extracted by the novel algorithm, for the first source function (s_1). Emotions are color-coded, and asterisks mark significant weight changes (p < 0.05). (C) Features extracted by PCA, plotted in the same way as in (A); significant features do not match the results from psychophysics.
Figure 4
 
Emotion-specific kinematic effects, relative to velocity-matched neutral gait. Mean ± SEM difference in linear weights for the first (four darker gray bars) and second (lighter gray bars) source function. Joint abbreviations as in Figure 2, asterisks mark significant differences between emotional and speed-matched neutral walking (* p < 0.05; ** p < 0.01). (A) Anger, (B) Fear, (C) Happiness, (D) Sadness.
Figure 5
 
Relationship between posture features and perceived intensity of emotional expression. Weights β_k of the posture features derived by sparse regression, where emotional-expression intensity was predicted by a sparsified linear combination of the posture angles. Weights are color-coded and plotted as a function of the sparseness parameter λ. Increasing values of this parameter along the vertical axis indicate increasingly sparse models, which are based on fewer and fewer features. Black horizontal lines mark the optimal value of the sparseness parameter λ_opt estimated by GCV (see Appendix B). Mean joint flexion served as measure of posture; joint abbreviations as in Figure 2A. (A) Anger, (B) Fear, (C) Happiness, (D) Sadness.
Figure 6
 
Relationship between dynamic features and perceived intensity of emotional expression. Weights β_i of the dynamic features derived by sparse regression, where emotional-expression intensity was predicted from the dynamic features (weights for the first and second source function). Four emotions (anger, fear, happiness and sadness) shown in rows, as marked. Joint abbreviations as in Figure 2. (A) Weights for the first source function, (B) weights for the second source function. Conventions as in Figure 5.
Figure 7
 
Adaptation of emotion perception. High-level after-effects induced by artificial emotion stimuli, containing only the two largest critical posture and kinematic features, in comparison with the after-effects following presentation of natural emotionally expressive gait. Mean proportions of 'sad' responses are shown as a function of the linear weight of the sad prototype in the test stimulus. Responses for the condition without presentation of adaptors shown in black. Green (blue): responses following presentation of the happy (sad) adaptor; solid lines and filled circles represent results for adaptation with the artificial adaptors, while dashed lines and open circles represent responses for natural adaptors.
Table 1
 
Classification of emotional gaits (N = 20 subjects). Expression intended in the stimuli is shown in columns (75 trials per affect), and mean (± SD) percentages of subjects' responses in rows. Diagonal entries (in bold) mark the percentage of trials in which the movement was classified as expressing the intended emotion.
Response     Anger          Happiness      Fear           Sadness
Anger        70.3 ± 21.4    15.6 ± 11.3     3.2 ± 5.2      1.0 ± 1.4
Happiness    23.2 ± 19.2    75.1 ± 23.0     1.9 ± 4.1      1.2 ± 1.4
Fear          4.7 ± 8.4      6.6 ± 8.6     77.1 ± 14.1     8.0 ± 5.5
Sadness       1.8 ± 3.1      2.7 ± 1.5     17.9 ± 5.7     89.8 ± 5.7
Table 2
 
Classification of velocity-matched neutral gaits (N = 20 subjects). Columns show the emotion for which the gait velocity was matched (22 trials per affect); mean (± SD) percentages of subjects' responses in rows.
Response     Anger          Happiness      Fear           Sadness
Anger        48.8 ± 13.8    20.7 ± 12.2     2.5 ± 6.3      3.7 ± 7.8
Happiness    39.3 ± 10.3    42.6 ± 15.8     8.3 ± 10.9     8.3 ± 13.4
Fear          7.0 ± 2.5     19.0 ± 5.4     28.1 ± 9.4     33.9 ± 10.7
Sadness       5.0 ± 3.6     17.8 ± 6.1     61.2 ± 11.4    54.1 ± 14.1
Table 3
 
Classification of emotional gait including neutral (N = 5 subjects). Columns show stimulus affect (75 trials per affect); mean (± SD) percentages of subjects' responses in rows.
Response     Anger         Happiness     Neutral       Fear           Sadness
Anger        76.0 ± 2.8    14.9 ± 4.2     8.5 ± 2.4     1.9 ± 1.8      0.5 ± 0.7
Happiness    15.5 ± 3.2    65.1 ± 6.5    12.3 ± 3.5     2.9 ± 3.8      1.9 ± 1.8
Neutral       5.3 ± 4.9    18.4 ± 6.2    71.5 ± 3.1     5.1 ± 3.5      3.5 ± 2.2
Fear          1.6 ± 1.7     1.1 ± 0.6     4.0 ± 1.9    80.0 ± 10.0     2.1 ± 0.7
Sadness       1.6 ± 1.5     0.5 ± 0.7     3.7 ± 2.9    10.1 ± 5.4     92.0 ± 3.1