A sex difference in interference between identity and expression judgments with static but not dynamic faces
Author Affiliations
  • Brenda M. Stoesz
    Department of Psychology, University of Manitoba, Winnipeg, Manitoba, Canada
    sbrenda@mymts.net
  • Lorna S. Jakobson
    Department of Psychology, University of Manitoba, Winnipeg, Manitoba, Canada
    Lorna.Jakobson@ad.umanitoba.ca
Journal of Vision, April 2013, Vol. 13(5), Article 26. doi: https://doi.org/10.1167/13.5.26
Abstract
Facial motion cues facilitate identity and expression processing (Pilz, Thornton, & Bülthoff, 2006). To explore this dynamic advantage, we used Garner's speeded classification task (Garner, 1976) to investigate whether adding dynamic cues alters the interactions between the processing of identity and expression. We also examined whether facial motion affected women and men differently, given that women show an advantage for several aspects of static face processing (McClure, 2000). Participants made speeded identity or expression judgments while the irrelevant cue was held constant or varied. Significant interference occurred with both tasks when static stimuli were used (as in Ganel & Goshen-Gottstein, 2004), but interference was minimal with dynamic displays. This suggests that adult viewers are either better able to selectively attend to relevant cues, or better able to integrate multiple facial cues, when viewing moving as opposed to static faces. These gains, however, come with a cost in processing time. Only women showed asymmetrical interference with static faces, with variations in identity affecting expression judgments more than the opposite. This finding may reflect sex differences in global-local processing biases (Godard & Fiori, 2012). Our findings stress the importance of using dynamic displays and of considering sex distributions when characterizing typical face processing mechanisms.

Introduction
Faces are among the most complex and important visual stimuli in our environment. In the real world, faces move, and natural rigid motion (e.g., head turns and nods) and nonrigid changes in the shape of facial features over time (as in unfolding expressions) serve as important social signals (Barton, 2003). Dynamic displays of facial expression do not simply provide redundant static information; rather, the specific spatiotemporal information they convey leads to more accurate and faster recognition compared to that observed with displays that show form cues only (Knappmeyer, Thornton, & Bülthoff, 2003). This dynamic advantage is most evident when task demands are high (Horstmann & Ansorge, 2009), when form information is degraded or distorted as in point-light displays (Bassili, 1978) or morphed sequences (Kamachi et al., 2001), or when expressions are subtle (Ambadar, Schooler, & Cohn, 2005; Bould & Morris, 2008) or synthetic (Wehrle, Kaiser, Schmidt, & Scherer, 2000). In each of these circumstances, the expressions are more difficult to identify and the addition of motion cues is beneficial. Behaviorally, the dynamic advantage may be lost when performance is at ceiling (Fiorentini & Viviani, 2011), but it may still be evident in the form of enhanced neural activation in the core and extended face processing network (Arsalidou, Morris, & Taylor, 2011; Kessler et al., 2011). 
A dynamic advantage is also frequently observed during the processing of facial identity, although, as with expression processing, it is sometimes difficult to demonstrate with unaltered displays (Christie & Bruce, 1998; Knight & Johnston, 1997; Lander, Christie, & Bruce, 1999), or when the task is not sufficiently demanding (Lander et al., 1999). During the processing of familiar faces, a dynamic advantage can be observed under a variety of suboptimal viewing conditions (Knight & Johnston, 1997; Lander, Bruce, & Hill, 2001; Lander et al., 1999), with the greatest effects occurring in the presence of natural (as opposed to slowed or disrupted) movement (Lander & Bruce, 2000; Lander et al., 1999). One might also expect to find a dynamic advantage in a variety of other challenging situations, such as when viewers see faces briefly, match faces shown from different viewpoints, discriminate between unfamiliar individuals, or match faces displaying different kinds of movements. The results from investigations using nondegraded, unfamiliar faces (Pike, Kemp, Towell, & Phillips, 1997; Pilz et al., 2006; Thornton & Kourtzi, 2002) support these predictions. Thornton and Kourtzi (2002) found that participants were quicker to match the identity of a static target to a dynamic prime (moving nonrigidly) than to a static prime when the expressions of the target and prime faces differed. In addition, Pilz et al. (2006) demonstrated a dynamic advantage with nonrigid motion regardless of changes in the viewpoint (i.e., front, left, or right facing) of the prime face, or whether the task involved sequential matching or visual search. 
Researchers investigating the dynamic advantage in identity processing often utilize the nonrigid motion of expressive faces, and the benefit of motion may be particularly evident when viewers match faces expressing different emotions (Thornton & Kourtzi, 2002). One reason for this finding may be that viewers are better able to attend selectively to the identity or expression information in dynamic faces than in static faces, resulting in a greater resistance to interference between the processing of these different facial cues. Alternatively, viewers may be attending to multiple sources of information when processing identity and expression cues in dynamic, as opposed to static, displays—resulting in increased interdependence between the processing of these cues. 
One way to study functional independence or, alternatively, interdependence between the processing of different facial cues is to use Garner's classification task (Garner, 1976). The vast majority of studies using this paradigm to study face processing have employed static displays only (e.g., Baudouin, Martin, Tiberghien, Verlut, & Franck, 2002; Ganel & Goshen-Gottstein, 2004; Ganel, Goshen-Gottstein, & Goodale, 2005; Schweinberger & Soukup, 1998; with the exception of Kaufmann & Schweinberger, 2005). Garner's task was originally designed to examine one's ability to process one dimension of a visual stimulus while ignoring another dimension of the same stimulus. The task typically involves presentation of stimuli in two experimental blocks: baseline (or control) and orthogonal (or filtering). In studies using faces, the baseline block comprises trials in which a relevant dimension (e.g., identity) varies while an irrelevant dimension (e.g., expression) is held constant. Participants make speeded judgments regarding the relevant dimension. Accuracy and response times (RTs) in the baseline block are then compared with performance in the orthogonal block, in which both relevant and irrelevant dimensions vary randomly. Equivalent performance in baseline and orthogonal blocks indicates that one's ability to extract the relevant facial dimension is not influenced by variations in the irrelevant dimension. In contrast, Garner interference occurs when significantly less accurate and/or slower responses occur in the orthogonal block compared to the baseline block; this pattern of results suggests that the relevant dimension cannot be processed independently of the irrelevant dimension.1 
Using unfamiliar static faces, researchers exploring the dependencies between identity and expression processing using a Garner classification task have generally found no interfering effects from expression when making identity judgments, but significant interfering effects from identity when making expression judgments. This asymmetrical pattern of results has been described in adults (Schweinberger & Soukup, 1998; Schweinberger, Burton, & Kelly, 1999) and typically developing children (Krebs et al., 2011; Spangler, Schwarzer, Korell, & Maier-Karius, 2010), and suggests that while systems supporting identity and expression processing may be interconnected, there is only one direction of cross-talk between them. The pattern may change, however, when viewers process even somewhat familiar faces. Viewers may still experience asymmetrical Garner interference with familiar faces, but significant interference occurs in both directions, suggesting a functional interdependence between the processing of these two types of facial cues (Ganel & Goshen-Gottstein, 2004). These observations cast doubt on traditional face-processing models that suggest that the processing of identity and expression cues depends on parallel and functionally independent pathways (Bruce & Young, 1986), subserved by different and largely independent neural structures (Haxby, Hoffman, & Gobbini, 2000). Of course, the existence of specialized brain areas does not provide a strong argument for strict separation on a functional level, as specialized regions in the intact brain might influence one another within the face-processing network (Fox, Moon, Iaria, & Barton, 2009) and within the broader social-cognition system (Beauchamp & Anderson, 2010). Functional interdependence also makes sense if one considers that a familiar person's idiosyncratic (i.e., characteristic) facial expressions (i.e., that individual's “facial motion signatures”) can aid in the determination of his or her identity, just as the unique structure of an individual's face can constrain the way that emotions are expressed (Ganel & Goshen-Gottstein, 2004). 
A key goal of the present study was to determine if the addition of dynamic facial cues alters the strength or nature of the interactions between the processing of identity and expression information. If this was the case, it might explain why dynamic advantages have been observed in many studies of face processing. To investigate this question, we asked adult viewers to complete an identity-matching task and a Garner classification task, in that order. The former task was administered, in part, to familiarize viewers with the identities of the faces and facial expressions used in the Garner task. Given that we were using nondegraded stimuli, we expected a high level of performance on both tasks and, as such, we did not expect to see a dynamic advantage in terms of accuracy or reaction time. However, for the Garner task we hypothesized that dynamic cues would alter the interactions between identity and expression processing, resulting in either increased resistance to interference (reflected in smaller Garner interference scores), or increased interdependence (reflected in larger Garner interference scores). 
A second goal of our study was to examine the impact of participant sex on performance on static and dynamic face processing tasks. Studies comparing the processing of static and dynamic faces have not typically considered participant sex. Moreover, previous reports of asymmetrical Garner interference between the processing of identity and expression in static faces are based on studies in which sex distributions were unequal (with the majority of the participants being women; e.g., Ganel & Goshen-Gottstein, 2004; Kaufmann & Schweinberger, 2005), or in which small sample sizes precluded the exploration of sex differences (Schweinberger & Soukup, 1998). This is unfortunate, as examining sex differences in Garner interference may provide valuable insights into why women outperform men when processing the identity of unfamiliar faces (Godard & Fiori, 2012; McBain, Norton, & Chen, 2009; Megreya, Bindemann, & Havard, 2011) and facial expressions (see McClure, 2000), particularly when the faces being viewed are female (see Herlitz & Rehnman, 2009). Megreya et al. (2011) showed that this face processing advantage was not due to women showing a general superiority in episodic memory. There is evidence to suggest, however, that women's face processing advantage may be especially evident under more demanding task conditions, such as when displays are masked by visual noise (McBain et al., 2009) or when two different facial cues are present. Importantly, as all of the work cited above involved static face stimuli, it is not clear whether sex differences in face processing will also be apparent with dynamic stimuli. In the present study, we looked for evidence of sex differences in the magnitude of any interference effects occurring during the Garner classification task. Specifically, we wondered whether previous observations of asymmetrical interference between identity and expression processing with static faces might be more, or less, apparent with dynamic stimuli, and whether this might vary depending on participants' biological sex. Exploring these questions is important if one is to gain a deeper understanding of the factors that underlie individual differences in perceptual processing. 
Methods
Participants
Our sample consisted of 20 women (aged 18–26 years, M = 20.1, SD = 2.3) and 20 men (aged 18–24 years, M = 19.9, SD = 2.0) from the psychology participant pool at the University of Manitoba, Winnipeg, Canada. All participants had normal or corrected-to-normal visual acuity. 
Materials and procedure
The Human Research Ethics Board at the University of Manitoba approved the testing protocol. Participants provided written informed consent and received partial course credit. Participants were tested one at a time in a quiet room. Each participant completed the identity-matching task first to allow us to: (a) examine static versus dynamic face matching, and (b) familiarize participants with the identities and expressions they would view in the Garner task that followed. The familiarization process helped to ensure that the response options were clear when viewers completed their identity judgments in the Garner task. Note that becoming somewhat familiar with the faces may have increased the likelihood that viewers would be able to extract useful information about facial expression and structure during identity and expression processing, respectively (see Ganel & Goshen-Gottstein, 2004). 
Static face stimuli were supplied by researchers at the Max Planck Institute for Biological Cybernetics, Germany, and have been described in previous work (Pilz et al., 2006). In generating their face database, these researchers filmed actors sitting against a black background and wearing a black cap and scarf that covered the hair and clothes, so that only the face, ears, and neck were visible in the final images. None of the actors wore glasses or jewelry. Each actor was filmed (at full-face viewpoint) expressing several different emotions, at a frame rate of 25 frames per second. From the 26 static images of each actor displaying each emotion that are available for download, we selected images of four female actors, each displaying the emotions of anger and surprise. We used these images to create dynamic stimuli using QuickTime 7 Pro (Apple Inc., Cupertino, CA). The static stimuli used in the present study were the images from each set that depicted the apex of the emotion. Each actor's face subtended 6.6° in both width and height at a viewing distance of 57 cm. Both experiments were presented on a PC monitor using MATLAB (The MathWorks, Inc., Natick, MA). 
Identity-matching task
Face stimuli used in this task included all four faces, showing both facial expressions (surprise, anger). On each trial, a white fixation cross appeared on a black background for 500 ms followed by a screen showing two faces (one to the left and one to the right of center) for 1040 ms (see Figure 1). Participants determined, as quickly and as accurately as possible, whether the identities of the two faces were the same or different. Half of the participants pressed one key for a “same” judgment and another key for a “different” judgment, with the key assignments reversed for the remaining observers. The task consisted of 32 static and 32 dynamic trials, with an equal number of same and different trials in each presentation mode. Each face was shown displaying each facial expression an equal number of times. Static trials preceded dynamic trials and the order of trials within each presentation mode was randomized for each participant. To discourage use of picture-based strategies on “same” trials, the two simultaneously-presented faces always displayed different expressions. Before beginning the experiment, participants completed 20 static and 20 dynamic practice trials. 
Figure 1

Presentation sequence for the identity-matching task. Participants viewed two simultaneously presented static images (static condition) or dynamic sequences (dynamic condition) for 1040 ms. Across trials, participants made same or different judgments via a key press, using their index fingers. Static face stimuli were supplied by researchers at the Max Planck Institute for Biological Cybernetics, Germany, and have been described in previous work (Pilz et al., 2006).
Garner classification tasks
Face stimuli used in this task included two of the four faces used in the identity-matching task (“Jane” and “Anne”), showing both facial expressions (surprise, anger). Participants completed two types of Garner tasks (Identity Judgments, Expression Judgments) in two different presentation modes (Static, Dynamic), for a total of four conditions, the order of which was randomized for each participant. Each condition consisted of a baseline block followed by an orthogonal block. In the baseline block (20 trials, randomly ordered), the relevant task dimension (e.g., Identity: Jane or Anne) varied while the irrelevant dimension (e.g., Expression: surprised or angry) remained constant. The orthogonal block (40 trials, randomly ordered) consisted of all four combinations of the two dimensions (i.e., Jane surprised, Jane angry, Anne surprised, Anne angry). Each trial began with a central white fixation cross on a black background for 500 ms, followed by a 1040 ms presentation of one stimulus face in the center of the screen (see Figure 2 and supplemental movie). The participant made a two-alternative forced-choice response, as quickly and as accurately as possible, and the next trial began immediately after the response. Half of the participants pressed one key for “Jane” or “surprised” (depending on the task) and another key for “Anne” or “angry,” with key assignments reversed for the remaining participants. Before beginning the experiment, participants completed five static practice trials and five dynamic practice trials. 
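To make this block structure concrete, the following sketch shows one way such baseline and orthogonal trial lists could be generated. Only the stimulus labels and block lengths come from the description above; the function names and implementation are our own illustration, not the authors' actual software.

```python
import itertools
import random

IDENTITIES = ("Jane", "Anne")
EXPRESSIONS = ("surprised", "angry")

def baseline_block(task, fixed_level, n_trials=20):
    # Relevant dimension varies; the irrelevant dimension is held constant.
    # task == "identity": expression is fixed at fixed_level.
    # task == "expression": identity is fixed at fixed_level.
    if task == "identity":
        trial_types = [(ident, fixed_level) for ident in IDENTITIES]
    else:
        trial_types = [(fixed_level, expr) for expr in EXPRESSIONS]
    block = trial_types * (n_trials // len(trial_types))
    random.shuffle(block)  # randomly ordered, as described above
    return block

def orthogonal_block(n_trials=40):
    # Both dimensions vary randomly: all four identity x expression combos.
    combos = list(itertools.product(IDENTITIES, EXPRESSIONS))
    block = combos * (n_trials // len(combos))
    random.shuffle(block)
    return block

# Example: identity-judgment condition with expression held at "surprised".
print(baseline_block("identity", "surprised")[:4])
print(orthogonal_block()[:4])
```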
Figure 2

Garner classification tasks. Participants viewed a static image (static condition) or a dynamic sequence (dynamic condition) for 1040 ms and responded with a key press using their index fingers. For the Identity judgment task, participants determined whether the face belonged to “Jane” or to “Anne.” For the Expression judgment task, participants determined whether the expression was one of anger or surprise. Static face stimuli were supplied by researchers at the Max Planck Institute for Biological Cybernetics, Germany, and have been described in previous work (Pilz et al., 2006).
Results
For each participant, trials in which responses occurred outside the window of 200–5000 ms after target onset were eliminated. This represented 3.83% of trials in the identity-matching task, and 0.27% of the trials in the Garner tasks. 
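As an illustration, a trial-exclusion step of this kind, together with the median correct RTs used throughout the Results, can be computed in a few lines. The data frame and its values below are hypothetical, not the study's data.

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial, RT in ms from target onset.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2],
    "rt_ms":       [150, 820, 5400, 975, 1010],
    "correct":     [True, True, False, True, True],
})

# Exclude responses outside the 200-5000 ms window after target onset.
valid = trials[trials["rt_ms"].between(200, 5000)]

# Median RT of correct, valid trials for each participant.
median_rt = valid[valid["correct"]].groupby("participant")["rt_ms"].median()
print(median_rt)
```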
Identity-matching task
Median correct RTs were submitted to a 2 (Participant Sex: Women, Men) × 2 (Presentation Mode: Static, Dynamic) analysis of variance (ANOVA), with repeated measures on the last factor. Results revealed a main effect of Presentation Mode, F(1, 38) = 620.83, p < 0.001, ηp2 = 0.94, indicating that participants were faster making judgments when viewing pairs of static faces (M = 999 ms; SD = 212 ms) compared to pairs of dynamic faces (M = 1621 ms; SD = 247 ms). (See Figure 3.) This did not appear to reflect a speed-accuracy trade-off, as the results of a similar analysis conducted on accuracy scores revealed that accuracy was near ceiling (≥89%) and comparable in men and women across both static and dynamic testing conditions. However, a signal detection analysis did reveal that viewers' sensitivity to dynamic faces (Md' = 3.40, SD = 0.94) was marginally better than their sensitivity to static faces (Md' = 3.04, SD = 0.89), F(1, 38) = 4.02, p = 0.052, ηp2 = 0.096. 
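For readers unfamiliar with signal detection analysis, the sketch below shows one standard way to compute a d' score of this kind. The trial counts are invented, and the log-linear correction is a common convention rather than necessarily the one used in this study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # d' = z(hit rate) - z(false-alarm rate). The +0.5/+1 (log-linear)
    # correction keeps rates away from 0 and 1, where z is undefined.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: treat "same" trials as signal and "different" trials as noise.
print(round(d_prime(hits=15, misses=1, false_alarms=2, correct_rejections=14), 2))
```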
Figure 3

Median correct RT (ms) in the static and dynamic conditions of the identity-matching task. Dots represent data from individual participants, while bars represent group means.
Garner classification tasks
Accuracy scores were submitted to a 2 (Participant Sex: Women, Men) × 2 (Task: Identity Judgments, Expression Judgments) × 2 (Presentation Mode: Static, Dynamic) × 2 (Condition: Baseline, Orthogonal) ANOVA, with repeated measures on the last three factors. Accuracy was slightly higher, overall, during baseline compared to orthogonal trials [97.4% vs. 96.1%; F(1, 38) = 7.12, p < 0.05, ηp2 = 0.158], but this difference—while statistically reliable—was very small. Indeed, accuracy was essentially at ceiling in all conditions of the task for both men and women (≥94.3%), with no other significant main effects or interactions being observed. For this reason, we concluded that any main effects and interactions arising in the RT data were unlikely to reflect speed-accuracy trade-offs. As such, the results presented below focus on median RTs for correctly answered trials. 
Baseline blocks
Median correct RTs were submitted to a 2 (Participant Sex: Women, Men) × 2 (Task: Identity Judgments, Expression Judgments) × 2 (Presentation Mode: Static, Dynamic) ANOVA, with repeated measures on the last two factors.2 Results revealed main effects of Task, F(1, 38) = 17.00, p < 0.001, ηp2 = 0.31, and Presentation Mode, F(1, 38) = 1051.21, p < 0.001, ηp2 = 0.97; comparisons of mean RTs confirmed that viewers were able to extract identity more quickly than expression, and made their judgments more quickly when viewing static as opposed to dynamic stimuli. These effects were mediated by a significant Task × Presentation Mode interaction, F(1, 38) = 5.47, p < 0.03, ηp2 = 0.13. Followup tests on the interaction revealed that the Task effect (difference in RTs between Expression and Identity processing) was slightly larger with dynamic faces (M = 93 ms, SD = 145) than with static faces (M = 40 ms, SD = 97), and that the Mode effect (difference in RTs between Dynamic and Static stimuli) was slightly larger for the Expression task (M = 410 ms, SD = 124) than the Identity task (M = 358 ms, SD = 76) [t(39) > 2.35, p < 0.03 for both contrasts]. (See Figure 4.) 
Figure 4

Median correct RT (ms) in the baseline conditions of the Garner tasks. Dots represent data from individual participants, while bars represent group means.
Garner interference effect
We computed Garner interference scores by subtracting the median RT for correct trials in the baseline block from that seen in the orthogonal block for each participant. Because there were significant differences in the baseline measures for the four test conditions, we divided each participant's Garner interference score in a given condition by his/her median correct RT in the corresponding baseline block. These corrected Garner interference scores reflect the percent change in RT from baseline. 
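As a worked sketch (with made-up medians), the corrected score amounts to a simple percent-change computation:

```python
def corrected_garner_interference(baseline_median_rt, orthogonal_median_rt):
    # Percent change in median correct RT from baseline to orthogonal block.
    return 100.0 * (orthogonal_median_rt - baseline_median_rt) / baseline_median_rt

# Example: a baseline median of 700 ms and an orthogonal median of 770 ms
# yield a corrected Garner interference score of 10%.
print(corrected_garner_interference(700, 770))  # 10.0
```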
We submitted corrected Garner interference scores to a 2 (Participant Sex: Men, Women) × 2 (Task: Identity Judgments, Expression Judgments) × 2 (Presentation Mode: Static, Dynamic) ANOVA, with repeated measures on the last two factors. Results revealed main effects of Task, F(1, 38) = 12.31, p = 0.001, ηp2 = 0.25, and Presentation Mode, F(1, 38) = 36.00, p < 0.001, ηp2 = 0.49. Overall, participants experienced less interference from expression when making identity judgments than vice versa, and less interference from the irrelevant dimension with dynamic than with static faces. These main effects were mediated by significant Task × Presentation Mode, F(1, 38) = 6.30, p < 0.02, ηp2 = 0.14, and Task × Participant Sex interactions, F(1, 38) = 4.71, p < 0.04, ηp2 = 0.11. As these two-way interactions had to be interpreted in light of a significant Task × Presentation Mode × Participant Sex interaction, F(1, 38) = 6.24, p < 0.02, ηp2 = 0.14, we limited followup tests to the three-way interaction, which is depicted in Figure 5. 
Figure 5

Women's and men's mean corrected Garner interference scores in Identity and Expression Tasks for static and dynamic presentation modes. Dots represent data from individual participants, while bars represent group means.
Inspection of Figure 5 suggested that the interactions arose because of sex differences in task performance during static trials. To test this, for each presentation mode, corrected Garner interference scores were entered into separate 2 (Participant Sex: Women, Men) × 2 (Task: Identity Judgments, Expression Judgments) ANOVAs, with repeated measures on the last factor. With static faces, we observed a main effect of Task, F(1, 38) = 11.10, p = 0.002, ηp2 = 0.23, and a Task × Participant Sex interaction, F(1, 38) = 6.53, p < 0.02, ηp2 = 0.15. Followup tests on the interaction revealed that only women showed an asymmetrical pattern of interference, experiencing four times as much interference from the irrelevant dimension when processing expression than when processing identity, t(19) = 3.61, p = 0.002. Men experienced similar levels of interference during the two tasks, and their performance was comparable to that of women completing the Identity task. In striking contrast to the results that we observed with static stimuli, the analysis conducted on data from dynamic trials did not reveal any significant main effects or interactions. 
In additional followup tests, we compared static and dynamic trials for each task, for both men and women. Women experienced less interference in the dynamic compared to the static presentation mode when making expression judgments, t(19) = 4.83, p < 0.001, while men experienced less interference in the dynamic than in the static presentation mode when making identity judgments, t(19) = 3.70, p < 0.003. 
In order to determine whether women or men experienced significant levels of Garner interference in any of the testing conditions, we conducted planned one-sample t-tests (comparing corrected Garner interference scores to zero). These tests confirmed that women and men showed significant interference effects for both tasks when viewing static stimuli, t(19) ≥ 2.78, p < 0.02, in all cases. Only men experienced significant interference when processing dynamic faces; this was the case whether men were making identity or expression judgments, t(19) ≥ 2.19, p < 0.05, in both cases (see Figure 5). 
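A one-sample t-test against zero of this kind can be reproduced with standard tools; the interference scores below are invented for illustration only.

```python
import numpy as np
from scipy.stats import ttest_1samp

# Hypothetical corrected interference scores (% change) for one condition.
scores = np.array([4.2, 7.9, 1.3, 6.5, 3.8, 9.1, 2.2, 5.0])

# Test whether mean interference differs reliably from zero.
t_stat, p_value = ttest_1samp(scores, popmean=0.0)
print(f"t({scores.size - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
```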
Supplementary analyses
As has been noted elsewhere (Garner, 1983; Schweinberger & Soukup, 1998), susceptibility to interference may depend on the relative speed with which relevant and irrelevant cues can be extracted. To explore the impact that this might have had on the present findings, we conducted supplementary analyses on the median RT data. 
The first set of comparisons was designed to explore the possibility that differences in the relative speed of cue extraction could explain why women showed an asymmetrical pattern of interference in the static condition. As can be seen in Figure 4, there were large individual differences in the relative speed of cue extraction during static baseline testing. Inspection of the data revealed that 13 participants showed an “expression advantage” (i.e., they judged expression faster than identity), two showed no advantage, and 24 showed an “identity advantage” (i.e., they judged identity faster than expression). If relative ease of cue extraction was a key factor determining interference levels then, in the Identity task, those showing an identity advantage might be able to make identity judgments before expression processing could cause substantial interference, whereas in the Expression task identity processing should interfere substantially with expression processing (i.e., they should show an asymmetrical pattern of interference). The opposite should be true for those showing an expression advantage: these participants should be able to make expression judgments before identity processing could interfere, but not the reverse. Only 15 of the 24 participants showing an identity advantage (seven men, eight women) showed less interference in the Identity task than in the Expression task. Only two of the 13 participants showing an expression advantage (both male) showed less interference in the Expression task than in the Identity task. Together, these results suggest that, while differences in relative ease of cue extraction might account (at least in part) for the behavior of a portion of our sample during the static trials (17 of 40 individuals), they do not adequately account for the behavior of the remaining participants. 
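The classification logic used in this analysis can be expressed compactly. In the sketch below, the baseline RT table, its column names, and its values are all hypothetical.

```python
import pandas as pd

# Hypothetical static-baseline median RTs (ms) per participant and task.
baseline = pd.DataFrame({
    "participant":   [1, 2, 3],
    "identity_rt":   [640, 710, 675],
    "expression_rt": [700, 705, 660],
})

def advantage(row):
    # Faster identity judgments -> "identity advantage", and vice versa.
    diff = row["expression_rt"] - row["identity_rt"]
    if diff > 0:
        return "identity advantage"
    if diff < 0:
        return "expression advantage"
    return "no advantage"

baseline["advantage"] = baseline.apply(advantage, axis=1)
print(baseline[["participant", "advantage"]])
```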
Using a similar logic to that described above, we explored the possibility that the smaller Garner interference scores seen with dynamic faces might reflect differences in the relative ease of cue extraction between static and dynamic testing conditions. First, we considered the 16 participants who showed an identity advantage during static baseline trials that was amplified during dynamic baseline trials. During the Identity task, we might expect these participants to exhibit less interference from expression with moving faces than with photographs, compared to that seen in participants with a static identity advantage who did not show amplification (n = 8). What we found, however, was that both groups showed an equivalent drop in interference from expression when viewing moving as opposed to static faces (Mode main effect, F(1, 22) = 17.9, p < 0.001, ηp2 = 0.45). 
Next, we considered the participants who showed an expression advantage during static baseline trials that was amplified during dynamic baseline trials. During the Expression task, we might expect these participants to exhibit less interference from identity with moving faces than with photographs. Only one individual (a woman) showed amplification of her static expression advantage when tested with dynamic faces, and, in this case, the interference from identity was indeed lower in the dynamic than in the static condition. However, it is noteworthy that 11 of the remaining 12 participants who showed an expression advantage during static baseline trials actually showed an identity advantage with moving faces, and these individuals also experienced significantly less interference from identity when viewing dynamic compared to static faces, paired samples t-test, t(10) = 4.4, p = 0.001. 
Discussion
The goals of this study were to determine if the addition of dynamic facial cues would affect the processing of identity and/or expression, and whether men and women would respond similarly to this manipulation. The results revealed three important findings: (a) there was no dynamic advantage in either the identity-matching task or in the baseline blocks of the Garner task in terms of RT; (b) bidirectional Garner interference effects occurred between identity and expression processing in the static presentation mode but were minimal for dynamic faces; and (c) in the static condition, only women showed an asymmetrical pattern of Garner interference, with changes in identity affecting expression judgments more than the reverse. Each of these results is discussed, in turn, below. 
Identity-matching and baseline performance with static and dynamic faces
The fact that we found little evidence of a dynamic advantage with regard to RT in the matching task, or during the baseline block of Garner's paradigm, was not surprising given that participants were able to perform both tasks with a high level of accuracy. This may have been because the static images we presented were nondegraded and showed fully developed, intense emotional expressions (i.e., they captured the apex of each expression). Under these conditions it may be difficult to demonstrate a dynamic advantage (Fiorentini & Viviani, 2011; Lander et al., 1999). Key information needed to extract emotion and facial structure would have been immediately available to observers in these images, potentially making the processing of additional information from dynamic cues unnecessary. With dynamic displays, in contrast, the apex of the expression was reached some time after stimulus onset—a fact that may explain why RTs during dynamic trials were significantly longer than those seen during static trials. Interestingly, however, participants did not appear to wait until the apex of an expression was reached before initiating their responses on dynamic trials (see Fiorentini & Viviani, 2011, for a similar result). Indeed, responses on these trials were completed (on average) either before the apex of the expression or shortly thereafter (within 200 ms), suggesting that participants would have initiated their responses when the expressions were more subtle than those captured in the static stimuli. The movement of facial features may compensate for the “incompleteness” of an unfolding expression (Fiorentini & Viviani, 2011); this may explain why we see a dynamic advantage with subtle expressions (Ambadar et al., 2005). 
Interference effects with static and dynamic faces
Like previous research in adults (Ganel & Goshen-Gottstein, 2004), we found significant interference effects for static stimuli when participants classified faces according to either identity or expression. With static stimuli, then, participants could not avoid computing expression when processing identity, or vice versa. These findings suggest that these aspects of face processing are interdependent, at least when the faces are somewhat familiar to viewers (see Ganel & Goshen-Gottstein, 2004). 
In contrast to the results with static stimuli, we found little evidence of interference from irrelevant facial cues with dynamic faces—a result which, traditionally, would be interpreted as evidence for functional independence in the processing of those cues (Ganel & Goshen-Gottstein, 2004). This result was somewhat surprising as it seems reasonable to expect that facial motion signatures could provide useful, supplemental cues to facial identity, and that motion-enhanced recovery of the three-dimensional facial structure could improve one's ability to predict possible constraints on the way a given face moves (Calder & Young, 2005; Ganel & Goshen-Gottstein, 2004; O'Toole, Roark, & Abdi, 2002). As dynamic faces should convey richer information about identity and expression than static images, viewers might find both types of cues more salient (i.e., more difficult to ignore) in dynamic faces, regardless of the task. Consistent with this, researchers have found greater interference from facial identity during the processing of facial speech with dynamic than with static displays (Kaufmann & Schweinberger, 2005). The fact that we observed a different pattern in the present study (with interference being significantly more evident with static than dynamic stimuli) supports the view that facial expressions are processed rather differently from facial speech (Fodor, 1983; Liberman & Mattingly, 1985). It remains to be seen if expression and speech processing interact with identity coding in different ways when dynamic stimuli are used. 
The differences in interference effects we observed between static and dynamic faces suggest that photographs of faces are processed quite differently than dynamic faces. It could be that the use of photographs of social stimuli such as faces in research studies biases viewers to adopt strategies specifically evolved for processing images rather than for dealing with complex, naturally moving stimuli. In this regard, there is a large body of evidence suggesting that static faces are processed using a global (holistic) approach (see Farah, Wilson, Drain, & Tanaka, 1998). One possible interpretation of our results is that the use of a global approach when processing static faces makes it difficult for viewers to ignore task-irrelevant information, leading to significant interference effects. If the introduction of dynamic cues caused participants to shift their attention to local facial features and ignore the global context, then this might explain the reduced interference scores we saw with dynamic displays. In support of this idea, Xiao, Quinn, Ge, and Lee (2012) showed that participants are better able to decompose faces into parts when processing dynamic as compared to static stimuli. In related work, Loucks and Baldwin (2009) found that the processing of local (small-scale) actions (i.e., “featural information in action,” p. 87) is elevated relative to the processing of global movement patterns when viewers watch dynamic scenes depicting whole-body human actions. Loucks (2011) later reported that this was not the case with static displays. If increased reliance on a local processing strategy results in greater resistance to interference between identity and expression processing, it appears that this comes at a cost—specifically, an overall increase in processing time. This result is consistent with other research showing that the use of a parts-based strategy can disrupt some aspects of performance (Macrae & Lewis, 2002; Marzi & Viggiano, 2011). 
Although our experiment does not directly address the question of whether there is a shift toward feature-based strategies (or, perhaps, toward the use of a more balanced or flexible approach) while processing dynamic facial cues, this would be a plausible mechanism through which a reduction in interference between the processing of identity and expression cues could be achieved. Specifically, switching to a feature-based processing approach may alter activation patterns within the broader face-processing network, minimizing the involvement of areas specialized for the global processing of facial cues. If such a shift did occur, it might underlie the increase in RTs seen in dynamic testing conditions. 
While concluding that the reduction in interference with dynamic faces arises from enhanced attention to features would be consistent with the traditional interpretation of Garner interference, adopting this classic interpretation may be attractive simply because it fits with the popular view that the face processing system has a modular structure (see Calder & Young, 2005). Another way to interpret the negligible interference seen with dynamic displays is that viewers are better able to integrate multiple facial cues when viewing naturalistic, moving stimuli. Evidence supporting this idea comes from studies examining how invariant and changeable facial cues in dynamic displays interact. In a series of experiments, Knappmeyer et al. (2003) exposed participants to both form (i.e., identity) and nonrigid motion (i.e., expression) in morphed faces that represented a continuous transition between the identities of two learned faces. Characteristic, nonrigid facial motion associated with a particular form biased participants' identity decisions, suggesting that integration occurs when processing these two types of facial information in dynamic displays. Other research suggests that facial form and motion information are integrated in the superior temporal sulcus (STS) (Puce et al., 2003), with nonrigid and rigid biological motion eliciting differential activation in an anterior-posterior gradient in the STS (Grèzes et al., 2001). Furthermore, dynamic faces produce more robust activation in multiple parts of the core and extended face processing systems than static faces (e.g., Kessler et al., 2011; Kilts, Egan, Gideon, Ely, & Hoffman, 2003; Schultz & Pilz, 2009). Together, these results suggest that the availability of motion cues may cause a shift in the way that different parts of the face processing and social cognition networks work together, which could explain why RTs were longer in the dynamic conditions. The notion that identity and expression cues may be integrated more successfully with dynamic than with static stimuli, resulting in reduced interference, is consistent with Calder and Young's (2005) idea that separation between regions involved in processing invariant and changeable aspects of faces is relative rather than absolute. 
We have suggested here that interference between the processing of different facial cues may be reduced or eliminated when dynamic stimuli are being viewed if participants focus more selectively on specific facial features, or if they integrate multiple cues more effectively. Both of these ideas hold merit and, indeed, it is possible that individual differences in processing style determine which strategy a given viewer will adopt. Future research should explore this possibility. It will also be interesting, and important, to determine whether characteristics of the stimulus face not explored here, such as the particular expression that is displayed, impact performance on identity-matching and Garner tasks. 
Sex differences in static face processing
The last major point we want to address relates to the fact that only women showed an asymmetrical pattern of interference when viewing static photographs of faces, with changes in identity affecting expression judgments more than the opposite. Men, in contrast, showed equivalent levels of interference from irrelevant cues in both tasks. Based on work with static stimuli, some researchers have argued that identity coding is obligatory but that expression coding is not (Palermo & Rhodes, 2007). If this applies more to women than men, it could explain the sex difference in interference we observed in the Expression task. Women may be more attentive to multiple nonverbal cues when processing another individual's emotional state in an effort to improve their ability to infer the mental states of others, and/or to empathize with them. If so, this may explain (in part) the fact that women and men show different patterns of neural activation when attempting to solve emotion tasks, including those requiring the recognition of facial expressions in photographs (Alaerts, Nackaerts, Meyns, Swinnen, & Wenderoth, 2011; Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001; Derntl et al., 2010). 
Some have argued that the extraction of identity involves global processing, while extraction of expression requires greater attention to local features (Lipp, Price, & Tellegen, 2009; Song & Hakoda, 2012), at least in static displays. If, as suggested above, women are more likely than men to process identity and expression information while making their expression judgments, interference could arise from the simultaneous, or sequential, application of these two different processing strategies. Some preliminary evidence supporting this idea comes from the work of Proverbio and colleagues (Proverbio, Brignone, Matarazzo, Del Zotto, & Zani, 2006; Proverbio, Riva, Martin, & Zani, 2010) with face stimuli, and Kimchi, Amishav, and Sulitzeanu-Kenan (2009) with nonface stimuli. 
It is important to remember that the sex difference we observed with static faces disappeared when dynamic cues were available. However, this does not necessarily mean that men and women are using similar approaches, or neural networks, to process dynamic facial information. Research involving other types of dynamic social stimuli speaks to this issue. For example, Pavlova, Guerreschi, Lutzenberger, Sokolov, and Krageloh-Mann (2010) showed that, although women and men both exhibited ceiling-level performance in their ability to discriminate between random and “social” interactions between geometric shapes on the basis of their movement patterns, robust sex differences were apparent in the induced oscillatory response to these displays over left prefrontal cortex—a region implicated in perceptual decision making (Heekeren, Marrett, & Ungerleider, 2008). Pavlova et al. (2010) speculated that women anticipate social interactions—predicting others' actions ahead of their realization—whereas men require accumulation of more sensory evidence before making decisions regarding the social meaning of actions. Amunts et al. (2007) have also shown sex differences in the cytoarchitecture of motion-sensitive complexes, and suggest that these brain structures work together in different ways in men and women to produce the same kind of behavioral performance in tasks involving the processing of motion. Given these findings, it might be interesting to explore differences in brain activation patterns during baseline and orthogonal blocks while women and men process identity and expression in static and dynamic facial images. In carrying out this work, it would be interesting to determine whether sex differences in activation vary depending on characteristics of the face stimuli, such as the sex of the individuals depicted (recall that in the present study only female faces were used). Others have reported that a female advantage for static expression processing is particularly evident when the faces being viewed are female (see Herlitz & Rehnman, 2009). Investigations of this sort may shed light on factors underlying inter- and intra-individual differences in performance on face processing tasks. 
Conclusions
We have shown that different patterns of interference between the processing of facial identity and expression are seen with static and dynamic faces (with interdependence seen in the former but not the latter case). It may be that, in order to make accurate judgments regarding complex, dynamic stimuli, viewers must focus their attention on task-relevant cue(s) to a greater extent than they do with static stimuli, resulting in less interference. This could be achieved by increasing their reliance on a local processing strategy. Alternatively, the lack of interference effects with dynamic faces may suggest that identity and expression information are being integrated more efficiently in moving faces. The fact that, with static displays, sex differences were observed in interference from identity on expression judgments suggests that men and women may use different strategies (or combinations of strategies) to extract various facial cues from photographs. Our findings highlight the importance of using ecologically relevant, dynamic facial stimuli (e.g., Kilts et al., 2003), and of considering participant sex when characterizing mechanisms involved in face perception. 
Supplementary Materials
Acknowledgments
This research formed part of the doctoral dissertation of B. M. Stoesz, who was supported by a Natural Sciences and Engineering Research Council of Canada (NSERC) studentship, a University of Manitoba Graduate Fellowship (UMGF), and a Manitoba Institute for Child Health (MICH) Travel Award. The research was funded by a grant to L. Jakobson from NSERC. We would like to thank Sophia Quan and Sarah Rigby for their help with data collection, and two anonymous reviewers for their helpful comments. 
Commercial relationships: none. 
Corresponding authors: Brenda M. Stoesz, Lorna S. Jakobson. 
Email: sbrenda@mymts.net; Lorna.Jakobson@ad.umanitoba.ca. 
Address: Department of Psychology, University of Manitoba, Winnipeg, Manitoba, Canada. 
References
Alaerts K. Nackaerts E. Meyns P. Swinnen S. P. Wenderoth N. (2011). Action and emotion recognition from point light displays: An investigation of gender differences. PLoS One, 6 (6), e20989, doi:10.1371/journal.pone.0020989.
Ambadar Z. Schooler J. W. Cohn J. F. (2005). Deciphering the enigmatic face: The importance of facial dynamics in interpreting subtle facial expressions. Psychological Science, 16 (5), 403–410, doi:10.1111/j.0956-7976.2005.01548.x. [CrossRef] [PubMed]
Amunts K. Armstrong E. Malikovic A. Homke L. Mohlberg H. Schleicher A. (2007). Gender-specific left-right asymmetries in human visual cortex. Journal of Neuroscience, 27 (6), 1356–1364, doi:27/6/1356 [pii];10.1523/JNEUROSCI.4753-06.2007. [PubMed]
Arsalidou M. Morris D. Taylor M. J. (2011). Converging evidence for the advantage of dynamic facial expressions. Brain Topography, 24 (2), 149–163, doi:10.1007/s10548-011-0171-4. [CrossRef] [PubMed]
Baron-Cohen S. Wheelwright S. Hill J. Raste Y. Plumb I. (2001). The “Reading the Mind in the Eyes” Test revised version: A study with normal adults, and adults with Asperger syndrome or high-functioning autism. Journal of Child Psychology and Psychiatry, 42 (2), 241–251, doi:10.1111/1469-7610.00715. [CrossRef] [PubMed]
Barton J. J. (2003). Disorders of face perception and recognition. Neurologic Clinics, 21 (2), 521–548, doi:10.1016/S0733-8619(02)00106-8. [CrossRef] [PubMed]
Bassili J. N. (1978). Facial motion in the perception of faces and of emotional expression. Journal of Experimental Psychology: Human Perception and Performance, 4 (3), 373–379. [CrossRef] [PubMed]
Baudouin J. Y. Martin F. Tiberghien G. Verlut I. Franck N. (2002). Selective attention to facial emotion and identity in schizophrenia. Neuropsychologia, 40 (5), 503–511, doi:S0028393201001142 [pii]. [CrossRef] [PubMed]
Beauchamp M. H. Anderson V. (2010). SOCIAL: an integrative framework for the development of social skills. Psychological Bulletin, 136 (1), 39–64. [CrossRef] [PubMed]
Bould E. Morris N. (2008). Role of motion signals in recognizing subtle facial expressions of emotion. British Journal of Psychology, 99 (Pt 2), 167–189, doi:10.1348/000712607X206702. [CrossRef] [PubMed]
Bruce V. Young A. (1986). Understanding face recognition. British Journal of Psychology, 77 (Pt 3), 305–327. [CrossRef] [PubMed]
Calder A. J. Young A. W. (2005). Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience, 6 (8), 641–651, doi:nrn1724 [pii];10.1038/nrn1724. [CrossRef] [PubMed]
Christie F. Bruce V. (1998). The role of dynamic information in the recognition of unfamiliar faces. Memory and Cognition, 26 (4), 780–790. [CrossRef] [PubMed]
Derntl B. Finkelmeyer A. Eickhoff S. Kellermann T. Falkenberg D. I. Schneider F. (2010). Multidimensional assessment of empathic abilities: Neural correlates and gender differences. Psychoneuroendocrinology, 35 (1), 67–82, doi:S0306-4530(09)00315-1 [pii];10.1016/j.psyneuen.2009.10.006. [PubMed]
Farah M. J. Wilson K. D. Drain M. Tanaka J. N. (1998). What is “special” about face perception? Psychological Review, 105 (3), 482–498, doi:10.1037/0033-295X.105.3.482. [CrossRef] [PubMed]
Fiorentini C. Viviani P. (2011). Is there a dynamic advantage for facial expressions? Journal of Vision, 11 (3): 17, 1–15, http://www.journalofvision.org/content/11/3/17, doi:10.1167/11.3.17. [PubMed] [Article] [CrossRef] [PubMed]
Fodor J. A. (1983). The modularity of mind: An essay on faculty psychology. Cambridge, MA: MIT Press.
Fox C. J. Moon S. Y. Iaria G. Barton J. J. (2009). The correlates of subjective perception of identity and expression in the face network: An fMRI adaptation study. NeuroImage, 44 (2), 569–580, doi:S1053-8119(08)00991-9 [pii];10.1016/j.neuroimage.2008.09.011. [CrossRef] [PubMed]
Ganel T. Goshen-Gottstein Y. (2004). Effects of familiarity on the perceptual integrality of the identity and expression of faces: The parallel-route hypothesis revisited. Journal of Experimental Psychology: Human Perception and Performance, 30 (3), 583–597, doi:10.1037/0096-1523.30.3.583. [CrossRef] [PubMed]
Ganel T. Goshen-Gottstein Y. Goodale M. A. (2005). Interactions between the processing of gaze direction and facial expression. Vision Research, 45 (9), 1191–1200, doi:10.1016/j.visres.2004.06.025. [CrossRef] [PubMed]
Garner W. R. (1976). Interaction of stimulus dimensions in concept and choice processes. Cognitive Psychology, 8, 98–123. [CrossRef]
Garner W. R. (1983). Asymmetric interactions of stimulus dimensions in perceptual information processing. In Tighe T. J. Shepp B. E. (Eds.), Perception, cognition, and development: Interactional analyses. (pp. 1–38). Hillsdale, NJ: Erlbaum.
Godard O. Fiori N. (2012). Sex and hemispheric differences in facial invariants extraction. Laterality, 17 (2), 202–216, doi:10.1080/1357650X.2011.556641. [PubMed]
Grèzes J. Fonlupt P. Bertenthal B. Delon-Martin C. Segebarth C. Decety J. (2001). Does perception of biological motion rely on specific brain regions? NeuroImage, 13 (5), 775–785, doi:10.1006/nimg.2000.0740. [CrossRef] [PubMed]
Haxby J. V. Hoffman E. A. Gobbini M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4 (6), 223–233. [CrossRef] [PubMed]
Heekeren H. R. Marrett S. Ungerleider L. G. (2008). The neural systems that mediate human perceptual decision making. Nature Reviews Neuroscience, 9 (6), 467–479, doi:10.1038/nrn2374 [CrossRef] [PubMed]
Herlitz A. Rehnman J. (2009). Sex differences in episodic memory. Current Directions in Psychological Science, 17, 52–56, doi:10.1111/j.1467-8721.2008.00547.x. [CrossRef]
Horstmann G. Ansorge U. (2009). Visual search for facial expressions of emotions: A comparison of dynamic and static faces. Emotion, 9 (1), 29–38. [CrossRef] [PubMed]
Kamachi M. Bruce V. Mukaida S. Gyoba J. Yoshikawa S. Akamatsu S. (2001). Dynamic properties influence the perception of facial expressions. Perception, 30 (7), 875–887, doi:10.1068/p3131. [CrossRef] [PubMed]
Kaufmann J. M. Schweinberger S. R. (2005). Speaker variations influence speechreading speed for dynamic faces. Perception, 34 (5), 595–610, doi:10.1068/p5104. [CrossRef] [PubMed]
Kessler H. Doyen-Waldecker C. Hofer C. Hoffmann H. Traue H. C. Abler B. (2011). Neural correlates of the perception of dynamic versus static facial expressions of emotion. GMS Psycho-Social-Medicine, 8, doi:10.3205/psm000072.
Kilts C. D. Egan G. Gideon D. A. Ely T. D. Hoffman J. M. (2003). Dissociable neural pathways are involved in the recognition of emotion in static and dynamic facial expressions. NeuroImage, 18 (1), 156–168, doi:S1053811902913236 [pii]. [CrossRef] [PubMed]
Kimchi R. Amishav R. Sulitzeanu-Kenan A. (2009). Gender differences in global-local perception? Evidence from orientation and shape judgments. Acta Psychologica (Amsterdam), 130 (1), 64–71, doi:10.1016/j.actpsy.2008.10.002. [CrossRef]
Knappmeyer B. Thornton I. M. Bülthoff H. H. (2003). The use of facial motion and facial form during the processing of identity. Vision Research, 43 (18), 1921–1936, doi:S0042698903002360 [pii]. [CrossRef] [PubMed]
Knight B. Johnston A. (1997). The role of movement in face recognition. Visual Cognition, 4 (3), 265–273.
Krebs J. F. Biswas A. Pascalis O. Kamp-Becker I. Remschmidt H. Schwarzer G. (2011). Face processing in children with autism spectrum disorder: Independent or interactive processing of facial identity and facial expression? Journal of Autism and Developmental Disorders, 41 (6), 796–804, doi:10.1007/s10803-010-1098-4.
Lander K. Bruce V. (2000). Recognizing famous faces: Exploring the benefits of facial motion. Ecological Psychology, 12, 259–272, doi:10.1207/S15326969ECO1204_01.
Lander K. Bruce V. Hill H. (2001). Evaluating the effectiveness of pixelation and blurring on masking the identity of familiar faces. Applied Cognitive Psychology, 15, 101–116, doi:10.1002/1099-0720(200101/02)15:1<101::AID-ACP697>3.0.CO;2-7.
Lander K. Christie F. Bruce V. (1999). The role of movement in the recognition of famous faces. Memory and Cognition, 27 (6), 974–985.
Liberman A. M. Mattingly I. G. (1985). The motor theory of speech perception revised. Cognition: International Journal of Cognitive Science, 21 (1), 1–36.
Lipp O. V. Price S. M. Tellegen C. L. (2009). No effect of inversion on attentional and affective processing of facial expressions. Emotion, 9 (2), 248–259, doi:10.1037/a0014715.
Loucks J. (2011). Configural information is processed differently in human action. Perception, 40 (9), 1047–1062, doi:10.1068/p7084.
Loucks J. Baldwin D. (2009). Sources of information for discriminating dynamic human actions. Cognition: International Journal of Cognitive Science, 111 (1), 84–97, doi:10.1016/j.cognition.2008.12.010.
Macrae C. N. Lewis H. L. (2002). Do I know you? Processing orientation and face recognition. Psychological Science, 13 (2), 194–196, doi:10.1111/1467-9280.00436.
Marzi T. Viggiano M. P. (2011). Temporal dynamics of face inversion at encoding and retrieval. Clinical Neurophysiology, 122 (7), 1360–1370, doi:10.1016/j.clinph.2010.11.017.
McBain R. Norton D. Chen Y. (2009). Females excel at basic face perception. Acta Psychologica (Amsterdam), 130 (2), 168–173, doi:10.1016/j.actpsy.2008.12.005.
McClure E. B. (2000). A meta-analytic review of sex differences in facial expression processing and their development in infants, children, and adolescents. Psychological Bulletin, 126 (3), 424–453.
Megreya A. M. Bindemann M. Havard C. (2011). Sex differences in unfamiliar face identification: Evidence from matching tasks. Acta Psychologica (Amsterdam), 137 (1), 83–89, doi:10.1016/j.actpsy.2011.03.003.
O'Toole A. J. Roark D. A. Abdi H. (2002). Recognizing moving faces: A psychological and neural synthesis. Trends in Cognitive Sciences, 6 (6), 261–266.
Palermo R. Rhodes G. (2007). Are you always on my mind? A review of how face perception and attention interact. Neuropsychologia, 45 (1), 75–92, doi:10.1016/j.neuropsychologia.2006.04.025.
Pavlova M. Guerreschi M. Lutzenberger W. Sokolov A. N. Krageloh-Mann I. (2010). Cortical response to social interaction is affected by gender. NeuroImage, 50 (3), 1327–1332, doi:10.1016/j.neuroimage.2009.12.096.
Pike G. E. Kemp R. I. Towell N. A. Phillips K. C. (1997). Recognizing moving faces: The relative contribution of motion and perspective view information. Visual Cognition, 4 (4), 409–437, doi:10.1080/713756769.
Pilz K. S. Thornton I. M. Bülthoff H. H. (2006). A search advantage for faces learned in motion. Experimental Brain Research, 171 (4), 436–447.
Proverbio A. M. Brignone V. Matarazzo S. Del Z. M. Zani A. (2006). Gender differences in hemispheric asymmetry for face processing. BMC Neuroscience, 7, 44, doi:10.1186/1471-2202-7-44.
Proverbio A. M. Riva F. Martin E. Zani A. (2010). Face coding is bilateral in the female brain. PLoS One, 5 (6), e11242, doi:10.1371/journal.pone.0011242.
Puce A. Syngeniotis A. Thompson J. C. Abbott D. F. Wheaton K. J. Castiello U. (2003). The human temporal lobe integrates facial form and motion: Evidence from fMRI and ERP studies. NeuroImage, 19 (3), 861–869, doi:10.1016/S1053-8119(03)00189-7.
Schultz J. Pilz K. S. (2009). Natural facial motion enhances cortical responses to faces. Experimental Brain Research, 194 (3), 465–475, doi:10.1007/s00221-009-1721-9.
Schweinberger S. Soukup G. (1998). Asymmetric relationships among perceptions of facial identity, emotion, and facial speech. Journal of Experimental Psychology: Human Perception and Performance, 24 (6), 1748–1765, doi:10.1037/0096-1523.24.6.1748.
Schweinberger S. R. Burton A. M. Kelly S. W. (1999). Asymmetric dependencies in perceiving identity and emotion: Experiments with morphed faces. Perception & Psychophysics, 61 (6), 1102–1115.
Song Y. Hakoda Y. (2012). Selective attention to facial emotion and identity in children with autism: Evidence for global identity and local emotion. Autism Research, 5 (4), 282–285, doi:10.1002/aur.1242.
Spangler S. M. Schwarzer G. Korell M. Maier-Karius J. (2010). The relationships between processing facial identity, emotional expression, facial speech, and gaze direction during development. Journal of Experimental Child Psychology, 105 (1–2), 1–19, doi:10.1016/j.jecp.2009.09.003.
Thornton I. M. Kourtzi Z. (2002). A matching advantage for dynamic human faces. Perception, 31 (1), 113–132.
Wehrle T. Kaiser S. Schmidt S. Scherer K. R. (2000). Studying the dynamics of emotional expression using synthesized facial muscle movements. Journal of Personality and Social Psychology, 78 (1), 105–119, doi:10.1037/0022-3514.78.1.105.
Xiao N. G. Quinn P. C. Ge L. Lee K. (2012). Rigid facial motion influences featural, but not holistic, face processing. Vision Research, 57, 26–34, doi:10.1016/j.visres.2012.01.015.
Footnotes
1  Sometimes a correlated block is also included, in which each level of the relevant dimension (e.g., each identity in an identity judgment task) is linked with only one level of the irrelevant dimension (e.g., a particular facial expression). Researchers have argued that performance in correlated blocks does not reveal whether two facial dimensions are dependent or independent; rather, it appears to be strongly affected by differences in discriminability and to reflect decisional strategies rather than the perceptual relationship between the two facial cues (Schweinberger & Soukup, 1998). For this reason, researchers have either discarded the data from correlated blocks (Schweinberger & Soukup, 1998) or have not included these blocks in their experimental designs at all (Ganel & Goshen-Gottstein, 2002).
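To make the block types in footnote 1 concrete, the sketch below generates trial lists for baseline, filtering (orthogonal), and correlated blocks of an identity judgment task. It is only an illustration: the block lengths, randomization scheme, and the particular identity-expression pairing are assumptions for demonstration, not the study's actual design.

```python
import itertools
import random

# Two-level dimensions as in the present experiments; used here as placeholders.
IDENTITIES = ["Jane", "Anne"]
EXPRESSIONS = ["anger", "surprise"]

def baseline_block(fixed_expression="anger", n_reps=8):
    """Baseline: the irrelevant dimension (expression) is held constant,
    so only identity varies across trials."""
    trials = [(identity, fixed_expression) for identity in IDENTITIES] * n_reps
    random.shuffle(trials)
    return trials

def filtering_block(n_reps=4):
    """Filtering (orthogonal): both dimensions vary independently,
    so every identity appears with every expression."""
    trials = list(itertools.product(IDENTITIES, EXPRESSIONS)) * n_reps
    random.shuffle(trials)
    return trials

def correlated_block(n_reps=8):
    """Correlated (footnote 1): each identity is linked with exactly one
    expression, making the irrelevant cue perfectly redundant."""
    pairing = dict(zip(IDENTITIES, EXPRESSIONS))  # e.g., Jane-anger, Anne-surprise
    trials = [(identity, pairing[identity]) for identity in IDENTITIES] * n_reps
    random.shuffle(trials)
    return trials
```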
2  Note that submitting uncorrected Garner interference scores to the same analysis produced a very similar pattern of results. Given the differences in baseline performance across conditions, however, interpretation of the corrected scores is more straightforward and was, therefore, preferred.
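As an illustration of the distinction in footnote 2: uncorrected Garner interference is simply the filtering-block RT minus the baseline-block RT for the same task, whereas a corrected score additionally adjusts for baseline speed. The proportional form sketched below is an assumption for demonstration only (the study's exact correction is defined earlier in the paper), but it shows why correction matters when baselines differ across conditions.

```python
def garner_interference(baseline_rt_ms, filtering_rt_ms, corrected=True):
    """Garner interference from median correct RTs (ms).

    Uncorrected: filtering - baseline.
    Corrected (assumed proportional form, for illustration only):
    100 * (filtering - baseline) / baseline, i.e., interference expressed
    as a percentage of baseline RT.
    """
    diff = filtering_rt_ms - baseline_rt_ms
    return 100.0 * diff / baseline_rt_ms if corrected else diff

# A hypothetical participant with a 620 ms baseline and a 665 ms filtering RT
# shows 45 ms uncorrected interference, or about 7.3% after correction.
print(garner_interference(620, 665, corrected=False))  # 45
print(round(garner_interference(620, 665), 1))         # 7.3
```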
Figure 1
 
Presentation sequence for the identity-matching task. Participants viewed two simultaneously presented static images (static condition) or dynamic sequences (dynamic condition) for 1040 ms. On each trial, participants made a same or different judgment via a key press, using their index fingers. Static face stimuli were supplied by researchers at the Max Planck Institute for Biological Cybernetics, Germany, and have been described in previous work (Pilz et al., 2006).
Figure 2
 
Garner classification tasks. Participants viewed a static image (static condition) or a dynamic sequence (dynamic condition) for 1040 ms and responded with a key press using their index fingers. For the Identity judgment task, participants determined whether the face belonged to “Jane” or to “Anne.” For the Expression judgment task, participants determined whether the expression was one of anger or surprise. Static face stimuli were supplied by researchers at the Max Planck Institute for Biological Cybernetics, Germany, and have been described in previous work (Pilz et al., 2006).
Figure 3
 
Median correct RT (ms) in the static and dynamic conditions of the identity-matching task. Dots represent data from individual participants, while bars represent group means.
Figure 4
 
Median correct RT (ms) in the baseline conditions of the Garner tasks. Dots represent data from individual participants, while bars represent group means.
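As a concrete reading of Figures 3 and 4: each dot is one participant's median RT over correct trials in a condition, and each bar is the mean of those medians across participants. A minimal pandas sketch of that two-step summary follows; all data values and column names are hypothetical.

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "condition":   ["static", "static", "dynamic"] * 2,
    "correct":     [True, True, False, True, True, True],
    "rt_ms":       [702, 688, 910, 655, 640, 731],
})

# Median RT over correct trials only, per participant and condition
# (the dots in the figures) ...
per_subject = (trials[trials["correct"]]
               .groupby(["participant", "condition"])["rt_ms"]
               .median()
               .reset_index())

# ... then the group means over participants (the bars).
group_means = per_subject.groupby("condition")["rt_ms"].mean()
print(per_subject)
print(group_means)
```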
Figure 5
 
Women's and men's mean corrected Garner interference scores in Identity and Expression Tasks for static and dynamic presentation modes. Dots represent data from individual participants, while bars represent group means.