Article | December 2013
Contrast negation differentiates visual pathways underlying dynamic and invariant facial processing
Pamela M. Pallett, Ming Meng
Journal of Vision, December 2013, Vol. 13(14):13. doi:https://doi.org/10.1167/13.14.13
Abstract

Bruce and Young (1986) proposed a model for face processing that begins with structural encoding, followed by a split into two processing streams: one for the dynamic aspects of the face (e.g., facial expressions of emotion) and the other for the invariant aspects of the face (e.g., gender, identity). Yet how this is accomplished remains unclear. Here, we took a psychophysical approach using contrast negation to test the Bruce and Young model. Previous research suggests that contrast negation impairs processing of invariant features (e.g., gender) but not dynamic features (e.g., expression). In our first experiment, participants discriminated differences in gender and facial expressions of emotion in upright, inverted, and contrast-negated faces. Results revealed a profound impairment for contrast-negated gender discrimination, whereas expression discrimination remained relatively robust to contrast negation. To test whether this differential effect occurs during perceptual encoding, we conducted three additional experiments in which we measured aftereffects following upright, inverted, or contrast-negated face adaptation for the same discrimination task as in the first experiment. Results showed a mild impairment with contrast negation during perceptual encoding for both gender and expression, followed by a marked gender-specific deficit during contrast-negated face discrimination. Taken together, our results suggest that there are shared neural mechanisms during perceptual encoding, and at least partially separate neural mechanisms during recognition and decision making for dynamic and invariant facial-feature processing.

Introduction
Humans are a highly social species and frequently use the face to communicate social signals. These signals may include dynamic information such as our current emotional state, environmental features capturing our attention (i.e., directed eye gaze), and even facial expressions of language (e.g., speech movements and American Sign Language nonmanuals; see McGurk & MacDonald, 1976; Reilly, McIntire, & Bellugi, 1990). Alternatively, they may be static in nature, such as with gender and identity. When the ability to process these facial cues is diminished, poor face-recognition skills develop, along with impairments in social communication skills such as those observed in autism spectrum disorders (American Psychiatric Association, 2013). However, our understanding of the neural mechanisms subserving face perception is still limited. Bruce and Young (1986) proposed that face processing begins with basic structural encoding (i.e., eyes above a nose above a mouth) and then separates into two divergent pathways, one for processing the dynamic features of the face (e.g., emotion, eye gaze, speech) and the other for processing the invariant features of the face (e.g., identity, gender, race). Consistent with the Bruce and Young model, there is accumulating neuroimaging evidence suggesting a dissociation between the representation of invariant and dynamic aspects of faces (for a review, see Haxby, Hoffman, & Gobbini, 2000). Moreover, this model is supported by studies of prosopagnosia, in which identity recognition is impaired but emotion recognition is usually spared (Duchaine, Germine, & Nakayama, 2007; Duchaine, Murray, Turner, White, & Garrido, 2009; Duchaine, Parker, & Nakayama, 2003; Humphreys, Avidan, & Behrmann, 2007). However, recent research suggests that individuals with developmental prosopagnosia (DP) have great difficulty encoding configural information for both expression and identity. Since the incoming information is compromised, it is possible that the preserved ability in DP to recognize emotion may result from compensatory strategies rather than a separate, and therefore intact, emotion-processing pathway (Palermo et al., 2011).
A few behavioral paradigms have been developed to test the relationship between the processing of dynamic and invariant facial cues. One such method is to measure irrelevant-dimension effects. An irrelevant-dimension effect occurs when the processing of information from one stimulus dimension is altered or impaired by variations in a second, ancillary stimulus dimension. This indicates that the two dimensions are not orthogonal but rather interrelated. In the context of processing facial expressions of emotion and identity, variations in identity decrease accuracy for recognition of facial expressions of emotion and vice versa (Galster, Kahana, Wilson, & Sekuler, 2009; Ganel & Goshen-Gottstein, 2004; Schweinberger & Soukup, 1998; White, 2001). These results suggest that the processing of facial expressions of emotion and identity are interrelated and hence share at least some neural circuitry, which conflicts with Bruce and Young's theory. Yet when irrelevant-dimension effects are measured in the context of adaptation, aftereffects following adaptation to facial expressions of emotion are larger when identity remains constant, whereas the contrary cannot be said for identity (Ellamil, Susskind, & Anderson, 2008; Fox & Barton, 2007). That is, the size of aftereffects resulting from adaptation to identity is not influenced by variations in facial expression of emotion. In contrast to the irrelevant-dimension effect for recognition, these results suggest a unidirectional dependency in which processing of facial expressions of emotion is influenced by the representation of identity. Explaining all these results would require a modified version of Bruce and Young's (1986) model. Accordingly, Calder and Young (2005) have suggested that dynamic- and invariant-feature processing may share neural circuitry before deviating for domain-specific processing. However, it remains unknown which neural circuitry is shared and at what point during face processing the split might occur.
Here we focus on the possible differentiation of face processing for emotional expressions and gender, from the perceptual encoding stage to recognition and decision making. We investigate which of these processing stages involve shared neural circuitry and which are likely to proceed independently. To do this, we use a highly sensitive perceptual-discrimination task to compare dynamic and invariant pathway processing based upon the lexical information conveyed in the face (e.g., angry or happy, male or female). This psychophysical approach allows us to test whether dynamic and invariant facial information is processed jointly or independently at the recognition and decision-making stage. We presented participants with upright, inverted, and contrast-negated faces. Since inversion is thought to impair the encoding of face configuration (i.e., facial-feature arrangement and holistic percept; reviewed in Farah, Wilson, Drain, & Tanaka, 1998; McKone & Yovel, 2009; Young, Hellawell, & Hay, 1987), we expect that it will also impair the processing of dynamic and invariant facial information. This is because our psychological representations of both dynamic and invariant information—for example, expression and identity—generally require some level of configural encoding (see, e.g., Calder & Jansen, 2005; Young et al., 1987). On the other hand, contrast negation (i.e., a contrast reversal or photographic negative) produces faces that are very different from those viewed in daily life (the whites of the eyes are black, the pupils are white, etc.), but still identifiable as faces. Despite being a fully reversible manipulation without any information loss, contrast negation leads to great difficulty in recognizing the identity of a face (see, e.g., Galper, 1970; Gilad, Meng, & Sinha, 2009; Kemp, McManus, & Pigott, 1990; Nederhouser, Yue, Mangini, & Biederman, 2007; White, 2001) but spares the recognition of facial expressions of emotion (White, 2001). These results suggest that contrast negation may only interfere with face recognition after it has separated from expression processing. Notably, Gilad et al. (2009) suggest that ordinal luminance relations between the eyes and their surrounding region are crucial for normal facial processing. Contrast negation destroys these otherwise highly reliable ordinal luminance relations and therefore impairs face processing. Analyzing these ordinal relations involves comparing averaged luminance levels across different facial regions and cannot be accomplished at the stage when only specific local cues are encoded. Consistent with this notion, we hypothesize that if there is a split in processing pathways (i.e., after initial encoding of local features but before recognition and decision making), then contrast negation should impair the discrimination of invariant facial features (e.g., identity and gender) but not necessarily dynamic features (e.g., facial expression). However, if contrast negation impairs the discrimination of both to a comparable level, then this would suggest that the processing of invariant and dynamic facial information may not necessarily be separated.
Of course, it should also be examined whether a split in processing pathways could exist in the initial stages of face processing, that is, during perceptual (holistic) encoding. Holistic processing involves the binding of the internal facial features and their spatial arrangement with the external contour of the face, creating a single face percept (Sergent, 1984). In the Bruce and Young (1986) model, this level of perceptual processing occurs after basic structural encoding. Visual adaptation to faces is one powerful tool with which to study the neuronal basis of perceptual (holistic) encoding (see, e.g., Jiang, Blanz, & O'Toole, 2006; Leopold, O'Toole, Vetter, & Blanz, 2001; Leopold, Rhodes, Mueller, & Jeffery, 2005; Oruç & Barton, 2011; Webster, Kaping, Mizokami, & Duhamel, 2004; reviewed in Webster & MacLeod, 2011). Thus, we also included experiments that measured aftereffects following visual adaptation to a face. Notably, aftereffects reflect the adjustment of sensory neurons during adaptation so as to maintain the perception of the prevailing average sensory experience (Webster, Werner, & Field, 2005). Adaptation with neuronal-response attenuation is found in both low-level visual aftereffects—for example, color, orientation, and spatial frequency (Graham, 1989; Webster, 1996; Webster & Mollon, 1991; Westheimer & Gee, 2002)—and high-level visual aftereffects such as those viewed with faces (see, e.g., Leopold et al., 2001; Rhodes, Jeffery, Watson, Clifford, & Nakayama, 2003; Watson & Clifford, 2003; Webster & MacLin, 1999; reviewed in Webster & MacLeod, 2011). Of particular relevance to the current study is evidence for aftereffects following gender or expression adaptation. In the case of gender, adapting to, for example, a male face will result in the perception of a female in a face that is actually gender-neutral (Webster et al., 2004). Similarly, adapting to, for example, an angry face will result in the perception of happiness in a face that is actually 50% angry and 50% happy (Webster et al., 2004).
Here we test for a differential effect of contrast negation on each processing pathway by measuring aftereffects for the perception of dynamic information (e.g., expression) and invariant information (e.g., gender) following adaptation to either an original or a contrast-negated face. As face aftereffects may reflect adaptation of the neural substrates that encode faces, if a split in processing pathways occurs after perceptual encoding, then aftereffects caused by visual adaptation should be, by and large, equivalent for both types of information regardless of whether the adapting face is upright, inverted, or contrast negated. However, if different effects of adaptation are observed, then a split in processing pathways will likely have occurred during perceptual encoding, and our hypothesis will have been wrong.
An additional feature of the present study, and not present in many of the previously mentioned studies, is that we chose gender as our invariant property rather than identity. While emotions are pervasive and extend from person to person, identity is unique. Thus the exposure to any one identity, particularly an unfamiliar identity in an experiment, is bound to be less than the exposure to different facial expressions of emotion. This is important because experience can alter face-processing ability (see, e.g., Gobbini & Haxby, 2007; Rossion, 2002). Accordingly, when we compare dynamic and invariant facial processing, task familiarity is of great concern, as we would not want differences in experience to confound potential interpretations of the results. By contrast, gender, like emotion, is ubiquitous; yet it is also invariant, like identity. Gender covaries with identity and is proposed to share the same processing pathway as identity, that is, the processing pathway for invariant features (Goshen-Gottstein & Ganel, 2000; Ng, Ciaramitaro, Anstis, Boynton, & Fine, 2006). As with identity, gender recognition is impaired by contrast negation (Bruce & Langton, 1994; Santos & Young, 2008). Thus, we argue that gender is a more appropriate invariant facial feature for testing the Bruce and Young (1986) model.
Experiment 1
Methods
Participants
Twenty-two undergraduates from Dartmouth College participated in exchange for course credit. All participants had normal or corrected-to-normal visual acuity. This research was approved by the Committee for the Protection of Human Subjects at Dartmouth College and conducted in accordance with the 1964 Declaration of Helsinki. 
Stimuli
The stimuli were generated from grayscale photographs of one Caucasian male making three different expressions—happy, angry, and neutral—and a Caucasian female with a neutral expression. These faces were selected from the NimStim database (Tottenham et al., 2009) and were identities 06, 24, 27, and 34, respectively. In order to remove the influence of external facial features such as hair, the upper half of each face was partially framed by the top half of a black oval frame. This partial oval frame occluded the hair and the ears but preserved the external contour for the lower half of the face (Figure 1). Then, using Matlab, we set the mean luminance and root-mean-square contrast of the face portion of the images to be the same. We also ensured that stimuli contained no significant differences in spatial-frequency content and size (9.35° × 14.1°). Thus, any effects observed would be unlikely to have resulted from differences in low-level visual characteristics. From these normalized faces, we created three sets of stimuli, each containing a series of 100 faces, by morphing from neutral to happy, neutral to angry, and male to female. Since there were 100 morph faces in each set, the amount of change from one morph face to the next was very small. As a result, we can obtain very finely tuned discrimination thresholds. Faces were additionally contrast negated or inverted to create a total of nine different categories: upright happy, upright angry, upright gender, contrast-negated happy, contrast-negated angry, contrast-negated gender, inverted happy, inverted angry, and inverted gender. All images were presented against a gray background. Stimuli were presented on a 21-in. (53.3-cm) Dell P1130 CRT monitor (1280 × 1024 pixels, 85 Hz) using Matlab r2008a and the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). 
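For concreteness, the normalization and image manipulations just described can be summarized in a short sketch. This is a minimal illustration in Python/NumPy rather than the Matlab code used in the study; the target mean luminance and RMS contrast values are arbitrary placeholders, and `mask` is a hypothetical boolean array marking the face region inside the oval frame.

```python
import numpy as np

def normalize_face(img, mask, target_mean=0.5, target_rms=0.2):
    """Equate mean luminance and RMS contrast within the masked face region.
    target_mean and target_rms are placeholder values, not the study's."""
    out = img.astype(float).copy()
    face = out[mask]
    face = face - face.mean()                # zero-mean the face region
    face = face * (target_rms / face.std())  # scale to the target RMS contrast
    out[mask] = np.clip(face + target_mean, 0.0, 1.0)
    return out

def contrast_negate(img):
    """Photographic negative: reverses contrast polarity, losing no information."""
    return 1.0 - img

def invert(img):
    """Upside-down presentation of the same image."""
    return img[::-1, :]
```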
Figure 1

Example trials for Experiment 1. Participants indicated which face better fit the category label, that is, which face appeared happier (left) or which face appeared more masculine (right). The top left shows a trial with 70% happy (left) and neutral (right), and the upper right shows 70% male and 50% male/50% female (i.e., gender neutral). The example trials for contrast-negated faces (bottom) use the same morphs but are shown on opposite sides of the display (left shows neutral, right shows 70% happy, etc.).
Design
Participants viewed a test face and a comparison face centered in the left and right halves of the display. The locations of the test face and comparison face were counterbalanced across trials. Centered in the upper half of the display was a label indicating the category for that trial, for example, “Happy.” Test faces were any of the 100 morph faces for a given category (e.g., happy). For happy and angry, the comparison face was always the neutral face (0% morph face). For the gender trials, the comparison face was always the 50% morph face (i.e., 50% male and 50% female). As a result, and because participants may differ in their perceptions of gender neutral, we separated the gender trials into two categories: male and female. This allowed us to acquire separate male and female thresholds, which we could then average to obtain an unbiased estimate of gender-discrimination sensitivity. Male faces varied from 1 to 50, and female faces varied from 50 to 100. 
The first test face for each category was the 80% morph face: for example, 80% happy. After this, test faces were determined by an adaptive staircase procedure (modified PEST; see Taylor & Creelman, 1967). Because there were nine different face categories (see Stimuli previously), we needed nine different staircases. To reduce the perception of a gradual narrowing onto a threshold, the nine different staircases were randomly interleaved between trials. 
In each staircase, a correct response made the test face more similar to the comparison face by one “step,” making it harder to discriminate, and an incorrect response increased the difference between the test face and comparison face by three steps, making it easier to discriminate. The size of a step varied with the participant's responses, with a maximum step size of 20 morph units. Step size was further controlled by an acceleration factor, such that two consecutive correct or incorrect responses increased the size of the step by a factor of 1.5 and a shift from correct to incorrect (or vice versa) decreased the step size by a factor of 1/1.5 ≈ 0.67. These parameters are identical to the staircases employed in previous studies (Pallett, Cohen, & Dobkins, 2013; Pallett & Dobkins, 2013). 
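As a sketch, these staircase rules can be expressed as follows. The one-step/three-step asymmetry, the 20-morph-unit cap, the 1.5 acceleration factor, and the 80% starting level come from the description above; the initial step size and the clamping of levels to the morph range are assumptions, since neither is specified in the text.

```python
class ModifiedPestStaircase:
    """Minimal sketch of the modified-PEST rules described in the text."""
    MAX_STEP = 20.0  # morph units (from the text)

    def __init__(self, start_level=80.0, start_step=8.0):
        self.level = start_level   # first test face: the 80% morph (from the text)
        self.step = start_step     # initial step size: an assumption
        self.prev_correct = None

    def update(self, correct):
        if self.prev_correct is not None:
            if correct == self.prev_correct:
                # two consecutive identical outcomes: accelerate by 1.5
                self.step = min(self.step * 1.5, self.MAX_STEP)
            else:
                # a shift from correct to incorrect (or vice versa): decelerate
                self.step /= 1.5
        self.prev_correct = correct
        if correct:
            self.level -= self.step      # harder: one step toward the comparison
        else:
            self.level += 3 * self.step  # easier: three steps away
        self.level = min(max(self.level, 1.0), 100.0)  # clamp: an assumption
        return self.level
```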
There were 50 trials per face category except for gender, which had 100 trials (50 male trials and 50 female trials). Since pilot results suggested that the task was very difficult, every 10 trials contained an easy trial in which the test face represented the category maximum (e.g., 100% happy). As a result, we obtained measures of both discrimination ability and simple accuracy. Each participant completed 660 total trials (600 test trials and 60 easy trials). 
Procedure
Each trial began with a beep and 500-ms fixation. Then participants simultaneously viewed the test face, comparison face, and category label. The stimuli remained on display until the participant selected the face more representative of the trial category (by key press). At this point, the trial ended and a new one began. 
Data analysis
We examined three different performance measures: discrimination ability (thresholds), easy-trial accuracy, and response times (RTs). Because the male and female judgments involved the same morph continuum, we collapsed the data across these two categories and obtained a general measure of gender performance. Overall gender performance was analyzed alongside happy and angry performance for each of the physical categories (upright, inverted, and contrast negated). 
When originally designing this experiment, we did not plan to analyze the easy-trial data, since we expected them to plateau at ceiling. However, when asked to label the gender of a contrast-negated face in the discrimination task, participants could not successfully do so. In other words, our participants were completely incapable of discriminating differences in contrast-negated gender (described later in Results). As a result, it was impossible to obtain dependable threshold fits (method described later) for contrast-negated gender discrimination. Although this, in and of itself, demonstrates a uniquely strong effect of contrast negation on gender discrimination, statistical comparison of the thresholds was therefore not possible. To overcome this, we analyzed participant accuracy on the easy trials, which allowed us to compare performance across all conditions including contrast-negated gender. It should be noted that there are limitations to consider when assessing accuracy data from conditions with near 100% performance accuracy (i.e., ceiling). Performance at ceiling may limit our ability to measure the true effect of semantic category and physical category on easy-trial accuracy, and thus our measurements may inaccurately reflect the effect of these different conditions (for discussion, see Crookes & McKone, 2009; McKone, Crookes, Jeffery, & Dilks, 2012; McKone, Crookes, & Kanwisher, 2009). In the current data, participants performed at ceiling for all conditions except upright gender, contrast-negated gender, inverted gender, inverted happy, and inverted angry (although there were only marginally significant differences in the upright condition for happy, angry, and gender accuracy; Wilcoxon signed-rank test, ps > 0.06). Easy-trial accuracies were analyzed in a 3 × 3 repeated-measures ANOVA on semantic category (happy, angry, gender) and physical category (upright, inverted, contrast negated). 
As with the easy trials, RT performances were available for each condition, including contrast-negated gender. Thus we also compared RTs for each face category. First, each participant's RTs were filtered for outliers. Any RT beyond two standard deviations away from the mean of that participant's data was excluded. The remaining RTs were averaged within each face category. To improve conformity to the normal distribution, we tested log RTs rather than regular RTs. Log RTs were analyzed in a 3 × 3 repeated-measures ANOVA on semantic category (happy, angry, gender) and physical category (upright, inverted, contrast-negated). 
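As a sketch, the RT preprocessing might look like the following. Whether the log transform precedes or follows averaging is not fully specified above; the version below averages the log RTs of the retained trials.

```python
import numpy as np

def mean_log_rt(rts):
    """Drop RTs beyond 2 SD of the participant's mean, then average log RTs."""
    rts = np.asarray(rts, dtype=float)
    keep = np.abs(rts - rts.mean()) <= 2 * rts.std()  # outlier filter
    return np.log(rts[keep]).mean()
```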
As mentioned in the description of the easy-trial data analysis (previously), we could not obtain threshold fits for contrast-negated gender discrimination. However, we could measure thresholds for the discrimination of upright, inverted, and contrast-negated happy and angry faces, as well as of upright and inverted gender. Thresholds were determined by fitting the proportion of happier, angrier, more masculine, or more feminine responses to independent logistic functions, for each participant and each combination of semantic and physical category. This was accomplished using psignifit version 2.5.6 (see http://bootstrap-software.org/psignifit/), a software package which implements the maximum-likelihood method described by Wichmann and Hill (2001) and runs in Matlab r2008a. The morph unit associated with 80% correct in each function represented the stimulus threshold. Previous studies involving a similar paradigm have successfully used this method to determine thresholds for face and object discrimination ability (Pallett, Cohen, & Dobkins, 2013; Pallett & Dobkins, 2013). 
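The threshold estimation can be illustrated with an ordinary maximum-likelihood logistic fit. The sketch below uses scipy's curve_fit as a stand-in for the psignifit toolbox cited above (omitting psignifit's lapse-rate handling and bootstrap machinery); the 80% point is recovered by inverting the fitted function.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, scale):
    """Two-parameter logistic: pse is the 50% point, scale sets the slope."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / scale))

def fit_threshold(morph_levels, prop_category_responses, criterion=0.8):
    """Return the morph level yielding the criterion response proportion."""
    (pse, scale), _ = curve_fit(logistic, morph_levels,
                                prop_category_responses, p0=[50.0, 10.0])
    # invert the logistic at the criterion: x = pse + scale * ln(p / (1 - p))
    return pse + scale * np.log(criterion / (1.0 - criterion))
```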
Once thresholds were obtained, we examined whether there was a differential effect of inversion or contrast negation on happy and angry discrimination using a 2 × 3 repeated-measures ANOVA with semantic category and physical category as the repeated-measures variables. To examine the effect of inversion on gender discrimination, we also performed a paired-samples t test on upright and inverted gender-discrimination thresholds. 
Results
Discrimination thresholds
As expected, participants performed poorly with contrast-negated faces. Remarkably, they were completely incapable of discriminating differences in contrast-negated gender. As mentioned previously, this resulted in unreliable threshold fits for contrast-negated gender discrimination. Figure 2 clearly shows that gender discrimination was greatly impaired by contrast negation. 
Figure 2

Gender discrimination across all participants (N = 22) in Experiment 1. The top panel displays accuracy with contrast negation, the middle panel shows original normal-face performance, and the bottom panel displays inverted-face performance.
The results of our 2 × 3 repeated-measures ANOVA on semantic category (happy, angry)1 and physical category (upright, inverted, contrast-negated) revealed a main effect of semantic category, F(1, 19) = 59.1, p < 0.001, with participants displaying lower discrimination thresholds (i.e., better sensitivity) for changes in happiness than in anger. There was also a main effect of physical category, F(2, 38) = 5.85, p = 0.006. Post hoc t tests with Bonferroni correction suggested that this was driven primarily by lower thresholds for detecting differences in upright faces relative to inverted faces (p = 0.020), while contrast-negated face discrimination did not differ significantly from upright face discrimination (p = 0.24). There was no significant semantic category × physical category interaction, F(2, 38) = 0.22, p = 0.81.
Again, since we were unable to obtain thresholds for contrast-negated gender discrimination, we could only compare thresholds for the upright and inverted gender conditions. The results of a paired-samples t test on upright and inverted gender discrimination showed a significant inversion effect (IE) with lower thresholds for upright than inverted gender discrimination, t(16) = 4.74, p < 0.001. 
Easy-trial accuracy
Results from our 3 × 3 repeated-measures ANOVA on semantic category (happy, angry, gender) and physical category (upright, inverted, contrast-negated) revealed a significant interaction between semantic category and physical category, F(6, 108) = 18.0, p < 0.001. Table 1 displays the mean accuracies and standard errors. To better understand this interaction, we computed the size of the inversion effect (IE) and contrast-negation effect (CNE) for each condition.    
Table 1

Experiment 1 easy-trial accuracy.
           Original          Inverted          Contrast negated
Happy      98.1% (± 1.1%)    96.7% (± 1.4%)    98.2% (± 1.2%)
Angry      99.0% (± 1.0%)    97.1% (± 1.4%)    98.8% (± 1.1%)
Gender     95.9% (± 1.2%)    80.4% (± 2.4%)    63.2% (± 2.3%)
This gave us six new measures: IEs for happy, angry, and gender accuracy and CNEs for happy, angry, and gender accuracy. Since CNEs and IEs were not normally distributed, we conducted one-sample Wilcoxon signed-rank tests comparing the median of each measure to 0. Only gender accuracy was significantly affected by contrast negation and inversion (CNE: median = 0.19, p < 0.001; IE: median = 0.048, p = 0.002). 
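Equations 1 and 2 (the IE and CNE definitions) do not appear in this excerpt; assuming the conventional definitions, upright accuracy minus inverted accuracy and upright accuracy minus contrast-negated accuracy, respectively, the analysis reduces to the following sketch.

```python
import numpy as np
from scipy.stats import wilcoxon

def effect_size_and_test(upright_acc, manipulated_acc):
    """Per-participant accuracy cost of a manipulation (IE or CNE), tested
    against zero with a one-sample Wilcoxon signed-rank test."""
    effect = np.asarray(upright_acc) - np.asarray(manipulated_acc)
    _, p = wilcoxon(effect)  # tests whether effects are symmetric about 0
    return np.median(effect), p
```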
Response times
The 3 × 3 ANOVA on semantic category and physical category revealed no significant main effects (Fs < 1, ps > 0.39) and no interaction, F(4, 76) = 1.18, p = 0.33. These results demonstrate that (a) participants' inability to discriminate (i.e., we could not obtain thresholds) differences in contrast-negated gender was not due to a trade-off between speed and accuracy (with participants responding too quickly to provide reliable results) and (b) the differential effect of contrast negation on easy-trial accuracy for gender (vs. happy and angry accuracy) cannot be explained by a condition-dependent trade-off between speed and accuracy. 
In sum, contrast negation markedly impaired gender discrimination. Discrimination for all semantic categories was impaired by inversion, although gender was the only semantic category affected by inversion in the easy-trial analysis. Gender easy-trial accuracy was also uniquely impaired by contrast negation. These results cannot be explained by trade-offs between speed and accuracy. 
Discussion
Our results revealed a clear separation between processing of expression and of gender. Specifically, we observed a profound deficit for labeling gender in a contrast-negated face but little difficulty naming the expression in that face. Although the ability to interpret the statistical significance of these differences is limited by ceiling effects, when combined with Figure 2, our results provide a compelling case for a dissociation in processing of gender versus expression. This finding complements previous research suggesting that luminance relations between facial regions are also important for the perception of facial beauty, with larger contrast between the mouth and its surrounding skin and between the eyes and their surrounding skin corresponding with greater perceived beauty in the female face and reduced contrast resulting in a more masculine appearance (Russell, 2003). Taken together, these findings provide consistent evidence that luminance contrast is important for gender processing. The results of the current study further show that the direction of contrast (i.e., contrast polarity) plays a crucial role. 
Experiment 2A
Although the results of Experiment 1 are consistent with a separation between gender and expression processing at some point along the dynamic and invariant processing pathways, it remains unclear when and where this divergence may occur. One possibility is that the processing pathways separate before or during perceptual (local and holistic) encoding. Alternatively, a split may occur after perceptual encoding, for example during decision making for discrimination and recognition. Along these lines, previous research on the composite-face effect suggests that the formation of a holistic face percept is resistant to the deleterious effects of contrast negation (Calder & Jansen, 2005; Hole, George, & Dunsmore, 1999). If this is true, then any separation in processing pathways signaled by different effects of contrast negation must occur after perceptual encoding. To further test the possibility of a split in the dynamic and invariant processing pathways, and perhaps also to clarify when such a split may occur, we took advantage of known face aftereffects for the perception of expression and gender (Webster et al., 2004). First, participants adapted to either a 100% angry male face or an expression- and gender-neutral face with normal or negated contrast. Then we measured participants' accuracies for gender and expression discrimination using normal test faces only. Test faces varied from 100% angry male to 100% happy female, and the discrimination task was the same as in Experiment 1.
We hypothesized that if the split in processing pathways occurs after perceptual encoding, then we should observe significant aftereffects for discrimination of both gender and expression following contrast-negated face adaptation, despite the severe deficit in contrast-negated gender discrimination observed in Experiment 1. Alternatively, if there were no aftereffect for gender discrimination following contrast-negated face adaptation, then we would have to conclude that the split in dynamic and invariant processing pathways likely occurs before or during perceptual encoding.
Methods
Participants
Thirteen undergraduates from Dartmouth College participated in exchange for course credit. All participants had normal or corrected-to-normal visual acuity. This research was approved by the Committee for the Protection of Human Subjects at Dartmouth College and conducted in accordance with the 1964 Declaration of Helsinki. 
Stimuli
Stimuli were similar to those described in Experiment 1. First we chose two new faces, one of an angry Caucasian male and one of a happy Caucasian female. Then we created a series of 101 morphs ranging from 0% angry male (100% happy female) to 100% angry male (0% happy female). We selected the 100% angry male face and the 50% male/50% female face as adapting faces. We will refer to the 50% male/50% female face as the expression- and gender-neutral face, since it lies in the center of our morph continuum. We then applied contrast negation to these faces to create two additional adapting faces, contrast-negated 100% angry male and contrast-negated expression and gender neutral. Test faces were selected from the original, normal-contrast continuum and ranged from 0% angry male (100% happy female) to 100% angry male (0% happy female). Stimuli were 9.09° × 12.9°. All else was the same as in Experiment 1.
Adaptation design
During adaptation, the location of the adapting face oscillated between two points, 5 pixels (0.16°) above and to the left of center and 5 pixels (0.16°) below and to the right of center. Participants were instructed to maintain center fixation, and a chin rest was used to help maintain this position. Since the size of the location shift (0.23° diagonally) was substantially less than 1° of visual angle from center, there was no need for participants to shift gaze to complete the task. As a result, this task helped ensure that any observed aftereffects reflected high-level adaptation to the face and not low-level aspects of the retinotopic map (e.g., local changes in contrast or brightness). The adapting face spent 1 s in each location; however, the face would occasionally pause in its movement and remain in one location for 1.5 s. When this happened, participants were instructed to press the down-arrow key. This happened semirandomly six times during the initial 3-min adaptation period and once during the 5-s “top-up” adaptation period (described later in Procedure). This encouraged participants to pay attention to the face throughout the entire 3 min of adaptation and 5 s of top-up, although we did not analyze these responses. 
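The timing logic of the adaptation display can be sketched as follows, assuming that a "pause" simply lengthens the current dwell from 1 s to 1.5 s; the frame-by-frame presentation with the Psychophysics Toolbox is omitted.

```python
import random

def adaptation_schedule(duration_s=180, n_catch=6):
    """Return a list of (offset_deg, dwell_s) pairs for the adapting face."""
    offsets = [(-0.16, 0.16), (0.16, -0.16)]  # up-left / down-right of center
    dwells = [1.0] * duration_s               # nominal 1-s dwell per location
    for i in random.sample(range(duration_s), n_catch):
        dwells[i] = 1.5                       # catch event: observer presses a key
    return [(offsets[i % 2], d) for i, d in enumerate(dwells)]
```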
Test design
The test design was nearly identical to that of Experiment 1, with the following differences. Because we were interested in measuring aftereffects for adaptation to an angry male face, we chose to measure changes in the perception of anger and masculinity only. Thus, we acquired 80% correct thresholds for angry and male judgments but not happy or female judgments. 
Procedure
Participants experienced each adapting condition as four separate blocks of trials: angry male, contrast-negated angry male, expression and gender neutral, and contrast-negated expression and gender neutral. Block order was randomized. Each block was separated by at least 30 s, during which participants were encouraged to take a break. Blocks began with 3 min of adaptation, followed by a label (“Male” or “Angry”) centered in the display for 1 s. This label alerted participants to the type of judgment they needed to make. Participants then viewed a 250-ms mask, followed by the test face presented in the center of the display for 1 s. After this, the test face was removed and participants were prompted to indicate whether the test face fit the category label, signaling yes or no via key press (left-arrow key or right-arrow key, respectively). The participant's response ended the trial, and the next trial began. All remaining trials within the block began with 5 s of top-up adaptation. This made sure that participants remained adapted throughout the entire block and increased the reliability of our aftereffect measurements. Each block contained 50 expression trials that were randomly interleaved with 50 gender trials. Future test faces were determined by a modified PEST adaptive-staircase procedure similar to that described in Experiment 1 (see also Data analysis, later). In addition, every 10 trials contained an easy trial in which the test face was 100% angry male. Thus, there were 400 test trials and 40 easy trials. Figure 3 shows an example of the trial progression. 
Figure 3

Example trial for Experiments 2A, 2B, and 3. In each trial, participants were asked to make a yes or no judgment based on facial expression of emotion ("Angry") or gender ("Male"). The physical distance between the adaptation faces in this figure is exaggerated for the purpose of demonstrating movement. The actual shift in location was 0.16° (5 pixels) up and to the left of center and 0.16° (5 pixels) down and to the right of center (i.e., 0.23° diagonally).
Data analysis
Thresholds were obtained using the procedure described in Experiment 1. However, in addition to determining the test face that yielded 80% correct, we were also interested in isolating the test face that yielded 50% correct. This is because there are two properties that can change with adaptation. One is the point of subjective equality (PSE), which in the current study is the face that appears expression and gender neutral. With adaptation, this perceived neutral point changes, and so we refer to this effect as a shift in PSE. In the context of a psychometric function, this is represented as a shift in the location of the curve along the x-axis (see, e.g., Webster et al., 2004). The second property that may change with adaptation is threshold size. In the current study, this is measured as the percentage of angry male needed to correctly discriminate expression or gender on 80% of the trials, that is, the 80% correct thresholds. In the context of a psychometric function, this is represented as a change in the slope (i.e., scale or steepness).
Specifically, to determine the size of the aftereffects, we subtracted the log of the PSE for expression judgments during expression-and-gender-neutral adaptation from the log of the PSE for expression judgments during 100% angry-male adaptation; we then repeated this procedure for the gender judgments. To determine whether adaptation produced a change in slope, we measured the difference between the log of the 80% threshold morph and the log of the PSE (i.e., log 80% − log 50%) for each adapting condition and each judgment type and then compared these values across adapting conditions. Each of these computations used log values, because log, but not linear, data conformed to a normal distribution. Both aftereffect sizes and thresholds were analyzed in two separate 2 × 2 repeated-measures ANOVAs with adapting contrast (normal, contrast-negated) and semantic category (expression, gender) as the repeated-measures variables. 
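Given the fitted psychometric functions (as in the Experiment 1 sketch), the two measures reduce to simple log differences. The sketch below assumes pse and thr80 are the 50% and 80% morph levels from those fits.

```python
import numpy as np

def aftereffect_size(pse_angry_male_adapt, pse_neutral_adapt):
    """Shift in log PSE: log PSE under angry-male adaptation minus
    log PSE under neutral adaptation (per the text)."""
    return np.log(pse_angry_male_adapt) - np.log(pse_neutral_adapt)

def slope_change_measure(thr80, pse):
    """log 80% threshold minus log 50% point; larger = shallower slope."""
    return np.log(thr80) - np.log(pse)
```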
Since we analyzed easy-trial accuracy in Experiment 1, we also assessed accuracy for the easy trials in this experiment.2 To do this, we computed CNEs (Equation 2) for expression and gender accuracy for each adapting contrast and adapting face type (i.e., angry male vs. expression and gender neutral). We then tested whether each CNE was significant. Since the data were not normally distributed, we applied a nonparametric analysis, that is, a one-sample Wilcoxon signed-rank test. 
Results
Size of the aftereffects
Results of our two-factor ANOVA on adapting contrast and semantic category for shift in log PSE revealed no significant interaction, F(1, 12) = 0.65, p = 0.44, and no main effect of semantic category, F(1, 12) = 0.42, p = 0.53. There was a main effect of adapting contrast, F(1, 12) = 23.9, p < 0.001, that was driven by larger aftereffects following adaptation to normal faces (M = 0.13, SE = 0.022) than contrast-negated faces. However, contrast-negated faces also produced significant aftereffects, as demonstrated by a one-sample t test, M = 0.032, SE = 0.015, t(12) = 2.22, p = 0.047 (Figure 4, top). 
Figure 4

The top panel displays the size of the aftereffects following adaptation to a normal or contrast-negated face for judgments of facial expression of emotion and gender (i.e., shift in the point of subjective equality) in Experiment 2A. The bottom panel displays the effect of normal and contrast-negated face adaptation on thresholds for discrimination of facial expression of emotion and gender (i.e., change in the slope of the psychometric function). N = 13, error bars = positive and negative standard error.
Changes in slope
Results of our two-factor ANOVA revealed no significant interaction of contrast × semantic category, F(1, 12) = 1.38, p = 0.26, and no main effect of contrast, F(1, 12) = 0.88, p = 0.37. Mean thresholds for discrimination of gender and expression after normal and contrast-negated face adaptation are displayed in Figure 4 (bottom). Surprisingly, there was a main effect of semantic category, F(1, 12) = 5.70, p = 0.034, suggesting that adaptation to an angry male face, regardless of contrast, alters discrimination ability differentially for expression and gender. To determine what drove this effect, we conducted follow-up comparisons using paired-sample t tests. Results revealed a significant increase in thresholds for differences in gender, t(12) = 2.20, p = 0.049, but not expression, t(12) = 0.19, p = 0.86. In other words, participants were less capable of discriminating differences in gender, but not expression, following adaptation to an angry male face. 
Easy-trial accuracy
There were no significant CNEs for expression accuracy, regardless of adaptor (Neutral: p = 1.0; Angry Male: p = 0.41), and no significant CNE for gender accuracy when adapted to the expression-and-gender-neutral face (p = 0.27). However, there was a marginally significant CNE for gender accuracy when adapted to the angry male face (p = 0.074). Mean accuracies and standard errors are displayed in Table 2.
Table 2

Experiment 2A easy-trial accuracy.
              Gender and expression neutral         Angry male
              Normal           Contrast negated     Normal           Contrast negated
Expression    100% (± 0%)      100% (± 0%)          95.3% (± 3.5%)   98.6% (± 1.4%)
Gender        98.8% (± 1.2%)   93.4% (± 4.3%)       65.8% (± 9.2%)   85.8% (± 5.0%)
Discussion
There are two main points to take away from this experiment. First, there is little or no differential effect of contrast negation on expression and gender encoding. This is supported by the absence of any interactions between adapting contrast and semantic category. Second, gender processing appears overall less robust than expression processing. This is suggested by the gender-specific discrimination deficit observed after angry-male adaptation. 
In the introduction to this experiment, we defined a set of possible outcomes. First, if the results showed no aftereffect for gender discrimination following contrast-negated face adaptation but a significant aftereffect for expression discrimination, then this would suggest a separation in processing pathways that occurs before or during perceptual encoding. Second, if we observed significant aftereffects for both gender and expression following contrast-negated face adaptation, then this would suggest that the dissociative effect of contrast negation on expression and gender discrimination observed in Experiment 1 likely originates after perceptual encoding, that is, either before or during the recognition and lexical-decision stage. 
The current experiment revealed significant aftereffects (i.e., shifts in log PSE) for both gender and expression discrimination following adaptation to contrast-negated faces, with no significant difference in the size of the aftereffects for gender and expression. These results suggest that the disproportionate impairment observed with gender discrimination in Experiment 1 occurs after perceptual (local and holistic) encoding. To further confirm this, we conducted Experiment 2B in the same manner as Experiment 2A, but with contrast-negated test faces instead of normal test faces. If our hypothesis is correct—that is, that the dissociative impairment from contrast negation occurs after perceptual encoding—then we would expect to replicate the results of Experiment 1 and find that participants are completely incapable of discriminating differences in contrast-negated gender, regardless of adapting contrast. 
Experiment 2B
Methods
Participants
Ten undergraduates from Dartmouth College participated in exchange for course credit. All participants had normal or corrected-to-normal visual acuity. This research was approved by the Committee for the Protection of Human Subjects at Dartmouth College and conducted in accordance with the 1964 Declaration of Helsinki. 
Stimuli, design, and procedure
These were the same as in Experiment 2A, except that here the test faces were contrast-negated. 
Data analysis
Although we planned to analyze the data in the manner described for Experiment 2A, our participants found contrast-negated gender discrimination so difficult that we could not obtain reliable fits for eight of our 10 participants (replicating Experiment 1). As a result, we assessed the effect of contrast negation on all conditions (including gender) by examining easy-trial accuracy (described in Experiment 2A). We also analyzed the effect of adapting contrast on aftereffect size and change in slope for expression trials only. Since aftereffect size was normally distributed, we used a one-sample t test. Change in slope, however, was not normally distributed; therefore we used a Wilcoxon signed-rank test. 
Results
Size of the aftereffects
Our results revealed significant aftereffects for expression discrimination following both normal and contrast-negated face adaptation [normal: t(9) = 2.35, p = 0.043; contrast-negated: t(9) = 2.75, p = 0.023], with no significant difference in aftereffect size, t(9) = 0.02, p = 0.56. 
Changes in slope
Consistent with the results from Experiment 2A, adaptation did not alter expression-discrimination ability (i.e., log thresholds), regardless of adaptor type (normal: M = −0.028, SE = 0.053, p = 0.96; contrast-negated: M = 0.072, SE = 0.047, p = 0.14). 
Easy-trial accuracy
Although there were no significant CNEs for expression accuracy, regardless of adaptor type (Neutral: p = 1.0; Angry Male: p = 1.0), and no significant CNE for gender accuracy with adaptation to the angry male face (p = 0.67), there was a marginally significant CNE for gender accuracy with adaptation to the expression-and-gender-neutral face (p = 0.068). Mean accuracies and standard errors are displayed in Table 3.
Table 3

Experiment 2B easy-trial accuracy.
              Gender and expression neutral         Angry male
              Normal           Contrast negated     Normal           Contrast negated
Expression    100% (± 0%)      100% (± 0%)          98.0% (± 2.0%)   98.0% (± 2.0%)
Gender        98.0% (± 2.0%)   84.3% (± 6.6%)       86.8% (± 8.2%)   82.0% (± 7.7%)
Discussion
Results from the current experiment replicated the findings from Experiment 1. That is, contrast negation severely impaired the ability to discriminate differences in gender, such that we could not even obtain reliable fits to a psychometric function. The same was not true for facial expressions of emotion. Current results also replicated the findings from Experiment 2A. That is, aftereffect sizes for expression judgments did not depend upon the adapting-contrast polarity. Taken together, these results show that normal contrast polarity is not crucial for the encoding of facial expressions of emotion (or at least for happiness and anger). Moreover, these results again suggest that contrast negation uniquely impairs gender processing after perceptual encoding, that is, at some point before or during the recognition and lexical-decision stage. 
Experiment 3
In Experiment 1, we observed significant IEs for expression and gender discrimination, with a larger effect for gender than expression. For comparison, in Experiment 3, we examine the effect of adaptation to an upright or inverted face on expression and gender discrimination. 
Methods
Participants
Eight undergraduates from Dartmouth College participated in exchange for course credit. All participants had normal or corrected-to-normal visual acuity. This research was approved by the Committee for the Protection of Human Subjects at Dartmouth College and conducted in accordance with the 1964 Declaration of Helsinki. 
Stimuli
Stimuli were the same as in Experiment 2A, except that adapting faces were either upright or inverted. 
Design, procedure, and data analysis
These were similar to the methods described in Experiment 2A, except there were fewer trials (200 trials) in the current experiment. Pilot data indicated that this was enough to obtain reliable threshold fits. As a result, we only obtained 25 easy trials per participant, and not all participants had easy trials in every condition. Consequently, we did not have enough data to compare IEs for gender and expression easy-trial accuracy and do not present easy-trial results. Both aftereffect size and change in slope were analyzed in two separate 2 × 2 ANOVAs with adapting orientation (upright, inverted) and semantic category (expression, gender) as repeated-measures variables. 
Results
Size of the aftereffects
Our two-factor ANOVA revealed no main effect of semantic category, F(1, 7) = 0.039, p = 0.85. However, there was a significant main effect of inversion, F(1, 7) = 22.6, p = 0.002, with larger aftereffects for upright adaptation. There was also a significant interaction of inversion × semantic category, F(1, 7) = 17.8, p = 0.004, with larger IEs for expression than gender discrimination (Figure 5). The effect of inversion on gender aftereffect size was marginally significant, t(7) = 2.20, p = 0.064. 
Figure 5

The difference in aftereffect size (i.e., shift in point of subjective equality) following upright-face versus inverted-face adaptation in Experiment 3. N = 8, error bars = positive and negative standard error.
Changes in slope
Results of our two-factor ANOVA showed no significant main effect of semantic category, F(1, 7) = 0.069, p = 0.80, no main effect of inversion, F(1, 7) = 0.23, p = 0.65, and no significant interaction of orientation × semantic category, F(1, 7) = 0.47, p = 0.52.
Discussion
In Experiment 1, inversion significantly impaired the ability to discriminate differences in expression and gender. Yet here we observed a larger effect of inversion for expression aftereffect size than for gender aftereffect size. While this would seem contradictory, participants in the current experiment were tested with upright faces only, with the purpose of measuring perceptual-encoding ability. By contrast, in Experiment 1 participants discriminated both upright and inverted faces—the purpose was measuring recognition and decision-making ability. As demonstrated in Experiments 2A and 2B, there is a strong difference between adapting to a normal or contrast-negated face and discriminating between normal or contrast-negated faces. The same may also be true for inversion. That is, the effect of inversion on adaptation could be qualitatively different from IEs for discrimination. Specifically, unlike with contrast-negated faces, results of the current experiment showed that for inverted faces, expression encoding is more affected than gender encoding. These results further suggest that the overlap of neural substrates encoding gender and facial expression of emotion may only be partial. 
General discussion
At a glance, our results are quite surprising: On one hand, contrast negation markedly impairs gender discrimination but not so much expression discrimination; on the other hand, adapting to a contrast-negated face leads to equivalent aftereffects for gender and expression discrimination (as demonstrated by nonsignificant interactions). By contrast, although inversion impairs both gender and emotion discrimination, we observed a larger effect of inversion for expression aftereffect size than for gender aftereffect size. In the context of the Bruce and Young (1986) framework, it is tempting to take this as evidence of shared perceptual-encoding mechanisms followed by a separation in the dynamic and invariant processing pathways. Indeed, our results support a recent model (Calder & Young, 2005) suggesting that perception of gender (identity) and facial expression may share some neural substrates that underlie encoding of faces. In particular, the results of our Experiment 2A suggest shared perceptual processing mechanisms, not just at the level of basic first-order relational encoding but also at the later stage of holistic encoding. It is important to note that there may be multiple stages of visual processing that occur during perceptual encoding (Kay, Winawer, Rokem, Mezer, & Wandell, 2013). Moreover, the formation of a holistic face percept may be modulated by top-down influences (Li et al., 2010; Mechelli, Price, Friston, & Ishai, 2004). Indeed, the results of our Experiment 3 suggest that the overlap of neural substrates encoding gender and facial expression of emotion may only be partial. Nonetheless, if there is no differential effect of contrast negation during encoding, then why does contrast negation uniquely impair gender recognition?
It has been proposed that the directional relationships of luminance (i.e., direction of contrast) between subregions of a face, and especially around the eyes, are important for identity recognition (Gilad et al., 2009; Ohayon, Freiwald, & Tsao, 2012; Sinha, 2002). These directional luminance relationships are highly robust and are the precursors to many higher level semantic representations of invariant facial characteristics, such as gender (Dupuis-Roy, Fortin, Fiset, & Gosselin, 2009; Frieze, Olson, & Russell, 1991; Nestor & Tarr, 2008; Santos & Young, 2008) and attractiveness (Russell, 2003). Consequently, we suggest that directional luminance relationships should be considered foundational invariant facial features. If these luminance relationships are reversed, such as with contrast negation, we expect that this basic luminance information would no longer assist in face recognition and the processing of other invariant facial characteristics, such as gender. Indeed, this is exactly what the current study found: Gender discrimination was impaired by contrast negation. By contrast, dynamic features, such as pupil size and the curvature of the lips, are relatively unaffected by contrast negation. Previous research has proposed that this contrast invariance may result from a greater reliance on the edge-based information in a face (White & Li, 2006). This is exactly what our results in Experiment 1 showed for expression discrimination—very little impairment with contrast negation. Note that analyzing the luminance relations involves comparing averaged luminance levels across different facial regions. This would need to occur after the perceptual encoding of, for example, local luminance. Adaptation to luminance would not normally affect the directional relationship between facial regions. Therefore, aftereffects from adaptation to contrast-negated faces may not differ for gender categorization and expression identification. Indeed, this is what we found in Experiment 2A. In comparison, inversion impairs configural representations and therefore can affect both the dynamic (i.e., expression) and invariant (i.e., gender) properties of the face. This was also observed in our experiments.
Our study poses some important questions for future neuroimaging or neurophysiological research investigating the neural circuitry underlying face perception. The neural model for distributed face processing proposed by Haxby et al. (2000) is often considered the neural analogue to the Bruce and Young (1986) model. It was proposed that there is a core and an extended face-processing network (Haxby et al., 2000). The core face network consists of three regions that respond preferentially to faces: (a) a region of the fusiform gyrus called the fusiform face area (FFA), (b) a region of inferior occipital gyrus called the occipital face area (OFA), and (c) the superior temporal sulcus (STS). The precise function of each area remains unclear, but the FFA and OFA appear to encode facial structure and identity (i.e., the invariant information), while the STS responds to movement-based changes in the face, that is, the dynamic information (Andrews & Ewbank, 2004; Grill-Spector, Knouf, & Kanwisher, 2004; Harris & Aguirre, 2010; Haxby et al., 2000; Liu, Harris, & Kanwisher, 2010; Rossion, 2008; Rotshtein, Henson, Treves, Driver, & Dolan, 2005; Schiltz, Dricot, Goebel, & Rossion, 2010; Winston, Henson, Fine-Goulden, & Dolan, 2004). The extended face network is reserved for processing the remaining information, for example, emotional expression or biological relevance in the amygdala (Adolphs, 2008; Johnson, 2005; Pessoa & Adolphs, 2010). While the results of our psychophysical testing cannot be taken as evidence of region-specific effects, they do have implications for both the perceptual-encoding stage of face processing (i.e., OFA and/or FFA) and the later stages of face processing (i.e., FFA, STS, and/or extended regions, such as the amygdala). First, our finding of a mild impairment common to both gender and expression during perceptual encoding suggests the use of overlapping neural mechanisms in the OFA and/or FFA. Second, our observation of a more substantial impairment for gender at a later processing stage, presumably after the dynamic and invariant pathways separate, may correspond with a dissociation in FFA and STS activation (as described for identity in the Introduction). Why one domain should be affected while the other is spared remains unknown, but it is possible that additional regions of the extended face network (Haxby et al., 2000), such as the amygdala, are recruited to further support the processing of dynamic facial information (e.g., expressions of emotion).
The current results also fill an important gap in our understanding of gender processing, since most of the face-processing literature focuses on identity and/or race processing, with only a few exceptions (reviewed in Dupuis-Roy et al., 2009). Our findings support the notion that gender and identity are processed by similar mechanisms (Calder, Burton, Miller, Young, & Akamatsu, 2001; Goshen-Gottstein & Ganel, 2000), presumably through the invariant-feature processing pathway. Most of the previous research investigating the dynamic versus invariant processing pathways involves facial expressions of emotion and identity. As we described in the Introduction, such a limited scope makes it difficult to determine whether any observed differences in expression and identity processing are necessarily representative of the split between dynamic- and invariant-feature processing pathways. However, using gender instead of identity is not without limitations. Previous research suggests that male faces may be more prone to the perception of anger, which in the current experiment could give the appearance of greater sensitivity to anger in the male face. To ensure that these results do not reflect unintended signs of anger in the neutral male faces, we had an experienced FACS (Facial Action Coding System; Ekman, Friesen, & Hager, 2002) coder measure the action units (AUs) present in the neutral male and female faces to verify that they were physically consistent with neutral. Results showed that the male neutral face displayed a low-intensity activation of AU 7, an AU typically associated with anger. In contrast, the female neutral face may have contained a slightly raised eyebrow, an action typically seen in expressions of fear or surprise. Yet neither of these subtle activations was strong enough to qualify the face as displaying an emotion other than neutral. Thus, while the slight expression of an AU associated with anger may have increased sensitivity to anger in the male face, we believe it is unlikely to entirely account for our adaptation results. However, it is possible that for social and/or evolutionary reasons, we are more sensitive to anger in a male face than a female face (discussed in Aguado, García-Gutierrez, & Serrano-Pedraza, 2009; Becker, Kenrick, Neuberg, Blackwell, & Smith, 2007; Hess, Adams, Grammer, & Kleck, 2009), which would be consistent with the joint encoding of expression (“Angry”) and gender (“Male”). In either case, the fact that our current findings replicate those of previous identity-based research supports the notion that our results reflect the invariant processing pathway.
In sum, our results suggest a partial overlap in the processing mechanisms supporting facial expression of emotion and gender processing. We found evidence of a mild impairment due to contrast negation during the perceptual encoding of both dynamic (i.e., expression) and invariant (i.e., gender) facial information. Moreover, we observed a selective effect of contrast negation on the mechanisms underlying gender recognition and decision making, but not on those underlying recognition of facial expressions of emotion. Taken together, these results suggest that the dynamic and invariant pathways are largely joined during perceptual encoding and then likely separate into two distinct processing streams for semantic processing and decision making.
Acknowledgments
We wish to thank Shichuan Du for lending us her FACS coding skills. Part of the present research was presented at the Vision Sciences Society 10th and 11th Annual Meetings in 2010 and 2011. This work was supported by a NARSAD Young Investigator Award to MM. Development of the MacBrain Face Stimulus Set was overseen by Nim Tottenham and supported by the John D. and Catherine T. MacArthur Foundation Research Network on Early Experience and Brain Development. Please contact Nim Tottenham at tott0006@tc.umn.edu for more information concerning the stimulus set. 
Corresponding author: Ming Meng. 
Email: ming.meng@dartmouth.edu. 
Address: Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH. 
References
Adolphs R. (2008). Fear, faces, and the human amygdala. Current Opinion in Neurobiology, 18 (2), 166–172, doi:10.1016/j.conb.2008.06.006.
Aguado L. García-Gutierrez A. Serrano-Pedraza I. (2009). Symmetrical interaction of sex and expression in face classification tasks. Perception & Psychophysics, 71 (1), 9–25.
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Washington, DC: Author.
Andrews T. J. Ewbank M. P. (2004). Distinct representations for facial identity and changeable aspects of faces in the human temporal lobe. NeuroImage, 23 (3), 905–913, doi:10.1016/j.neuroimage.2004.07.060.
Becker D. V. Kenrick D. T. Neuberg S. L. Blackwell K. C. Smith D. M. (2007). The confounded nature of angry men and happy women. Journal of Personality and Social Psychology, 92 (2), 179–190.
Brainard D. H. (1997). The psychophysics toolbox. Spatial Vision, 10, 433–436.
Bruce V. Langton S. (1994). The use of pigmentation and shading information in recognising the sex and identities of faces. Perception, 23 (7), 803–822.
Bruce V. Young A. (1986). Understanding face recognition. British Journal of Psychology, 77 (Pt 3), 305–327.
Calder A. J. Burton A. M. Miller P. Young A. W. Akamatsu S. (2001). A principal component analysis of facial expressions. Vision Research, 41 (9), 1179–1208.
Calder A. J. Jansen J. (2005). Configural coding of facial expressions: The impact of inversion and photographic negative. Visual Cognition, 12 (3), 495–518.
Calder A. J. Young A. W. (2005). Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience, 6, 641–651.
Crookes K. McKone E. (2009). Early maturity of face recognition: No childhood development of holistic processing, novel face encoding, or face-space. Cognition, 111 (2), 219–247.
Duchaine B. Germine L. Nakayama K. (2007). Family resemblance: Ten family members with prosopagnosia and within-class object agnosia. Cognitive Neuropsychology, 24 (4), 419–430, doi:10.1080/02643290701380491.
Duchaine B. Murray H. Turner M. White S. Garrido L. (2009). Normal social cognition in developmental prosopagnosia. Cognitive Neuropsychology, 26 (7), 620–634, doi:10.1080/02643291003616145.
Duchaine B. Parker H. Nakayama K. (2003). Normal recognition of emotion in a prosopagnosic. Perception, 32 (7), 827–838.
Dupuis-Roy N. Fortin I. Fiset D. Gosselin F. (2009). Uncovering gender discrimination cues in a realistic setting. Journal of Vision, 9 (2): 10, 1–18, http://www.journalofvision.org/content/9/2/10, doi:10.1167/9.2.10.
Ekman P. Friesen W. V. Hager J. C. (2002). Facial action coding system: The manual. Salt Lake City, UT: A Human Face.
Ellamil M. Susskind J. M. Anderson A. K. (2008). Examinations of identity invariance in facial expression adaptation. Cognitive, Affective & Behavioral Neuroscience, 8 (3), 273–281.
Farah M. J. Wilson K. D. Drain M. Tanaka J. N. (1998). What is “special” about face perception? Psychological Review, 105 (3), 482–498.
Fox C. J. Barton J. J. (2007). What is adapted in face adaptation? The neural representations of expression in the human visual system. Brain Research, 1127 (1), 80–89, doi:10.1016/j.brainres.2006.09.104.
Frieze I. H. Olson J. E. Russell J. (1991). Attractiveness and income for men and women in management. Journal of Applied Social Psychology, 21 (13), 1039–1057.
Galper R. E. (1970). Recognition of faces in photographic negative. Psychonomic Science, 19, 207–208.
Galster M. Kahana M. J. Wilson H. R. Sekuler R. (2009). Identity modulates short-term memory for facial emotion. Cognitive, Affective & Behavioral Neuroscience, 9 (4), 412–426, doi:10.3758/CABN.9.4.412.
Ganel T. Goshen-Gottstein Y. (2004). Effects of familiarity on the perceptual integrality of the identity and expression of faces: The parallel-route hypothesis revisited. Journal of Experimental Psychology: Human Perception and Performance, 30 (3), 583–597, doi:10.1037/0096-1523.30.3.583.
Gilad S. Meng M. Sinha P. (2009). Role of ordinal contrast relationships in face encoding. Proceedings of the National Academy of Sciences, USA, 106 (13), 5353–5358, doi:10.1073/pnas.0812396106.
Gobbini M. I. Haxby J. V. (2007). Neural systems for recognition of familiar faces. Neuropsychologia, 45 (1), 32–41, doi:10.1016/j.neuropsychologia.2006.04.015.
Goshen-Gottstein Y. Ganel T. (2000). Repetition priming for familiar and unfamiliar faces in a sex-judgment task: Evidence for a common route for the processing of sex and identity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26 (5), 1198–1214.
Graham N. V. S. (1989). Visual pattern analyzers. Oxford, UK: Oxford University Press.
Grill-Spector K. Knouf N. Kanwisher N. (2004). The fusiform face area subserves face perception, not generic within-category identification. Nature Neuroscience, 7 (5), 555–562, doi:10.1038/nn1224.
Harris A. Aguirre G. K. (2010). Neural tuning for face wholes and parts in human fusiform gyrus revealed by fMRI adaptation. Journal of Neurophysiology, 104 (1), 336–345, doi:10.1152/jn.00626.2009.
Haxby J. V. Hoffman E. A. Gobbini M. I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4 (6), 223–233.
Hess U. Adams R. B. Grammer K. Kleck R. E. (2009). Face gender and emotion expression: Are angry women more like men? Journal of Vision, 9 (12): 19, 1–8, http://www.journalofvision.org/content/9/12/19, doi:10.1167/9.12.19.
Hole G. J. George P. A. Dunsmore V. (1999). Evidence for holistic processing for faces viewed as photographic negatives. Perception, 28, 341–359.
Humphreys K. Avidan G. Behrmann M. (2007). A detailed investigation of facial expression processing in congenital prosopagnosia as compared to acquired prosopagnosia. Experimental Brain Research, 176 (2), 356–373, doi:10.1007/s00221-006-0621-5.
Jiang F. Blanz V. O'Toole A. J. (2006). Probing the visual representation of faces with adaptation: A view from the other side of the mean. Psychological Science, 17 (6), 493–500.
Johnson M. H. (2005). Subcortical face processing. Nature Reviews Neuroscience, 6 (10), 766–774.
Kay K. N. Winawer J. Rokem A. Mezer A. Wandell B. A. (2013). A two-stage cascade model of BOLD responses in human visual cortex. PLoS Computational Biology, 9 (5), e1003079, doi:10.1371/journal.pcbi.1003079.
Kemp R. McManus C. Pigott T. (1990). Sensitivity to the displacement of facial features in negative and inverted images. Perception, 19 (4), 531–543.
Leopold D. A. O'Toole A. J. Vetter T. Blanz V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4 (1), 89–94.
Leopold D. A. Rhodes G. Mueller K. M. Jeffery L. (2005). The dynamics of visual adaptation to faces. Proceedings of the Royal Society B: Biological Sciences, 272 (1566), 897–904.
Li J. Liu J. Liang J. Zhang H. Zhao J. Rieth C. A. Lee K. (2010). Effective connectivities of cortical regions for top-down face processing: A dynamic causal modeling study. Brain Research, 1340, 40–51, doi:10.1016/j.brainres.2010.04.044.
Liu J. Harris A. Kanwisher N. (2010). Perception of face parts and face configurations: An fMRI study. Journal of Cognitive Neuroscience, 22 (1), 203–211, doi:10.1162/jocn.2009.21203.
McGurk H. MacDonald J. (1976). Hearing lips and seeing voices. Nature, 264 (5588), 746–748.
McKone E. Crookes K. Jeffery L. Dilks D. D. (2012). A critical review of the development of face recognition: Experience is less important than previously believed. Cognitive Neuropsychology, 29 (1–2), 174–212.
McKone E. Crookes K. Kanwisher N. (2009). The cognitive and neural development of face recognition in humans. The Cognitive Neurosciences, 4, 467–482.
McKone E. Yovel G. (2009). Why does picture-plane inversion sometimes dissociate perception of features and spacing in faces, and sometimes not? Toward a new theory of holistic processing. Psychonomic Bulletin & Review, 16 (5), 778–797, doi:10.3758/PBR.16.5.778.
Mechelli A. Price C. J. Friston K. J. Ishai A. (2004). Where bottom-up meets top-down: Neuronal interactions during perception and imagery. Cerebral Cortex, 14 (11), 1256–1265, doi:10.1093/cercor/bhh087.
Nederhouser M. Yue X. Mangini M. C. Biederman I. (2007). The deleterious effect of contrast reversal on recognition is unique to faces, not objects. Vision Research, 47 (16), 2134–2142, doi:10.1016/j.visres.2007.04.007.
Nestor A. Tarr M. J. (2008). The segmental structure of faces and its use in gender recognition. Journal of Vision, 8 (7): 7, 1–12, http://www.journalofvision.org/content/8/7/7, doi:10.1167/8.7.7.
Ng M. Ciaramitaro V. M. Anstis S. Boynton G. M. Fine I. (2006). Selectivity for the configural cues that identify the gender, ethnicity, and identity of faces in human cortex. Proceedings of the National Academy of Sciences, USA, 103, 19552–19557.
Ohayon S. Freiwald W. A. Tsao D. Y. (2012). What makes a cell face selective? The importance of contrast. Neuron, 74 (3), 567–581, doi:10.1016/j.neuron.2012.03.024.
Oruç I. Barton J. J. (2011). Adaptation improves discrimination of face identity. Proceedings of the Royal Society B: Biological Sciences, 278 (1718), 2591–2597.
Palermo R. Willis M. L. Rivolta D. McKone E. Wilson C. E. Calder A. J. (2011). Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia. Neuropsychologia, 49 (5), 1226–1235, doi:10.1016/j.neuropsychologia.2011.02.021.
Pallett P. M. Cohen S. J. Dobkins K. R. (2013). Face and object discrimination in autism, and relationship to IQ and age. Journal of Autism and Developmental Disorders, 1–16 [E-pub ahead of print].
Pallett P. M. Dobkins K. R. (2013). Development of face discrimination abilities, and relationship to magnocellular pathway development, between childhood and adulthood. Visual Neuroscience, 1–12.
Pelli D. G. (1997). The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10, 437–442.
Pessoa L. Adolphs R. (2010). Emotion processing and the amygdala: From a “low road” to “many roads” of evaluating biological significance. Nature Reviews Neuroscience, 11 (11), 773–783, doi:10.1038/nrn2920.
Reilly J. S. McIntire M. L. Bellugi U. (1990). Faces: The relationship between language and affect. In Volterra V. Erting C. J. (Eds.), From gesture to language in hearing and deaf children (pp. 128–141). Berlin: Springer Berlin Heidelberg.
Rhodes G. Jeffery L. Watson T. L. Clifford C. W. G. Nakayama K. (2003). Fitting the mind to the world: Face adaptation and attractiveness aftereffects. Psychological Science, 14 (6), 558–566.
Rossion B. (2002). Is sex categorization from faces really parallel to face recognition? Visual Cognition, 9 (8), 1003–1020.
Rossion B. (2008). Constraining the cortical face network by neuroimaging studies of acquired prosopagnosia. NeuroImage, 40 (2), 423–426, doi:10.1016/j.neuroimage.2007.10.047.
Rotshtein P. Henson R. N. Treves A. Driver J. Dolan R. J. (2005). Morphing Marilyn into Maggie dissociates physical and identity face representations in the brain. Nature Neuroscience, 8 (1), 107–113, doi:10.1038/nn1370.
Russell R. (2003). Sex, beauty, and the relative luminance of facial features. Perception, 32 (9), 1093–1107.
Santos I. M. Young A. W. (2008). Effects of inversion and negation on social inferences from faces. Perception, 37 (7), 1061–1078.
Schiltz C. Dricot L. Goebel R. Rossion B. (2010). Holistic perception of individual faces in the right middle fusiform gyrus as evidenced by the composite face illusion. Journal of Vision, 10 (2): 25, 1–16, http://www.journalofvision.org/content/10/2/25, doi:10.1167/10.2.25.
Schweinberger S. R. Soukup G. R. (1998). Asymmetric relationships among perceptions of facial identity, emotion, and facial speech. Journal of Experimental Psychology: Human Perception and Performance, 24 (6), 1748–1765.
Sergent J. (1984). An investigation into component and configural processes underlying face perception. British Journal of Psychology, 75 (2), 221–242.
Sinha P. (2002). Qualitative representations for recognition. In Bülthoff H. H. Wallraven C. Lee S.-W. Poggio T. A. (Eds.), Biologically motivated computer vision (pp. 249–262). Berlin: Springer Berlin Heidelberg.
Taylor M. M. Creelman C. D. (1967). PEST: Efficiency estimates on probability functions. Journal of the Acoustical Society of America, 41, 782–787.
Tottenham N. Tanaka J. W. Leon A. C. McCarry T. Nurse M. Hare T. A. Nelson C. (2009). The NimStim set of facial expressions: Judgments from untrained research participants. Psychiatry Research, 168 (3), 242–249, doi:10.1016/j.psychres.2008.05.006.
Watson T. L. Clifford C. W. G. (2003). Pulling faces: An investigation of the face-distortion aftereffect. Perception, 32, 1109–1116.
Webster M. A. (1996). Human colour perception and its adaptation. Network: Computation in Neural Systems, 7, 587–634.
Webster M. A. Kaping D. Mizokami Y. Duhamel P. (2004). Adaptation to natural facial categories. Nature, 428 (6982), 557–561.
Webster M. A. MacLeod D. I. (2011). Visual adaptation and face perception. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 366 (1571), 1702–1725, doi:10.1098/rstb.2010.0360.
Webster M. A. MacLin O. H. (1999). Figural aftereffects in the perception of faces. Psychonomic Bulletin & Review, 6 (4), 647–653.
Webster M. A. Mollon J. D. (1991). Changes in colour appearance following post-receptoral adaptation. Nature, 349 (6306), 235–238, doi:10.1038/349235a0.
Webster M. A. Werner J. S. Field D. J. (2005). Adaptation and the phenomenology of perception. In Clifford C. W. G. Rhodes G. (Eds.), Fitting the mind to the world: Adaptation and aftereffects in high-level vision (pp. 241–277). Oxford, UK: Oxford University Press.
Westheimer G. Gee A. (2002). Orthogonal adaptation and orientation discrimination. Vision Research, 42 (20), 2339–2343, http://dx.doi.org/10.1016/S0042-6989(02)00192-X.
White M. (2001). Effect of photographic negation on matching the expressions and identities of faces. Perception, 30 (8), 969–981.
White M. Li J. (2006). Matching faces and expressions in pixelated and blurred photos. American Journal of Psychology, 119 (1), 21–28.
Wichmann F. A. Hill N. J. (2001). The psychometric function: I. Fitting, sampling and goodness-of-fit. Perception & Psychophysics, 63 (8), 1293–1313.
Winston J. S. Henson R. N. Fine-Goulden M. R. Dolan R. J. (2004). fMRI-adaptation reveals dissociable neural representations of identity and expression in face perception. Journal of Neurophysiology, 92 (3), 1830–1839.
Young A. W. Hellawell D. Hay D. C. (1987). Configurational information in face perception. Perception, 16 (6), 747–759.
Footnotes
1. Since participants were incapable of discriminating differences between contrast-negated male and female faces, gender was not included in this analysis (see Data analysis, previously, for details).
2. This was done for all but one participant, whose easy-trial data were mistakenly not saved.
Figure 1. Example trials for Experiment 1. Participants indicated which face better fit the category label, that is, which face appeared happier (left) or which face appeared more masculine (right). The top left shows a trial with 70% happy (left) and neutral (right), and the upper right shows 70% male and 50% male/50% female (i.e., gender neutral). The example trials for contrast-negated faces (bottom) use the same morphs but are shown on opposite sides of the display (left shows neutral, right shows 70% happy, etc.).
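For illustration, a “70% happy” stimulus of the kind described in the caption above can be approximated by a weighted blend of two aligned images. The sketch below is a simplified pixel-wise version with hypothetical arrays; published face-morph continua, including those used here, typically also warp facial shape rather than blending pixels alone.

import numpy as np

def linear_morph(face_a, face_b, weight_b):
    # Pixel-wise blend of two aligned grayscale faces;
    # weight_b = 0.7 yields a "70% face_b" morph.
    return (1.0 - weight_b) * face_a + weight_b * face_b

# Hypothetical aligned grayscale images with values in [0, 255].
neutral = np.full((256, 256), 128.0)
happy = np.full((256, 256), 140.0)

morph_70_happy = linear_morph(neutral, happy, 0.7)  # 70% happy / 30% neutral
midpoint = linear_morph(neutral, happy, 0.5)        # 50/50 blend; with male and
                                                    # female endpoints this would
                                                    # be the gender-neutral morph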
Figure 2. Gender discrimination across all participants (N = 22) in Experiment 1. The top panel displays accuracy with contrast negation, the middle panel shows performance with the original faces, and the bottom panel displays inverted-face performance.
Figure 3. Example trial for Experiments 2A, 2B, and 3. In each trial, participants were asked to make a yes or no judgment based on facial expression of emotion (“Angry”) or gender (“Male”). The physical distance between the adaptation faces in this figure is exaggerated for the purpose of demonstrating movement. The actual shift in location was 0.16° (5 pixels) up and to the left of center and 0.16° (5 pixels) down and to the right of center (i.e., 0.23° diagonally).
Figure 4. The top panel displays the size of the aftereffects following adaptation to a normal or contrast-negated face for judgments of facial expression of emotion and gender (i.e., the shift in the point of subjective equality) in Experiment 2A. The bottom panel displays the effect of normal and contrast-negated face adaptation on thresholds for discrimination of facial expression of emotion and gender (i.e., the change in the slope of the psychometric function). N = 13; error bars show ± standard error.
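For readers less familiar with these measures, the sketch below (illustrative only; not the analysis code used in this study, and using hypothetical response proportions) fits a cumulative-Gaussian psychometric function to yes/no data and reads out the aftereffect as a shift in the point of subjective equality (PSE) and the threshold change as a change in the slope parameter.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    # Cumulative-Gaussian psychometric function: P("male" response)
    # as a function of morph level; mu is the PSE.
    return norm.cdf(x, loc=mu, scale=sigma)

# Hypothetical data: morph level (0 = female, 1 = male) and the proportion
# of "male" responses before and after adapting to a male face.
levels = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
p_baseline = np.array([0.02, 0.10, 0.35, 0.70, 0.93, 0.99])
p_adapted = np.array([0.01, 0.05, 0.20, 0.50, 0.85, 0.97])

(mu_b, sigma_b), _ = curve_fit(psychometric, levels, p_baseline, p0=[0.5, 0.2])
(mu_a, sigma_a), _ = curve_fit(psychometric, levels, p_adapted, p0=[0.5, 0.2])

# Aftereffect size = shift in the PSE; a larger sigma means a shallower
# slope and thus a higher discrimination threshold.
print(f"PSE shift: {mu_a - mu_b:+.3f}")
print(f"Change in sigma (inverse slope): {sigma_a - sigma_b:+.3f}")

In this illustration, adaptation to a male face means more “male” signal is needed before the test face is judged male, so the PSE shifts toward the adaptor, the conventional signature of a face aftereffect (e.g., Webster et al., 2004).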
Figure 5. The difference in aftereffect size (i.e., the shift in the point of subjective equality) following upright-face versus inverted-face adaptation in Experiment 3. N = 8; error bars show ± standard error.
Table 1. Experiment 1 easy-trial accuracy.

          Original          Inverted          Contrast negated
Happy     98.1% (± 1.1%)    96.7% (± 1.4%)    98.2% (± 1.2%)
Angry     99.0% (± 1.0%)    97.1% (± 1.4%)    98.8% (± 1.1%)
Gender    95.9% (± 1.2%)    80.4% (± 2.4%)    63.2% (± 2.3%)
Table 2. Experiment 2A easy-trial accuracy.

              Gender and expression neutral           Angry male
              Normal           Contrast negated       Normal           Contrast negated
Expression    100% (± 0%)      100% (± 0%)            95.3% (± 3.5%)   98.6% (± 1.4%)
Gender        98.8% (± 1.2%)   93.4% (± 4.3%)         65.8% (± 9.2%)   85.8% (± 5.0%)
Table 3. Experiment 2B easy-trial accuracy.

              Gender and expression neutral           Angry male
              Normal           Contrast negated       Normal           Contrast negated
Expression    100% (± 0%)      100% (± 0%)            98.0% (± 2.0%)   98.0% (± 2.0%)
Gender        98.0% (± 2.0%)   84.3% (± 6.6%)         86.8% (± 8.2%)   82.0% (± 7.7%)