Faces are complex stimuli. Not only do they have complicated three-dimensional structures, but they also convey multiple types of perceptual information, including identity, gender, race, expression, and direction of gaze. Current behavioral and neuroanatomical models propose that the processing of these different types of information may occur in at least two streams (Bruce & Young, 1986; Haxby, Hoffman, & Gobbini, 2000). One stream is dedicated to the extraction of structural cues that support the perception of identity, gender, and race. Such properties are stable over time, and it is therefore hypothesized that these dimensions involve neural representations that are invariant to the dynamic elements of faces (Haxby et al., 2000). These dynamic elements may be processed by the other stream, as temporally varying information conveys key cues for the perception of expression, gaze direction, and visual speech (Haxby et al., 2000). The proposal that different anatomical structures process different types of information might lead to the prediction that the perception of facial identity and the perception of facial expression are independent. However, growing behavioral and anatomical evidence suggests that this is not the case and that the two may interact (Calder & Young, 2005; de Gelder, Frissen, Barton, & Hadjikhani, 2003; Fox & Barton, 2007; Ganel, Valyear, Goshen-Gottstein, & Goodale, 2005; Humphreys, Avidan, & Behrmann, 2007; Kaufmann & Schweinberger, 2004; Palermo & Rhodes, 2007; Stephan, Breen, & Caine, 2006; Winston, Henson, Fine-Goulden, & Dolan, 2004).