Abstract
Facial dynamics communicate a considerable amount of social information. For example, the fine movements that create facial expressions convey a person’s emotional state, while other changes, such as adjustments of head orientation, signal the focus of a person’s attention. A prominent theory of face processing (Bruce & Young, 1986; Haxby, Hoffman, & Gobbini, 2000) posits that dynamic facial information, like expression, is processed primarily by the superior temporal sulcus (STS; Pitcher et al., 2011), while invariant aspects of a face, like identity, are processed primarily by the fusiform face area (FFA; Grill-Spector et al., 2004). While the role of the STS in processing facial expressions is well characterized, less is known about its sensitivity to other dynamic information, such as changes in head orientation. In a recent fMRI-adaptation study in rhesus macaques (Taubert et al., 2020), we found greater sensitivity to facial expression than to head orientation in the STS fundus face patches and the amygdala, while the reverse was true for the STS lateral face patches. In the current study, we used a similar fMRI-adaptation paradigm to examine whether a parallel dissociation between facial expression and head orientation processing exists in humans. While performing a fixation change task, participants viewed images of faces presented in four block types: 1) both the expression and orientation of the faces changed; 2) only expression changed; 3) only orientation changed; and 4) neither expression nor orientation changed. Analysis of the fMRI data revealed greater sensitivity to facial expression than to head orientation in the posterior STS and amygdala, while the FFA showed sensitivity to both expression and orientation. These initial results suggest a difference in how head orientation and expression are processed between human and non-human primates. Additional analyses will probe the exact role of these regions in the processing of these two types of facial dynamics.