Abstract
A network of face areas, defined by their greater activation to faces than to non-face objects, has been reported in the cortices of both macaques and humans, but the functional roles of these areas remain uncertain. We used fMRI-adaptation (fMRIa) to investigate the representation of viewpoint, expression, and identity of faces in the fusiform face area (FFA), the occipital face area (OFA), and the superior temporal sulcus (STS). In each trial, subjects viewed a sequence of two computer-generated faces and judged whether they depicted the same person (on 1/5 of the trials the faces depicted different, but highly similar, individuals). On all trials, the second face was translated in an unpredictable direction, even when the two faces were otherwise identical. When the two images depicted the same person, they could differ in viewpoint (∼15° rotation in depth) and/or in expression (e.g., from happy to angry). Critically, the physical similarities of a viewpoint change and an expression change were equated for each face by the Gabor-jet metric, a measure that almost perfectly predicts the effect of image similarity on discrimination performance. We found that a change of expression, but not a change of viewpoint, produced a significant release from adaptation (compared with the identical, translated face) in FFA. In addition, a change of identity produced an even stronger release. In contrast, OFA was not sensitive to either expression or viewpoint changes, but did show a release from adaptation to an identity change. These results are consistent with Pitcher et al.'s (2009) finding that TMS applied to OFA disrupts face identification, but are not consistent with a model (Haxby et al., 2000) that assumes that FFA is insensitive to emotional expression.
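For readers unfamiliar with the Gabor-jet metric referenced above, the following is a minimal illustrative sketch of how such a similarity measure can be computed: Gabor responses at several scales and orientations are sampled at grid points over each image, their magnitudes ("jets") are concatenated, and the two resulting vectors are correlated. The kernel parameters, grid spacing, and use of Pearson correlation here are illustrative assumptions, not the exact implementation used in the study.

```python
import numpy as np


def gabor_kernel(size, wavelength, orientation, sigma):
    """Complex Gabor kernel: a sinusoidal carrier under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(orientation) + y * np.sin(orientation)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.exp(1j * 2.0 * np.pi * x_rot / wavelength)
    return envelope * carrier


def gabor_jet_vector(image, grid_step=16, kernel_size=31,
                     wavelengths=(4, 6, 8, 12, 16), n_orientations=8):
    """Concatenate jet magnitudes (one per scale x orientation) sampled on a grid.

    `image` is a 2-D grayscale array; all parameter values are hypothetical.
    """
    half = kernel_size // 2
    kernels = [gabor_kernel(kernel_size, w, o * np.pi / n_orientations, sigma=w)
               for w in wavelengths
               for o in range(n_orientations)]
    magnitudes = []
    for r in range(half, image.shape[0] - half, grid_step):
        for c in range(half, image.shape[1] - half, grid_step):
            patch = image[r - half:r + half + 1, c - half:c + half + 1]
            for kernel in kernels:
                # Magnitude of the complex filter response at this grid point.
                magnitudes.append(np.abs(np.sum(patch * kernel)))
    return np.asarray(magnitudes)


def gabor_jet_similarity(image_a, image_b):
    """Pearson correlation of the two jet vectors; higher = more physically similar."""
    va = gabor_jet_vector(image_a)
    vb = gabor_jet_vector(image_b)
    return np.corrcoef(va, vb)[0, 1]
```

In a design like the one described, such a measure would be used to select image pairs so that, for example, a viewpoint-change pair and an expression-change pair of the same face have matched similarity scores, ensuring that differential adaptation cannot be attributed to raw physical dissimilarity.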