Abstract
Extensive work in cognitive and affective neuroscience has shown an advantage in the perception of fearful faces over neutral faces. Much of this work has focused on the processing of a single face presented in isolation, a situation that rarely occurs in the real world. In this study we examined the cognitive and neural basis of the perception of multiple emotional faces. Specifically, we compared the perception of a single emotional face with that of multiple emotional faces, all of which depicted the same facial expression. The multiple faces were either duplicates of the same individual or faces of different individuals. We asked whether the presentation of multiple faces with the same facial expression alters emotional processing, and whether this redundancy effect is mediated by differences in facial identity. Behavioral experiments showed that facial expression discrimination was facilitated by the presence of multiple faces with the same expression, and that this redundancy gain was unaffected by differences in facial identity. However, the facilitation did not carry over to the processing of subsequent emotional faces. fMRI data showed that viewing multiple faces with the same expression increased activity in the fusiform face area (FFA). However, this increase was restricted to identical faces and was not found for faces with different identities. We conclude that the representation of the same type of facial expression derived from different tokens reflects perceptual summation rather than averaging.