Abstract
Faces are thought to be processed holistically: all facial features appear to be processed as a whole rather than as individual features. At a glance, observers can easily extract the identity and emotional expression of a face. To what extent are identity and expression processing integrated? Previous studies of holistic processing of facial identity often used only neutral faces; thus, it remains unclear to what extent facial expressions influence identification judgments. Moreover, observers appear to emphasize different facial features when recognizing different facial expressions, such as processing both the top and bottom face halves for happy faces but focusing on the top halves for angry faces. Here we asked whether holistic processing of facial identity is modulated by happy or angry facial expressions. In a composite paradigm, participants (N=29) performed identity matching on either the top or bottom halves of each pair of sequentially presented composites and were asked to ignore the task-irrelevant halves. The face halves were either aligned or misaligned, and the pairing of identities between the face halves was either congruent or incongruent. The composites showed happy, angry, or neutral expressions; the expressions on the top and bottom halves of each composite were always congruent, and the expression conditions were randomized. We found significant holistic processing in all expression conditions. More importantly, the holistic effects were modulated by expression, as indicated by a significant three-way interaction of expression, alignment, and congruency. Critically, holistic processing was stronger for happy than for angry faces; no significant difference in the magnitude of holistic processing was found in the other comparisons. The results suggest that identity judgments are influenced by the different processing strategies for different expressions, even when emotional information is task-irrelevant.
This provides evidence that holistic processing integrates the identity and emotional information of faces and is dynamic across trials.
Meeting abstract presented at VSS 2017