Abstract
Perception of facial displays of emotion is influenced by context: error rates and reaction times increase when the emotion displayed by the face (e.g., fear) is incongruent with the emotion displayed by the body (e.g., anger) (Meeren et al., 2005). Two models of emotion perception invoke different mechanisms to explain these context effects. Although both models predict that congruency effects will be maximal when emotions are similar, they do not always agree on which emotions are most similar. To compare the predictive validity of the two models, we measured context effects for three emotions for which they make different predictions: sadness, anger, and fear. Whereas the Dimensional model predicts the largest effects when fear and anger are paired, because both are negatively valenced and high in arousal, the Emotional Seed model predicts the largest effects whenever fear or anger is paired with sadness, because sad faces are more physically similar to anger or fear faces than anger and fear faces are to each other (Susskind et al., 2007). Adults categorized each facial expression when it was presented on a congruent or incongruent body; they were instructed to ignore the body. Stimuli were presented for 600 ms in Experiment 1 (n = 24) and for an unlimited time in Experiment 2 (n = 17 to date). Accuracy, response times, and proportion of errors were analyzed. In Experiment 1, congruency effects were pervasive but strongest when sad faces were presented on fear bodies (p < .01), followed by fear faces presented on sad bodies (p < .05). Congruency effects were dampened in Experiment 2 but were still strongest when sad faces were paired with fear bodies (p < .03). Collectively, these results question the predictive validity of both models and suggest that fear postures may hold a special status in emotion perception.
Meeting abstract presented at VSS 2012