Frank E. Pollick, Helena Paterson, Pascal Mamassian; Combining faces and movements to recognize affect. Journal of Vision 2004;4(8):232. doi: https://doi.org/10.1167/4.8.232.
Using a simple linear model of cue combination, we examined how information from faces and human movement combines to produce an impression of affect. Face stimuli depicting profile views of sad, happy, angry and neutral expressions were obtained from a database of synthetic faces generated by Poser software. Preliminary studies were performed to obtain displays of facial affect with high salience (recognition accuracy above 90%) and low salience (recognition accuracy of around 50%, with the remainder of responses being predominantly neutral). Movement stimuli were obtained via human motion capture of an actor depicting sad, happy, angry and neutral knocking actions, and were also animated using Poser software. Single cue conditions were defined both as neutral faces with affective movements and as affective faces with neutral movements. In the cue combination condition, face and movement information were congruent. Participants viewed displays of the single and multiple cue conditions at the low and high facial cue salience levels and were asked to categorize each display as sad, angry or happy, and to rate the strength of the perceived affect. These responses were used to estimate the weights for facial and movement information for each of the three affects. Results indicated that movement information was consistently weighted more heavily. Unexpectedly, this revealed that movement information could both boost and diminish the effectiveness of facial information in the cue combination condition.
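The weight-estimation step of a linear cue-combination analysis can be sketched as follows. This is a hypothetical illustration only: the abstract does not specify the model's exact form, so the sketch assumes the combined-cue rating is modeled as a weighted sum of the single-cue ratings, and all numbers are invented for demonstration, not the study's data.

```python
import numpy as np

# Assumed linear cue-combination model (not the authors' exact formulation):
#   R_comb = w_face * R_face + w_move * R_move
# where R_face and R_move are mean affect-strength ratings from the two
# single-cue conditions and R_comb from the congruent condition.

# Hypothetical mean ratings, one value per affect (sad, happy, angry).
r_face = np.array([2.0, 3.0, 2.5])    # affective face, neutral movement
r_move = np.array([4.0, 4.5, 4.2])    # neutral face, affective movement
r_comb = np.array([4.6, 5.4, 4.95])   # congruent face and movement

# Estimate (w_face, w_move) by least squares: find the weights that best
# predict the combined-cue ratings from the single-cue ratings.
X = np.column_stack([r_face, r_move])
weights, *_ = np.linalg.lstsq(X, r_comb, rcond=None)
w_face, w_move = weights

print(f"w_face = {w_face:.2f}, w_move = {w_move:.2f}")
```

With these made-up ratings the recovered movement weight exceeds the face weight, mirroring the qualitative pattern the abstract reports (movement information weighted more heavily).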