Abstract
Morphing between face images expressing different emotions produces stimuli that are ambiguous in terms of category membership. Within a categorical perception framework, the status of these images is questionable: either they cannot be categorized as either parent emotion, or their ambiguity means they are assigned to one parent emotion or the other with low confidence. However, these images may also be unique in that they signal emotional states distinct from the parent images on which they are based. In the present study, we examined this possibility by allowing participants to categorize ambiguous facial emotion morphs in a free response task. Our hypothesis was that the distribution of labels assigned to ambiguous faces in this setting would be multimodal and distinct from the labels assigned to unambiguous parent images. To test this hypothesis, we presented 74 participants with ambiguous morphs made by blending pairs of different emotion expressions, and a further 71 participants with the unambiguous parent images. We analyzed participants' free responses using natural language processing methods: participants' labels for each image were converted into WordNet synset representations based on a large-scale corpus and compared using a pairwise semantic dissimilarity metric. Pairwise dissimilarities between words assigned to the same face were projected into a three-dimensional space so that clustering could be applied to participants' labels. This analysis revealed that significantly more clusters were obtained from ambiguous images and that approximately twice as many unique labels were used for ambiguous morphs. Morphed emotional faces are therefore distinct from the emotion categories represented by their parents and are perceived more variably, yet they nonetheless yield clear modes in unconstrained responses.
Our results signal the need for a broader view of basic emotion categories and a reconsideration of the role of morphed images in emotion perception research.
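The analysis pipeline summarized above (free-response labels, pairwise semantic dissimilarities, then clustering) can be sketched in miniature. This is a hypothetical illustration only: the labels, dissimilarity values, and threshold below are invented placeholders, not the paper's WordNet-derived scores, and the single-linkage merge rule stands in for the clustering procedure used in the study.

```python
def cluster_labels(labels, dissim, threshold=0.5):
    """Toy single-linkage agglomerative clustering: repeatedly merge any
    two clusters containing a label pair whose dissimilarity is below the
    threshold. `dissim` maps frozenset({a, b}) -> dissimilarity in [0, 1]."""
    clusters = [{lab} for lab in labels]  # start with singleton clusters
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(dissim[frozenset({a, b})] < threshold
                       for a in clusters[i] for b in clusters[j] if a != b):
                    clusters[i] |= clusters.pop(j)  # single-linkage merge
                    merged = True
                    break
            if merged:
                break
    return clusters

# Invented example values standing in for WordNet-based dissimilarities.
labels = ["sad", "unhappy", "surprised"]
dissim = {
    frozenset({"sad", "unhappy"}): 0.1,       # near-synonyms: low dissimilarity
    frozenset({"sad", "surprised"}): 0.9,
    frozenset({"unhappy", "surprised"}): 0.85,
}
print(cluster_labels(labels, dissim))  # → [{'sad', 'unhappy'}, {'surprised'}]
```

Under this scheme, a unimodal response distribution (parent images) collapses into few clusters, while a multimodal one (ambiguous morphs) yields more.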