Abstract
Evaluating emotional expressions is an integral part of social interaction. While facial expression, body posture, and biological motion are all thought to convey emotional signals, the mechanisms by which these different sources of information are combined into an emotional percept are poorly understood. We conducted a behavioral experiment in which participants evaluated the emotional expression of composite face/body images created by combining independent images of faces and bodies. The face and body combinations were either emotionally congruent, with matching expressions (e.g., fearful body, fearful face), or emotionally incongruent, with mismatched expressions (e.g., fearful body, angry face). To select images for each emotion category (angry, fearful, and neutral), we ran an independent rating experiment on mTurk. Images that were consistently rated as angry or fearful, with high rating confidence scores, served as the emotional images in the main experiment, and images rated with the least certainty as either angry or fearful served as the neutral images. Each trial began when participants placed the mouse cursor at a fixed point at the bottom of the screen. Participants then fixated a central cross for 500 ms, after which a composite image appeared for 2000 ms. Participants then made a mouse movement indicating whether they judged the image as fearful or angry. We predicted that the separate sources of information (i.e., the face and the body) would contribute to the expression judgment at different points in time. By comparing the deflection of the average mouse trajectory against a straight-line path, we found an early response bias that differed from the participant's eventual judgment. This finding demonstrates the utility of continuous mouse-position data for making inferences about recognition judgments and supports the hypothesis that face and body expressions are dynamically weighted as participants evaluate emotional expressions.
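The deflection measure described above can be made concrete with a short sketch. The Python code below is illustrative only, not the authors' analysis pipeline: it assumes each trial's cursor path is recorded as arrays of x and y samples, time-normalizes traces to a fixed number of steps so that trials of different durations can be averaged point by point (a common convention in mouse-tracking work), and computes the signed maximum perpendicular deviation of a trace from the straight line joining its start and end points. The function names and the 101-step normalization are assumptions for the example.

```python
import numpy as np

def time_normalize(x, y, n_steps=101):
    """Resample a trajectory to a fixed number of time steps so that
    traces of different durations can be averaged point by point."""
    t = np.linspace(0.0, 1.0, len(x))
    ti = np.linspace(0.0, 1.0, n_steps)
    return np.interp(ti, t, x), np.interp(ti, t, y)

def max_deviation(x, y):
    """Signed maximum perpendicular deviation of a trajectory from the
    straight line joining its first and last samples. The sign indicates
    which side of the line the cursor strayed toward, so an early bias
    toward the non-chosen response shows up as a deviation whose sign
    opposes the final endpoint."""
    start = np.array([x[0], y[0]])
    end = np.array([x[-1], y[-1]])
    line = end - start
    rel = np.column_stack([x, y]) - start
    # 2-D cross product / |line| gives the signed perpendicular distance
    signed = (line[0] * rel[:, 1] - line[1] * rel[:, 0]) / np.linalg.norm(line)
    return signed[np.argmax(np.abs(signed))]

# Hypothetical usage: `trials` is a list of (x, y) sample arrays for one
# condition (e.g., incongruent face/body pairs).
# norm = [time_normalize(np.asarray(x), np.asarray(y)) for x, y in trials]
# mean_x = np.mean([n[0] for n in norm], axis=0)
# mean_y = np.mean([n[1] for n in norm], axis=0)
# print(max_deviation(mean_x, mean_y))
```

Time-normalizing before averaging keeps every trial's samples aligned on a common 0-to-1 timeline, so the averaged trace reflects the shape of the movement rather than differences in response speed; the point of maximum deviation can then be compared across congruent and incongruent conditions.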