Journal of Vision
December 2022, Volume 22, Issue 14
Open Access
Vision Sciences Society Annual Meeting Abstract  |   December 2022
Dynamic binding of faces and bodies when recognizing emotional expression
Author Affiliations & Notes
  • Maeve M. Sargeant
    National Institute of Mental Health
  • Kunjan Rana
    National Institute of Mental Health
  • Jessica Taubert
    National Institute of Mental Health
    University of Queensland
  • Leslie G. Ungerleider
    National Institute of Mental Health
  • Elisha P. Merriam
    National Institute of Mental Health
  • Footnotes
    Acknowledgements  Funding: ZIAMH002966
Journal of Vision December 2022, Vol.22, 4244. doi:https://doi.org/10.1167/jov.22.14.4244

      Maeve M. Sargeant, Kunjan Rana, Jessica Taubert, Leslie G. Ungerleider, Elisha P. Merriam; Dynamic binding of faces and bodies when recognizing emotional expression. Journal of Vision 2022;22(14):4244. https://doi.org/10.1167/jov.22.14.4244.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Evaluating emotional expressions is an integral part of social interaction. While facial expression, body posture, and biological movement are all thought to convey emotional signals, the mechanisms by which these different sources of information are combined into an emotional percept are poorly understood. We conducted a behavioral experiment in which participants evaluated the emotional expression of composite face/body images created by combining independent images of faces and bodies. The face and body combinations were either emotionally congruent, with matching expressions (e.g., fearful body, fearful face), or emotionally incongruent, with mismatched expressions (e.g., fearful body, angry face). To select images for each emotion category (angry, fearful, and neutral), we ran an independent rating experiment on mTurk. Images that were consistently rated as angry or fearful, and had high rating confidence scores, were used as emotional images in the main experiment; images rated with the least certainty toward either angry or fearful were used as neutral images. Each trial began when participants placed the mouse cursor at a fixed point at the bottom of the screen. Participants fixated a central cross for 500 ms, after which a composite image appeared for 2000 ms. Participants then made a mouse movement indicating whether they rated the image as fearful or angry. We predicted that the separate sources of information (i.e., the face and the body) would contribute to the expression judgment at different points in time. By comparing the deflection of the average mouse trace relative to a straight-line trajectory, we discovered an early bias that differed from the eventual judgment on that trial. This finding demonstrates the utility of dynamic mouse position for making inferences about recognition judgments and supports the hypothesis that face and body expressions are dynamically weighted when participants evaluate emotional expressions.
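The deflection measure described in the abstract — how far a mouse trace deviates from the straight line between its start and end points — is a standard mouse-tracking statistic. A minimal sketch in pure Python is given below; it is an illustrative reconstruction, not the authors' analysis code, and the function name and signature are assumptions:

```python
import math

def max_deviation(trace):
    """Maximum perpendicular distance of a mouse trace from the
    straight line joining its first and last samples.

    trace: list of (x, y) cursor positions recorded over the trial.
    """
    (x0, y0), (x1, y1) = trace[0], trace[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0.0:
        return 0.0
    # The 2-D cross product of (sample - start) with the start-end
    # vector gives twice the triangle area; dividing by the line
    # length yields the perpendicular distance to that line.
    return max(abs((x - x0) * dy - (y - y0) * dx) / length
               for x, y in trace)
```

A perfectly straight trajectory gives a deviation of zero, while a trace that bends toward the alternative response option before settling on the final choice yields a positive deviation, which is what makes the measure sensitive to an early bias that differs from the eventual judgment.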
