Vision Sciences Society Annual Meeting Abstract | August 2023
An Enhanced Dataset for Inferential Emotion Tracking in Humans and Machines
Author Affiliations
  • Ethan Shedd
    University of California, Berkeley
  • Zhihang Ren
    University of California, Berkeley
  • Jefferson Ortega
    University of California, Berkeley
  • Ananya Sharma
    University of California, Berkeley
  • Wish Wang
    University of California, Berkeley
  • Stella Yu
    University of California, Berkeley
    University of Michigan, Ann Arbor
  • David Whitney
    University of California, Berkeley
Journal of Vision August 2023, Vol.23, 5636. doi:https://doi.org/10.1167/jov.23.9.5636
Abstract

Emotion perception is fundamental to human experience, and it reveals both the sophistication and richness of vision. Although emotion perception depends in part on face processing, it also hinges critically on background scene context (e.g., Chen & Whitney, 2019, 2020, 2021). However, how the visual system uses background scene context to modulate and assign perceived emotion is poorly understood. AI models have the potential to be uniquely useful in this regard: they could shed light on how the human brain processes emotion. Yet current AI and machine learning approaches fail to incorporate or capture context information, and no existing AI training datasets focus on context information in emotion processing. Moreover, existing studies recruit too few annotators, likely introducing human biases into the annotations. Previous datasets are therefore severely limited. For AI to be useful in studying human perception of emotion, AI needs extensive psychophysical data for training, including data about the background context. Here, our goal was to define and create the kind of psychophysical data that AI models can use for training. In this study, we created a novel emotion tracking dataset of more than 100 video clips, which provide character and contextual information with ratings of both continuous arousal/valence and discrete emotion categories on a frame-by-frame basis. A large number of annotators were recruited to avoid idiosyncratic biases. Our psychophysically controlled stimuli improve the psychological validity of the data used in AI research, improve statistical power, and reduce the bias and artifacts that accompanied previous datasets used in machine learning. Further, we propose a baseline AI approach, using a transformer model, that achieves state-of-the-art performance on emotion recognition tasks. This model could help guide an understanding of how humans use contextual information to perceive emotion in natural scenes.
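
As an illustration of the frame-by-frame annotation format described above, the sketch below shows one hypothetical way such data could be organized and aggregated across many annotators. The field names (clip_id, valence, arousal, category) and the averaging/majority-vote step are illustrative assumptions, not the dataset's actual schema.

# Hypothetical sketch of frame-by-frame emotion annotations (not the actual dataset schema).
# Each annotator provides continuous valence/arousal and a discrete category per frame;
# combining ratings across many annotators reduces idiosyncratic bias, as the abstract notes.
from collections import Counter
from dataclasses import dataclass
from statistics import mean

@dataclass
class FrameRating:
    clip_id: str      # video clip identifier (assumed)
    frame: int        # frame index within the clip
    annotator: int    # annotator identifier
    valence: float    # continuous valence, e.g. in [-1, 1]
    arousal: float    # continuous arousal, e.g. in [-1, 1]
    category: str     # discrete emotion label, e.g. "happy", "fearful"

def aggregate_frame(ratings: list[FrameRating]) -> dict:
    """Combine several annotators' ratings for a single frame."""
    return {
        "valence": mean(r.valence for r in ratings),   # average continuous ratings
        "arousal": mean(r.arousal for r in ratings),
        "category": Counter(r.category for r in ratings).most_common(1)[0][0],  # majority label
        "n_annotators": len(ratings),
    }

# Example: three annotators rating frame 0 of one clip.
frame0 = [
    FrameRating("clip_001", 0, 1, 0.6, 0.3, "happy"),
    FrameRating("clip_001", 0, 2, 0.5, 0.4, "happy"),
    FrameRating("clip_001", 0, 3, 0.7, 0.2, "content"),
]
print(aggregate_frame(frame0))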
