Vision Sciences Society Annual Meeting Abstract | August 2023
Journal of Vision, Volume 23, Issue 9
Open Access
Modeling Emotional Cue Integration of Context-rich and Dynamic Stimuli reveals Bayesian as well as anti-Bayesian Properties
Author Affiliations
  • Jefferson Ortega
    University of California, Berkeley
  • Yuki Murai
    National Institute of Information and Communications Technology
  • David Whitney
    University of California, Berkeley
Journal of Vision August 2023, Vol. 23, 5221. https://doi.org/10.1167/jov.23.9.5221
Abstract

Humans often encounter multiple complex social cues when perceiving emotion, but how does the brain combine these cues into a single judgment? Observers do not rely on facial expressions alone; previous studies have found that contextual information is just as important (Chen and Whitney, PNAS, 2019). One proposal is that emotional cues are combined within a Bayesian framework, with each cue weighted by its reliability (Zaki, 2013; Ong et al., 2015). In our study, we investigated emotional cue integration using context-rich, dynamic stimuli that mimic the conditions under which humans infer emotion in everyday life. Observers continuously tracked and inferred the affect of a target character while watching a video. Observers were assigned to one of three conditions: (1) a context-only condition, in which the target character was masked out; (2) a character-only condition, in which the context was masked out; and (3) a ground-truth condition, in which no visual information was masked. We used a Bayesian model to integrate the group-averaged context-only and character-only ratings of valence and arousal, and evaluated how well the integrated ratings reproduced the empirical group-averaged ground-truth ratings. Using bootstrapping and 95% confidence intervals, we found that the Bayesian model substantially outperformed observers' character-only and context-only ratings in matching the empirical ground-truth ratings. Comparing the Bayesian model to a linear integration model revealed that the Bayesian model performed better on some videos while the linear model performed better on others. Analysis of individual videos revealed that roughly half of the ratings involved "anti-Bayesian" combinations of the context and character ratings: amplification, reduction, or sign flipping of ratings rather than merging. Our results suggest that observers combine multiple cues when doing so is advantageous, but this combination does not consistently follow a Bayesian framework; under some conditions, "anti-Bayesian" strategies for emotion perception may be required.
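The abstract does not specify the exact form of either model, but a standard reliability-weighted (precision-weighted) cue combination, of the kind discussed in the Bayesian cue-integration literature (Zaki, 2013; Ong et al., 2015), gives a minimal sketch of the two integration schemes compared above. Everything below is illustrative: the function names and the use of across-observer variance as the reliability proxy are assumptions, not the authors' implementation.

    import numpy as np

    def bayesian_integration(context, character, context_var, character_var):
        # Precision-weighted combination of the two group-averaged cue
        # streams (e.g., frame-by-frame valence ratings). Each cue is
        # weighted by its reliability, taken here as the inverse of the
        # across-observer variance at each time point (an assumed proxy).
        w_ctx = 1.0 / context_var
        w_chr = 1.0 / character_var
        return (w_ctx * context + w_chr * character) / (w_ctx + w_chr)

    def linear_integration(context, character, ground_truth):
        # Baseline linear model: least-squares weights on the two cue
        # streams plus an intercept, fit to the ground-truth ratings.
        X = np.column_stack([context, character, np.ones_like(context)])
        coef, *_ = np.linalg.lstsq(X, ground_truth, rcond=None)
        return X @ coef

One consequence of this sketch is worth noting: a precision-weighted average always lies between the two cue ratings, so the "anti-Bayesian" patterns described above (amplification beyond both cues, or a sign flip relative to both) are exactly the cases such a model cannot produce, whatever weights it assigns.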
