Abstract
Emotion recognition is a critical function of vision. It seems intuitive that to perceive a person's emotions, we need only look directly at that person's face or body. Sometimes, however, the context in which a person experiences an emotion may be key to understanding it. Can a person's emotion be dynamically inferred from contextual visual information alone, without any face- or body-related information? We tested observers' ability to infer and track people's emotions based solely on visual situational context, with no information about facial expression. Thirty-one observers watched silent movie clips of two characters interacting. The face and body of one randomly chosen character (the target) were occluded; the other character in the clip (the partner) remained visible. Observers continuously tracked the inferred emotion of the masked (invisible) target, reporting it in real time by moving a mouse pointer in a two-dimensional valence-arousal space. Baseline ratings of the target and partner characters were established by asking a separate group of 69 observers to track the target's emotion when all characters were visible (unoccluded). In the baseline, observers agreed strongly when tracking the visible target's emotion (mean Cronbach's alpha = 0.95). More importantly, observers accurately inferred and tracked the emotion of the invisible target character when compared to the baseline (mean Spearman's rho = 0.58, p < .01; mean absolute deviation = 8.5%). Cross-correlation analyses showed that inferring emotion from context alone was as fast as tracking emotion using face and body information (no significant non-zero time lag). More strikingly, observers accurately inferred the intensity of the target's emotion (arousal) by using contextual ensemble information, not simply by tracking the partner's arousal (partial correlation = 0.42, p < .01). Our results demonstrate that observers can accurately and rapidly infer and track emotion in real time based entirely on contextual information.
Meeting abstract presented at VSS 2017
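The two analyses named above (cross-correlation to test for a time lag, and partial correlation controlling for the partner's arousal) can be illustrated with a minimal sketch. This is not the authors' code: the sampling rate, the synthetic rating traces (inferred, baseline, partner), and the residual-based partial-correlation procedure are all assumptions made for illustration only.

```python
# Sketch of the lag and partial-correlation analyses on continuous rating traces.
# All data below are synthetic placeholders; real traces would be the group-mean
# arousal ratings resampled onto a common time base.
import numpy as np
from scipy import stats, signal

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 600)                      # e.g. a 60 s clip sampled at 10 Hz (assumed)
baseline = np.sin(0.2 * t) + 0.1 * rng.standard_normal(t.size)   # visible-target arousal
partner  = np.cos(0.15 * t) + 0.1 * rng.standard_normal(t.size)  # visible-partner arousal
inferred = baseline + 0.1 * rng.standard_normal(t.size)          # context-only inferred arousal

# 1) Cross-correlation: does the context-only trace lag behind the baseline trace?
a = (inferred - inferred.mean()) / inferred.std()
b = (baseline - baseline.mean()) / baseline.std()
xcorr = signal.correlate(a, b, mode="full") / a.size
lags = signal.correlation_lags(a.size, b.size, mode="full")
best_lag = lags[np.argmax(xcorr)]                # a peak near 0 indicates no temporal lag
print(f"peak cross-correlation at lag {best_lag} samples")

# 2) Partial correlation between inferred and baseline target arousal, controlling
#    for the partner's arousal: regress the partner trace out of both series and
#    correlate the residuals (a first-order partial correlation).
def residualize(y, x):
    slope, intercept, *_ = stats.linregress(x, y)
    return y - (slope * x + intercept)

r_partial, p = stats.pearsonr(residualize(inferred, partner),
                              residualize(baseline, partner))
print(f"partial correlation = {r_partial:.2f}, p = {p:.3g}")
```

A nonzero partial correlation in this setup would indicate that the inferred arousal tracks the occluded target's own arousal beyond what can be explained by simply mirroring the visible partner, which is the logic behind the reported value of 0.42.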