Abstract
During social interaction, humans rely heavily on emotion recognition to guide behavior. The ability to recognize the emotions of others allows us to understand the individuals around us, and thus helps us to understand the world. Emotional intelligence (EI) has been conceptualized as the ability to accurately perceive emotions in ourselves and others, and to regulate these emotions in order to achieve certain adaptive outcomes or emotional states (Salovey and Mayer 1990). Although emotion recognition is widely studied, context-based emotion recognition currently plays little role in EI assessments, despite the fact that context is critically important for recognizing emotion. For example, Inferential Emotion Tracking (IET) shows that observers can accurately and rapidly infer the emotions of a blurred-out (invisible) character in a scene using contextual information alone (Chen and Whitney 2019; Chen and Whitney 2021). In the present study, we investigated the relationship between Inferential Emotion Tracking, various psychological measures, and standard emotion recognition tasks. In the experiment, participants continuously tracked and reported the valence and arousal of masked (invisible) actors in video clips, including Hollywood movies, documentaries, and home videos. We found a stronger correlation between IET task performance and Autism Quotient (AQ) scores (rho = -.35, p = .001, corrected for multiple comparisons) than between AQ scores and performance on the Film Facial Expression Task (rho = -.28, p = .015, corrected) or the Reading the Mind in the Eyes task (rho = -.13, p = .718, corrected). These findings suggest that emotion recognition based on contextual information alone could be a stronger predictor of autism spectrum disorder than current standard emotion recognition and theory of mind tasks. Partial correlations suggest that IET accounts for at least some unique variance that is not captured by the more common emotion recognition tests.