Abstract
An observer’s perception and interpretation of others’ facial behavior is prone to error and bias due to attentional bottlenecks, heuristics, and a lack of proper feedback to learn from. Here we investigate an objective approach to measuring nonverbal behavior to infer the motivation levels of webcam-recorded participants who completed an online, structured job interview. First, we developed and applied artificial intelligence and computer vision algorithms to automatically detect participants’ facial muscle activity and emotional expressions in the videos. The extracted facial features served as input to an unbiased, cross-validated model that predicted the motivation levels participants introspectively reported after the interview. The model’s motivation judgments outperformed human observers’ unreliable, invalid, and gender-biased judgments. When judging motivation, observers correctly attended to some of the relevant facial features but, unlike the model, failed to assign them the correct weights and signs. These findings demonstrate the necessity and usefulness of novel, bias-free, scientific approaches to observing and judging human behavior.