Abstract
Emotional facial expressions have largely been studied using static images. Motion, however, can make expressions that would otherwise be invisible readily apparent. In addition, the current study demonstrates that positive expressions (e.g. happiness, pride in achievement, sensual pleasure, satisfaction) previously thought to share the same facial signal (the smile) can be differentiated based on the time course of activation of their constituent action units. This suggests a new approach to studying emotional and social facial expressions: action unit trajectories. This study describes a new, photo-realistic synthetic face model in which action units can be independently activated with arbitrary magnitudes and controlled in real time using a joystick. In addition, the apparent gender, ethnicity, and age of the face can be altered. The current study employs this platform to investigate the perception of dynamic faces, quantifying the action unit trajectories that give rise to the perception of varying positive emotional states. Expressions are then represented as trajectories in action unit space, whose dimensions correspond to individual action units. Trajectories provided by multiple participants for a single emotional state (e.g. sensual pleasure) are combined using functional data analysis, yielding a mean trajectory. When played back to a new group of observers, this mean trajectory is reliably identified in a forced-choice task.
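The averaging step described above (smoothing each participant's action unit trajectory and combining them into a mean trajectory via functional data analysis) can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes trajectories are recorded as irregularly sampled AU-intensity time series, smooths each AU channel with a B-spline, resamples on a common normalized time grid, and takes the pointwise mean. All names (`NUM_AUS`, `GRID`, `mean_trajectory`) and the smoothing parameter are hypothetical.

```python
# Illustrative sketch of averaging AU trajectories with a simple
# functional-data-analysis step (B-spline smoothing + pointwise mean).
# Not the paper's actual pipeline; names and parameters are assumptions.
import numpy as np
from scipy.interpolate import splrep, splev

NUM_AUS = 5                          # assumed subset of action units
GRID = np.linspace(0.0, 1.0, 100)    # common normalized time grid

def smooth_resample(times, values, s=0.01):
    """Fit a smoothing B-spline to one AU channel and evaluate it on GRID."""
    tck = splrep(times, values, s=s)
    return splev(GRID, tck)

def mean_trajectory(trajectories):
    """trajectories: list of (times, au_matrix) pairs, au_matrix shape (T, NUM_AUS).
    Returns the pointwise mean trajectory, shape (len(GRID), NUM_AUS)."""
    resampled = []
    for times, au_matrix in trajectories:
        channels = [smooth_resample(times, au_matrix[:, k]) for k in range(NUM_AUS)]
        resampled.append(np.stack(channels, axis=1))
    return np.mean(resampled, axis=0)

# Synthetic example: three participants with irregularly sampled recordings.
rng = np.random.default_rng(0)
trajectories = []
for _ in range(3):
    t = np.sort(rng.uniform(0.0, 1.0, 60))
    # A smooth rise-and-fall in each AU plus noise stands in for a recording.
    au = (np.sin(np.pi * t)[:, None] * rng.uniform(0.5, 1.0, NUM_AUS)
          + 0.05 * rng.normal(size=(60, NUM_AUS)))
    trajectories.append((t, au))

mean_traj = mean_trajectory(trajectories)
print(mean_traj.shape)  # (100, 5)
```

In this sketch, smoothing before averaging keeps joystick jitter out of the mean, and the common grid makes pointwise averaging across participants well defined; the actual study may use a different basis or registration step.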