Abstract
Facial expressions are a rich source of information from which observers infer the emotional states of others. Despite considerable knowledge about the brain regions that represent facial expressions, we do not yet know how representations of these facial movements are transformed into judgments of emotion in the brain. We addressed this question in 5 participants who judged the emotion of individual facial movements called Action Units (AUs) while we concurrently measured brain activity using magnetoencephalography (MEG). Stimuli were animations of 5 facial movements (Outer Brow Raiser (AU2), Nose Wrinkler (AU9), Lip Corner Puller (AU12), Chin Raiser (AU17), and Lip Stretcher (AU20)), each at 4 levels of intensity (25% to 100%). We instructed participants to rate each animation according to either its perceived valence ('negative', 'neutral' or 'positive') or arousal ('low', 'neutral' or 'high'). Tasks alternated between blocks of 40 trials (5 AUs × 4 intensity levels × 2 repetitions), and participants completed 4,000 to 6,000 trials in total. For each participant, we averaged the ratings of each AU and intensity level within each task. We show that arousal ratings increased with AU intensity, whereas valence ratings were consistent for each AU (e.g., Nose Wrinkler (AU9) rated as negative and Lip Corner Puller (AU12) as positive). We then computed Mutual Information (MI; significance assessed by permutation test) between the MEG recordings and the task ratings. The results revealed the spatial and temporal distribution of brain activity related to valence and arousal. We found that valence and arousal evoked similar representational peaks at ~270 ms and ~750 ms in the temporal lobes, whereas a distinct parietal peak at ~387 ms in the valence task differentiated the two inferences. Our results show where (in the temporal and parietal lobes) and when (at ~270 ms, ~387 ms and ~750 ms post-stimulus) the brain processes dynamic AUs as meaningful affective signals.
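
To illustrate the MI analysis step described above, the sketch below shows one way such a computation could be set up in Python. It is a minimal sketch, not the authors' actual pipeline: the function names, the choice to discretise MEG amplitudes into equiprobable bins, the number of permutations, and the synthetic data are all our own assumptions.

```python
# Minimal sketch (not the authors' pipeline): mutual information between
# single-sensor MEG amplitudes and trial-wise ratings, with a permutation test.
import numpy as np

def mutual_information(x_bins, y_bins):
    """Discrete MI (in bits) between two integer-coded variables."""
    joint = np.histogram2d(x_bins, y_bins,
                           bins=(x_bins.max() + 1, y_bins.max() + 1))[0]
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

def mi_permutation_test(meg, ratings, n_bins=3, n_perms=1000, seed=0):
    """MI between MEG amplitude (one sensor/time point across trials) and
    ratings, with a null distribution built by shuffling ratings over trials."""
    rng = np.random.default_rng(seed)
    # Discretise continuous MEG amplitudes into roughly equiprobable bins.
    edges = np.quantile(meg, np.linspace(0, 1, n_bins + 1)[1:-1])
    meg_bins = np.digitize(meg, edges)
    observed = mutual_information(meg_bins, ratings)
    null = np.array([mutual_information(meg_bins, rng.permutation(ratings))
                     for _ in range(n_perms)])
    p_value = (np.sum(null >= observed) + 1) / (n_perms + 1)
    return observed, p_value

# Example with synthetic data: 200 trials, ratings coded 0/1/2
# (e.g., 'negative'/'neutral'/'positive' or 'low'/'neutral'/'high').
rng = np.random.default_rng(1)
ratings = rng.integers(0, 3, size=200)
meg = ratings * 0.5 + rng.normal(size=200)        # amplitude weakly tied to rating
mi, p = mi_permutation_test(meg, ratings)
print(f"MI = {mi:.3f} bits, p = {p:.3f}")
```

In this kind of analysis the test would typically be repeated at every sensor (or source) and time point, which is what yields the spatial and temporal MI maps referred to in the abstract.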