Abstract
Faces convey stable identity via static 3D shape and complexion features, and transient emotions via dynamic movement features (i.e., Action Units, AUs). With a transparent generative Virtual Human (VH), we studied how brain pathways dynamically compute (i.e., represent, communicate, and integrate) AUs and 3D identity features for emotion decisions. In a behavioral task, the generative VH presented randomly parametrized AUs applied to 2,400 random 3D identities, producing a different animation on each trial that each participant (N=10) categorized as one of six emotions (happy, surprise, fear, disgust, anger, sad). Using each participant’s responses, we modelled the AUs causing their perception of each emotion. In subsequent neuroimaging, each participant categorized their own emotion models applied to 8 new identities while we randomly varied each AU’s amplitude and concurrently measured MEG. Using information-theoretic analyses, we traced where and when MEG source amplitudes represent each AU and how sources then integrate AUs for decisions. We compared these representations with those of the covarying but decision-irrelevant 3D face identities. Our results replicate across all participants (p<0.05, FWER-corrected): (1) the Social Pathway (Occipital Cortex, OC, to Superior Temporal Gyrus, STG) directly represents AUs with time lags, with no Ventral Pathway involvement; (2) AUs represented early are maintained until STG integrates them with later AUs. In contrast, emotion-irrelevant 3D identities are reduced early, within OC. In summary, we show that the third, “Social” brain pathway (not the dorsal pathway) dynamically represents facial Action Units with time lags that are resorbed by the time they reach STG, where the AUs are integrated for emotion decision behavior, whereas the decision-irrelevant 3D face identity is not represented beyond OC.
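As a minimal illustration of the tracing step described above, the sketch below estimates, per time point, the mutual information between one AU’s randomly varied amplitude and a reconstructed MEG source’s amplitude across trials. It is a toy example under stated assumptions: the data are simulated, and the MI estimator (scikit-learn’s mutual_info_regression) is an illustrative choice, not the paper’s actual analysis pipeline, which the abstract does not specify.

```python
# Hypothetical sketch: when does one MEG source "represent" one AU?
# Assumes per-trial AU amplitudes (n_trials,) and a source time course
# (n_trials, n_times). All data and the MI estimator are assumptions.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n_trials, n_times = 2400, 200

au_amplitude = rng.uniform(0, 1, n_trials)          # randomly varied AU amplitude per trial
source = rng.standard_normal((n_trials, n_times))   # placeholder MEG source amplitudes
source[:, 80:120] += 0.5 * au_amplitude[:, None]    # inject a toy AU dependency in a time window

# Mutual information between AU amplitude and source amplitude at each time point
mi = np.array([
    mutual_info_regression(source[:, [t]], au_amplitude, random_state=0)[0]
    for t in range(n_times)
])
print("Peak MI at time index:", int(mi.argmax()))
```

In the study itself, such per-time-point information estimates would additionally be computed per source and per AU, with significance assessed under family-wise error correction across participants, as stated in the abstract.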