Abstract
A picture of a face provides information about both someone's identity and their facial expression. According to the traditional view, identity and expression recognition are performed by separate mechanisms. However, recent studies suggest that recognition of identity and expressions may not be as separate as originally thought: face identity can be decoded from response patterns in pSTS (Anzellotti et al., 2017; Dobs et al., 2018), a region previously implicated in expression recognition. Joint processing of expressions and identity might be driven by computational efficiency. In support of this hypothesis, O'Nell et al. (2019) found that artificial neural networks (ANNs) trained to recognize expressions spontaneously learn features that support identity recognition. Here, we investigate transfer learning in the reverse direction, testing whether ANNs trained to distinguish between identities learn features that support recognition of facial expressions. We trained a Siamese architecture without handcrafted features on a face verification task, on which the network achieved 77.22% accuracy. To test whether the network spontaneously learns features that support expression recognition, we froze its weights and used the features in its hidden layers as inputs to a linear layer trained to label expressions. We will discuss the identity-to-expression generalization performance of simpler networks trained on a single dataset (which achieve low accuracy on both identity and expression tasks when applied to the Karolinska Directed Emotional Faces dataset), and the performance of more complex networks trained on multiple datasets.
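To make the freeze-and-probe procedure concrete, here is a minimal PyTorch sketch, not the authors' implementation: the embedding architecture, the contrastive verification loss, the embedding dimension, and the training hyperparameters are illustrative assumptions; only the overall structure (train one branch of a Siamese network on verification, freeze its weights, then train a linear readout on its hidden features to label expressions) comes from the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for one branch of the Siamese identity network
# (the actual architecture is not specified in the abstract).
class EmbeddingNet(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, embed_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def verification_loss(emb1, emb2, same_identity, margin=1.0):
    # Contrastive loss, one common Siamese verification objective
    # (assumed here; the abstract does not name the loss function).
    d = F.pairwise_distance(emb1, emb2)
    return torch.mean(same_identity * d.pow(2)
                      + (1 - same_identity) * torch.clamp(margin - d, min=0).pow(2))

# --- After verification training: freeze the network and fit a linear probe ---
embedding_net = EmbeddingNet()
for p in embedding_net.parameters():
    p.requires_grad = False          # frozen: only the probe is trained
embedding_net.eval()

num_expressions = 7                  # e.g., the 7 KDEF expression categories
probe = nn.Linear(128, num_expressions)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_probe_step(images, labels):
    with torch.no_grad():            # features come from the frozen network
        feats = embedding_net(images)
    logits = probe(feats)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the linear readout is trained, the probe's expression-labeling accuracy directly measures how much expression-relevant information the identity-trained features already carry.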