Vision Sciences Society Annual Meeting Abstract | October 2020
Investigating the emergence of expression representations in a neural network trained to discriminate identities
Author Affiliations
  • Emily Schwartz
    Boston College
  • Kathryn O'Nell
    University of Oxford
  • Stefano Anzellotti
    Boston College
Journal of Vision October 2020, Vol.20, 1590. doi:https://doi.org/10.1167/jov.20.11.1590
Emily Schwartz, Kathryn O'Nell, Stefano Anzellotti; Investigating the emergence of expression representations in a neural network trained to discriminate identities. Journal of Vision 2020;20(11):1590. https://doi.org/10.1167/jov.20.11.1590.



© ARVO (1962-2015); The Authors (2016-present)

Abstract

A picture of a face provides information about both someone’s identity and their facial expression. According to the traditional view, identity and expression recognition are performed by separate mechanisms. However, recent studies suggest that recognition of identity and expression may not be as disjointed as originally thought: face identity can be decoded from response patterns in pSTS (Anzellotti et al., 2017; Dobs et al., 2018), a region previously implicated in expression recognition. Joint processing of expressions and identity might be driven by computational efficiency. In support of this hypothesis, O’Nell et al. (2019) found that artificial neural networks (ANNs) trained to recognize expressions spontaneously learn features that support identity recognition. Here, we investigate transfer learning in the reverse direction, testing whether ANNs trained to distinguish between identities learn features that support recognition of facial expressions. We trained a Siamese architecture without handcrafted features on a face verification task. The network achieved 77.22% accuracy. To test whether the network spontaneously learns features that support expression recognition, we froze its weights and used the features in its hidden layers as inputs to a linear layer trained to label expressions. We will discuss the generalization performance from identity to expressions of simpler networks trained on a single dataset (which achieve low accuracy on both identity and expression tasks when applied to the Karolinska Directed Emotional Faces dataset), and the performance of more complex networks trained on multiple datasets.
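The pipeline described above, a Siamese identity-verification network whose frozen hidden features feed a linear expression readout, can be sketched roughly as follows. This is a minimal illustration assuming PyTorch; the layer sizes and the names `Encoder`, `SiameseVerifier`, and `make_expression_probe` are hypothetical and do not reproduce the authors' actual architecture or training setup.

```python
import torch
import torch.nn as nn

# Shared convolutional encoder; the architecture details here are illustrative only.
class Encoder(nn.Module):
    def __init__(self, feature_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, feature_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

# Siamese verification model: the same encoder embeds both images, and a
# binary head decides whether they depict the same identity.
class SiameseVerifier(nn.Module):
    def __init__(self, feature_dim=128):
        super().__init__()
        self.encoder = Encoder(feature_dim)
        self.head = nn.Linear(feature_dim, 1)

    def forward(self, img_a, img_b):
        z_a, z_b = self.encoder(img_a), self.encoder(img_b)
        return self.head(torch.abs(z_a - z_b))  # same/different-identity logit

# After identity training, freeze the encoder and fit a linear probe that maps
# its hidden features to expression labels (e.g., the 7 KDEF expressions).
def make_expression_probe(trained_encoder, feature_dim=128, n_expressions=7):
    for p in trained_encoder.parameters():
        p.requires_grad = False
    return nn.Linear(feature_dim, n_expressions)

if __name__ == "__main__":
    model = SiameseVerifier()
    a, b = torch.randn(4, 3, 96, 96), torch.randn(4, 3, 96, 96)
    identity_logits = model(a, b)                 # verification output
    probe = make_expression_probe(model.encoder)
    expression_logits = probe(model.encoder(a))   # expression readout from frozen features
    print(identity_logits.shape, expression_logits.shape)
```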
