Vision Sciences Society Annual Meeting Abstract | September 2021
Journal of Vision, Volume 21, Issue 9 | Open Access
Deep networks trained to recognize facial expressions predict ventral face-selective ECoG responses as well as networks trained to recognize identity
Author Affiliations
  • Emily Schwartz
    Boston College
  • Kathryn O'Nell
    University of Oxford
  • Arish Alreja
    Carnegie Mellon University
  • Avniel Ghuman
    University of Pittsburgh
  • Stefano Anzellotti
    Boston College
Journal of Vision September 2021, Vol.21, 2221. doi:https://doi.org/10.1167/jov.21.9.2221
Citation:

      Emily Schwartz, Kathryn O'Nell, Arish Alreja, Avniel Ghuman, Stefano Anzellotti; Deep networks trained to recognize facial expressions predict ventral face-selective ECoG responses as well as networks trained to recognize identity. Journal of Vision 2021;21(9):2221. https://doi.org/10.1167/jov.21.9.2221.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Faces are a rich source of information about people’s identity and their facial expressions. Recognition of identity and expression has traditionally been thought to be performed by separate neural mechanisms. However, recent neuroimaging studies suggest that recognition of identity and expression may not be as disjoint as originally thought: valence can be decoded from patterns of response in the fusiform face area (FFA; Skerry & Saxe, 2014), a brain region previously implicated in identity recognition. If ventral temporal face-selective regions are specialized for the recognition of identity, we would expect deep neural networks (DNNs) trained to recognize identity to provide a better model of neural responses in these regions than networks trained to recognize facial expressions. In this study, we used electrocorticography (ECoG) to test this prediction, comparing the similarity between neural representations and representations from deep networks trained to recognize either identity or expressions. Patients were shown face images from the Karolinska Directed Emotional Faces database (Lundqvist et al., 1998) while intracranial recordings from ventral temporal brain regions were collected. Using temporal representational similarity analysis for each electrode over sliding temporal windows, we compared representational dissimilarity matrices (RDMs) obtained from the ECoG data to RDMs obtained from one model trained on expression recognition and one model trained on identity recognition. Similarity between RDMs from different layers of the DNNs and RDMs obtained from the ECoG data at different time windows was also evaluated. RDMs from networks trained to recognize expressions and those trained to recognize identity explained responses in ventral temporal regions to face stimuli equally well. These results provide further support for the presence of both identity and expression information within common brain regions, and suggest that ventral temporal regions may not be exclusively optimized for identity recognition.
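As a rough illustration of the temporal representational similarity analysis described above, the sketch below computes sliding-window RDMs from one electrode's responses and correlates them with an RDM derived from one DNN layer. All array shapes, window parameters, distance metrics, and names are assumptions for illustration only; the abstract does not specify these details or the network architectures used.

```python
# Minimal sketch of temporal RSA (hypothetical shapes and parameters).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Condition x feature matrix -> condensed representational dissimilarity matrix."""
    return pdist(responses, metric="correlation")  # 1 - Pearson r between condition pairs

def sliding_window_rdms(ecog, win=50, step=10):
    """ecog: (n_conditions, n_timepoints) responses for a single electrode.
    Returns one RDM per temporal window."""
    n_time = ecog.shape[1]
    return [rdm(ecog[:, t:t + win]) for t in range(0, n_time - win + 1, step)]

def compare_to_model(ecog_rdms, model_rdm):
    """Spearman correlation between each time-window RDM and a DNN-layer RDM."""
    return np.array([spearmanr(w, model_rdm).correlation for w in ecog_rdms])

# Hypothetical usage: 70 face images, 1000 time points for one electrode,
# and unit activations from one layer of an identity-trained network.
ecog_data = np.random.randn(70, 1000)          # stand-in for real ECoG recordings
layer_activations = np.random.randn(70, 4096)  # stand-in for DNN features

identity_rdm = rdm(layer_activations)
time_course = compare_to_model(sliding_window_rdms(ecog_data), identity_rdm)
print(time_course.shape)  # one correlation value per time window
```

In the study, an analogous time course would be computed for each electrode against RDMs from both the expression-trained and the identity-trained network, allowing the two models' fits to be compared across time.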
