Journal of Vision
August 2023
Volume 23, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   August 2023
Comparing iEEG responses and deep networks with Bayesian statistics challenges the view that lateral face-selective regions are specialized for facial expression recognition over identity recognition
Author Affiliations & Notes
  • Emily Schwartz
    Boston College
  • Arish Alreja
    Carnegie Mellon University
    University of Pittsburgh
    University of Pittsburgh Medical Center
  • R. Mark Richardson
    Massachusetts General Hospital
    Harvard Medical School
  • Avniel Ghuman
    University of Pittsburgh
    University of Pittsburgh Medical Center
  • Stefano Anzellotti
    Boston College
  • Footnotes
    Acknowledgements  This work was supported by National Science Foundation CAREER Grant 1943862 to S.A., National Institutes of Health grants R01MH107797 and R21EY030297 to A.G., and National Science Foundation grant 1734907 to A.G.
Journal of Vision August 2023, Vol.23, 5641. doi:https://doi.org/10.1167/jov.23.9.5641
Abstract

According to the classical view, face identity recognition and facial expression recognition are performed by separate neural mechanisms. However, some neuroimaging studies suggest that identity and expression recognition may not be disjoint processes: response patterns in the ventral and lateral temporal pathways decode valence (Skerry & Saxe, 2014) and identity (Anzellotti & Caramazza, 2017), respectively. If the ventral pathway is specialized for identity, as the classical view suggests, deep neural networks (DNNs) trained to recognize identity should provide a better model of neural responses in these regions than networks trained to recognize expressions. Conversely, if the lateral pathway is specialized for expression, expression-trained DNNs should provide a better model of lateral region responses. Importantly, the classical view therefore predicts an interaction between DNN type and brain region. We used intracranial electroencephalography (iEEG) to compare the similarity between neural representations and the representations of DNNs trained to recognize identity or expressions. Patients viewed face images while data were recorded from face-selective ventral temporal and lateral regions. For each electrode, over sliding temporal windows, we compared neural representational dissimilarity matrices (RDMs) to RDMs obtained from identity-trained and expression-trained models, analyzing the similarity between DNN-layer RDMs and iEEG RDMs at multiple timepoints. We quantified how similar each electrode's RDMs were to the identity and expression model RDMs using semi-partial Kendall's tau-B. The Schwarz criterion was then used to assess whether these correlations were better explained by modeling ventral and lateral electrodes separately or by combining the two sets of electrodes. Critically, the data were better explained by a single slope combining ventral and lateral electrodes.
The relative contribution of the models did not differ between ventral and lateral electrodes, and identity models accounted for both ventral and lateral responses better than expression models did. These results deviate from the classical view, under which lateral electrodes should be better explained by expression models.
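The core analysis steps described above (building RDMs, comparing a neural RDM to two competing model RDMs with a semi-partial Kendall's tau-B, and comparing combined versus separate fits with the Schwarz criterion) can be sketched as follows. This is a minimal illustration, not the authors' code: the function names are invented, and the linear-residual approximation used for the semi-partial tau is an assumption about one common way such a statistic is computed.

```python
import numpy as np
from scipy.stats import kendalltau


def rdm(patterns):
    """Representational dissimilarity matrix (1 - Pearson r) for a
    (conditions x features) response matrix."""
    return 1.0 - np.corrcoef(patterns)


def upper(m):
    """Vectorize the upper triangle of an RDM, excluding the diagonal."""
    return m[np.triu_indices_from(m, k=1)]


def semipartial_tau_b(neural, model_a, model_b):
    """Kendall's tau-B between a neural RDM vector and model A's RDM
    vector after model B's contribution has been regressed out of model A.
    NOTE: residualizing via least squares is one common approximation of a
    semi-partial rank correlation; the abstract does not specify the exact
    computation used."""
    X = np.column_stack([np.ones_like(model_b), model_b])
    beta, *_ = np.linalg.lstsq(X, model_a, rcond=None)
    resid_a = model_a - X @ beta
    return kendalltau(neural, resid_a).correlation


def bic(y, y_hat, n_params):
    """Schwarz criterion (BIC) for a least-squares fit; lower is better.
    Comparing BIC for one regression over all electrodes versus separate
    ventral/lateral regressions implements the model comparison described
    in the abstract."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)
```

In this framing, "a single slope better explains the data" corresponds to the pooled regression achieving a lower BIC than the sum of the two region-specific regressions, despite using fewer parameters.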
