Vision Sciences Society Annual Meeting Abstract  |  Journal of Vision, October 2020, Volume 20, Issue 11  |  Open Access
Distinct identity information encoded in FFA and OFA
Author Affiliations & Notes
  • Lucia Garrido
    City, University of London
  • Maria Tsantani
    Birkbeck, University of London
  • Katherine Storrs
    Justus Liebig University
  • Carolyn McGettigan
    University College London
  • Nikolaus Kriegeskorte
    Columbia University
  • Footnotes
    Acknowledgements  This work was supported by a Leverhulme Trust Research Grant (RPG-2014-392).
Journal of Vision October 2020, Vol.20, 536. doi:https://doi.org/10.1167/jov.20.11.536
      Lucia Garrido, Maria Tsantani, Katherine Storrs, Carolyn McGettigan, Nikolaus Kriegeskorte; Distinct identity information encoded in FFA and OFA. Journal of Vision 2020;20(11):536. https://doi.org/10.1167/jov.20.11.536.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

The human brain contains several face-selective regions that consistently respond more to faces than to other visual stimuli (Kanwisher et al., 1997), and activity in some of these regions can distinguish between different face identities. Studies using fMRI multivariate pattern analysis have shown that face identities can be distinguished based on their elicited response patterns in the fusiform face area (FFA), occipital face area (OFA), posterior superior temporal sulcus (pSTS), and anterior inferior temporal lobe (e.g. Nestor et al., 2011; Verosky et al., 2013; Anzelotti et al., 2014; Axelrod & Yovel, 2015; Tsantani et al., 2019). But do all these regions distinguish between identities in similar ways? We investigated what types of identity-distinguishing information are encoded in three face-selective regions: FFA, OFA, and pSTS. In an event-related fMRI study, 30 participants viewed videos of faces of famous individuals. We extracted brain patterns elicited by each face in each region and computed representational distances between different identities. Using representational similarity analysis (RSA; Kriegeskorte et al., 2008), we investigated which properties of the face identities best explained representational distances in each brain region. We built diverse candidate models of the differences between identities, ranging from low-level stimulus properties (pixel, GIST, and Gabor-jet dissimilarities), through higher-level image-computable descriptions (the OpenFace deep neural network), to complex human-rated properties (perceived similarity, social traits, and gender). We found marked differences in the information represented by different face-selective regions. Dissimilarities between face identities in FFA were well explained by differences in perceived similarity, social traits, and gender, and by the OpenFace network, trained to cluster faces by identity. In contrast, representational distances in OFA were mainly driven by differences in low-level image-based properties (pixel-wise and Gabor-jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, they encode distinct information about faces.
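The RSA pipeline described above can be sketched roughly as follows. This is a toy illustration with simulated data, not the authors' code: the actual study used fMRI response patterns from FFA, OFA, and pSTS and candidate model dissimilarities (pixel, GIST, Gabor-jet, OpenFace, human ratings); all variable names here are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix (condensed form):
    correlation distance between rows of an
    (n_identities x n_features) pattern matrix."""
    return pdist(patterns, metric="correlation")

def model_fit(brain_patterns, model_patterns):
    """Spearman correlation between the brain RDM and a candidate
    model RDM -- a standard RSA comparison statistic."""
    rho, _ = spearmanr(rdm(brain_patterns), rdm(model_patterns))
    return rho

# Simulated stand-ins: 12 face identities, a region's voxel patterns,
# and one candidate model's feature descriptions of the same faces.
rng = np.random.default_rng(0)
identities = rng.normal(size=(12, 50))           # latent identity features
brain = identities @ rng.normal(size=(50, 200))  # simulated voxel responses
brain += 0.5 * rng.normal(size=brain.shape)      # measurement noise
model = identities[:, :20]                       # candidate model features

print(round(model_fit(brain, model), 2))
```

In the study, this comparison would be repeated for each candidate model against each region's RDM, allowing models to be ranked by how well they explain the representational distances in FFA versus OFA.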
