Vision Sciences Society Annual Meeting Abstract  |  August 2023
Volume 23, Issue 9
Open Access
Revealing interpretable object dimensions from a high-throughput model of the fusiform face area
Author Affiliations & Notes
  • Oliver Contier
    Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences
    Max Planck School of Cognition, Max Planck Institute for Human Cognitive and Brain Sciences
  • Shu Fujimori
    Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences
    Department of Mechanical and Intelligent Systems Engineering, Graduate School of Informatics and Engineering, The University of Electro-Communications
  • Katja Seeliger
    Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences
  • N Apurva Ratan Murty
    McGovern Institute for Brain Research, Massachusetts Institute of Technology
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
  • Martin Hebart
    Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences
    Department of Medicine, Justus Liebig University
  • Footnotes
    Acknowledgements  This work was supported by a Max Planck Research Group Grant and the ERC Starting Grant COREDIM awarded to MNH, the Japan Public-Private Partnership Student Study Abroad Program awarded to SF, and a doctoral fellowship awarded to OC by the Max Planck School of Cognition.
Journal of Vision August 2023, Vol. 23, 5356. doi:https://doi.org/10.1167/jov.23.9.5356
Abstract

A central aim of visual neuroscience is to uncover the function of individual visually responsive brain regions. A hallmark of occipitotemporal cortex is its functional organization into category-selective brain regions, and among these regions, it is well established that the fusiform face area (FFA) responds highly selectively to the visual presentation of faces. At the same time, previous research has shown that FFA activity overlaps with several feature maps that are not face specific, such as animacy, size, or curvature (Long et al., 2017), and FFA has been shown to carry above-chance information about non-face objects (Duchaine & Yovel, 2015). Thus, it remains an open question which other object dimensions may be represented in patterns of FFA responses. Here, we explored this question with a recent high-throughput neural network model of FFA activity that has been shown to yield excellent predictive accuracy (Ratan Murty et al., 2021). We first predicted responses of the model’s FFA voxels to >26,000 naturalistic object images from the THINGS database (Hebart et al., 2019). Next, we used a sparse positive similarity embedding technique to identify interpretable dimensions underlying these response patterns. As expected, the embedding yielded a number of dimensions related to human faces and body parts encoded in FFA activity. Additionally, it revealed latent dimensions in FFA activity encoding animal faces as well as non-face properties such as mid-level shape, texture, and scene-related features. These results capture a broad space of object features embedded in synthetic FFA activity while still confirming its clear selectivity for face images. Our approach may open the door to exploring the rich space of object features encoded in the complex activity patterns of visual brain regions.
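For readers who want a concrete picture of the two-step analysis the abstract describes, the following is a minimal sketch in Python. It is not the authors' code: the FFA encoding model (Ratan Murty et al., 2021) and the THINGS images are replaced by a small random stand-in response matrix, and the sparse positive similarity embedding is approximated here by projected gradient descent on a symmetric non-negative factorization with an L1 penalty, which is one simple way to realize the idea rather than the exact published procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the real pipeline: in the study, a high-throughput
# encoding model predicts FFA voxel responses to >26,000 THINGS images.
# Here a small random response matrix is used purely for illustration.
n_images, n_voxels, n_dims = 500, 100, 20
responses = rng.random((n_images, n_voxels))   # (images x voxels)

# Similarity of predicted FFA response patterns across images.
S = np.corrcoef(responses)                     # (images x images)

# Sparse positive embedding (schematic): find W >= 0 with W @ W.T ~ S,
# where an L1 penalty encourages few, interpretable dimensions.
W = rng.random((n_images, n_dims)) * 0.1
lr, lam = 1e-3, 1e-3
for _ in range(2000):
    R = W @ W.T - S                            # reconstruction residual
    grad = 4 * R @ W + lam                     # grad of ||R||_F^2 + lam * sum(W)
    W = np.maximum(W - lr * grad, 0.0)         # gradient step, project to >= 0

# Each column of W is a candidate dimension; the images weighted most
# strongly on a column suggest its interpretation (faces, textures, ...).
top10 = np.argsort(W[:, 0])[::-1][:10]         # top-10 images on dimension 0
```

Inspecting the highest-weighted images per dimension is what, in the study, revealed face, body-part, animal-face, shape, and texture dimensions; the specific loss, penalty, and optimizer above are assumptions made for this sketch.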
