Vision Sciences Society Annual Meeting Abstract | August 2023
Volume 23, Issue 9
Open Access
fROI-level computational models enable broad-scale experimental testing and expose key divergences between models and brains
Author Affiliations & Notes
  • Elizabeth Mieczkowski
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
  • Alex Abate
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
  • Willian De Faria
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
  • Kirsten Lydic
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
  • James DiCarlo
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
    Center for Brains, Minds and Machines, Massachusetts Institute of Technology
    McGovern Institute for Brain Research, Massachusetts Institute of Technology
  • Nancy Kanwisher
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
    Center for Brains, Minds and Machines, Massachusetts Institute of Technology
    McGovern Institute for Brain Research, Massachusetts Institute of Technology
  • N. Apurva Ratan Murty
    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
    Center for Brains, Minds and Machines, Massachusetts Institute of Technology
    McGovern Institute for Brain Research, Massachusetts Institute of Technology
  • Footnotes
    Acknowledgements  NIH Pioneer Award DP1HD091957 to NK; NIH K99/R00 Pathway to Independence Award K99EY032603 to NARM
Journal of Vision August 2023, Vol.23, 5788. doi:https://doi.org/10.1167/jov.23.9.5788
Abstract

Deep convolutional neural network (DNN)-based models have emerged as our leading hypotheses of human vision. Here we describe, and expand upon, our latest effort to use DNN models of brain regions to explain key results from prior cognitive neuroscience and psychology experiments. Many stimuli in these experiments were heavily manipulated (e.g., scrambled body parts, scrambled face parts, re-arranged spatial positions), often falling outside the domain of natural stimuli; these results can therefore be treated as tests of model generalization beyond naturalistic images. We first performed these tests on the fusiform face area (FFA), the parahippocampal place area (PPA), and the extrastriate body area (EBA). Our previous results (presented at VSS 2022) showed that our fROI-level models recapitulate several key results from prior studies, but also that the models did not perform as well on non-naturalistic stimuli. Here we extend our model evaluations in two ways. First, we replicated the finding from the original EBA paper (Downing et al., 2001) that the EBA responds as strongly to line drawings of bodies and to symbolic stick figures as to natural images of bodies (but not to control conditions such as faces and objects). Second, we find that none of the computational models explains this pattern of responses, though models trained with language-based supervision (such as CLIP) fare better than other models. Together, our results on symbolic body images expose the limits of current computational models. This progress was made possible only by fROI-level modeling procedures, and it opens up new ways to understand the power and limitations of current models and to test novel hypotheses entirely in silico.
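For readers who want a concrete picture of the fROI-level modeling procedure, the sketch below shows one standard way such encoding models are built and tested: a cross-validated ridge regression maps DNN image features to a measured fROI response, and generalization is then assessed on held-out stimuli. This is a minimal illustration under assumed data shapes, not the authors' code; the feature and response arrays are random stand-ins for real DNN activations and fMRI betas.

import numpy as np
from sklearn.linear_model import RidgeCV
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Assumed inputs: precomputed DNN features (n_images x n_units) and one measured
# fROI response per image (e.g., mean EBA beta). Random data stands in here.
features = rng.normal(size=(200, 512))
froi_response = rng.normal(size=200)

# Fit a cross-validated ridge regression from model features to fROI responses,
# as is standard for voxel- and fROI-level encoding models.
train, test = np.arange(150), np.arange(150, 200)
model = RidgeCV(alphas=np.logspace(-2, 4, 13))
model.fit(features[train], froi_response[train])

# Score on held-out images; for the generalization tests described above, `test`
# would index non-naturalistic stimuli (line drawings, stick figures) instead.
r, _ = pearsonr(model.predict(features[test]), froi_response[test])
print(f"held-out prediction r = {r:.2f}")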
