Vision Sciences Society Annual Meeting Abstract | August 2023
Journal of Vision, Volume 23, Issue 9 | Open Access
Brain-optimized models reveal increase in few-shot concept learning accuracy across human visual cortex
Author Affiliations & Notes
  • Ghislain St-Yves
    University of Minnesota
  • Kendrick Kay
    University of Minnesota
  • Thomas Naselaris
    University of Minnesota
  • Footnotes
    Acknowledgements: This work was supported by NSF CRCNS grant IIS-1822929 (T.N.).
Journal of Vision August 2023, Vol.23, 5913. doi:https://doi.org/10.1167/jov.23.9.5913
Abstract

A concept manifold is the collection of all possible brain activity patterns evoked by stimuli exemplifying the concept. A recent theory (Sorscher et al., 2022) identified several geometric elements of concept manifolds that accurately predict their distinguishability under few-shot learning. Here, we use this theory to characterize the representational geometry of the same set of visual concepts in direct brain data (the Natural Scenes Dataset, a large fMRI dataset of responses to natural images), in accurate end-to-end encoding models predicting this neural activity, and in neural networks trained to classify a separate set of concepts. This direct comparison demonstrates that the brain organizes concepts in a manner very different from artificial networks trained as visual concept classifiers. Although few-shot accuracy for visual concepts tends to increase in both brains and artificial networks with ascent of the respective processing hierarchies (toward anterior areas in brains, deeper layers in networks), we find that the geometry of concept manifolds in early visual areas (e.g., V1) is more similar to that of the last (readout) layers of neural networks than to that of lower layers. This suggests that the human visual system may be subject to very different learning pressures than those that arise in supervised training for core object recognition in artificial neural networks.
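To make the few-shot analysis concrete, the sketch below illustrates, in Python, the kind of computation the abstract refers to: estimating the m-shot discriminability of two concept manifolds with a nearest-prototype classifier, and comparing it to a prediction built from simple manifold geometry (centroid separation and within-manifold variance). The function names, the toy Gaussian data, and the simplified signal-to-noise formula are illustrative assumptions, not the authors' code or the full theory of Sorscher et al. (2022), which additionally accounts for manifold dimension and signal-noise overlap.

# Minimal sketch (not the authors' code): prototype-based few-shot discrimination
# of two "concept manifolds", i.e. matrices of activity patterns with one row per
# stimulus exemplifying a concept. The simplified SNR below is an illustrative
# assumption; the full theory also includes dimension and overlap terms.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def few_shot_accuracy(manifold_a, manifold_b, m=1, n_trials=2000):
    # Empirical m-shot accuracy of a nearest-prototype classifier.
    correct = 0
    for _ in range(n_trials):
        proto_a = manifold_a[rng.choice(len(manifold_a), m, replace=False)].mean(0)
        proto_b = manifold_b[rng.choice(len(manifold_b), m, replace=False)].mean(0)
        test = manifold_a[rng.integers(len(manifold_a))]  # random test exemplar of concept A
        correct += np.linalg.norm(test - proto_a) < np.linalg.norm(test - proto_b)
    return correct / n_trials

def predicted_accuracy(manifold_a, manifold_b, m=1):
    # Geometry-based prediction: signal = squared centroid separation,
    # noise = within-manifold variance of the test point plus the m-shot prototype.
    signal = np.sum((manifold_a.mean(0) - manifold_b.mean(0)) ** 2)
    noise = manifold_a.var(0).sum() * (1 + 1.0 / m)
    return norm.cdf(np.sqrt(signal / noise))  # accuracy as a Gaussian CDF of sqrt(SNR)

# Toy example: two Gaussian "concept manifolds" in a 50-dimensional response space.
a = rng.normal(0.0, 1.0, size=(200, 50))
b = rng.normal(0.3, 1.0, size=(200, 50))
print("empirical 1-shot accuracy :", few_shot_accuracy(a, b, m=1))
print("geometry-based prediction :", predicted_accuracy(a, b, m=1))

On this toy data the empirical accuracy and the geometric prediction agree roughly, which is the kind of correspondence the theory formalizes; in the study, the manifolds would instead be voxel responses (or model-layer activations) to images exemplifying each visual concept.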
