Vision Sciences Society Annual Meeting Abstract | December 2022
Journal of Vision, Volume 22, Issue 14 | Open Access
Brain-optimized neural networks reveal evidence for non-hierarchical representation in human visual cortex
Author Affiliations & Notes
  • Ghislain St-Yves
    University of Minnesota
  • Emily Allen
    University of Minnesota
  • Yihan Wu
    University of Minnesota
  • Kendrick Kay
    University of Minnesota
  • Thomas Naselaris
    University of Minnesota
  • Footnotes
    Acknowledgements: Collection of the NSD dataset was supported by NSF CRCNS grants IIS-1822683 and IIS-1822929.
Journal of Vision December 2022, Vol.22, 4322. doi:https://doi.org/10.1167/jov.22.14.4322
Citation: Ghislain St-Yves, Emily Allen, Yihan Wu, Kendrick Kay, Thomas Naselaris; Brain-optimized neural networks reveal evidence for non-hierarchical representation in human visual cortex. Journal of Vision 2022;22(14):4322. https://doi.org/10.1167/jov.22.14.4322.
      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Task-optimized deep neural networks (DNNs) have been shown to yield impressively accurate predictions of brain activity in the primate visual system. For most such networks, layer depth generally aligns with the progression of visual areas from V1 to V4, and this result has been construed as evidence that V1-V4 instantiate a hierarchical computation. To test this interpretation, we analyzed the Natural Scenes Dataset, a massive dataset of 7T fMRI measurements of human brain activity in response to up to 30,000 natural scene presentations per subject. We used this dataset to directly optimize DNNs to predict responses in V1-V4, flexibly allowing features to distribute across layers in whatever way best improves prediction of brain activity. Our results challenge three aspects of hierarchical computation. First, we find only a marginal advantage of jointly training on V1-V4 relative to training independent DNNs on each of these brain areas. This suggests that data from different areas offer largely independent constraints on the model. Second, the independent DNNs do not show the typical alignment of network layer depth with visual areas, suggesting that this alignment may arise for reasons other than computational depth. Finally, we performed transfer learning between the DNN features learned on each visual area and found that features learned on anterior areas (e.g., V4) generalize poorly to the representations found in more posterior areas (e.g., V1). Together, these results indicate that the features represented in V1-V4 do not necessarily bear hierarchical relationships to one another. Overall, we suggest that human visual areas V1-V4 do not serve only as a pre-processing stream for generating higher visual representations, but may also operate as a parallel system of representation that can serve multiple independent functions.
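
The following is a minimal, illustrative PyTorch sketch of the kind of brain-optimized encoding model and transfer test described above. It is not the authors' architecture or code; the layer sizes, voxel counts, image resolution, and training settings are assumptions chosen only to make the idea concrete.

# Illustrative sketch (not the authors' code): a small "brain-optimized" CNN
# encoding model that maps images to voxel responses for one visual area,
# plus a transfer-learning check in the spirit of the abstract.
# All layer sizes, voxel counts, and training settings here are assumptions.

import torch
import torch.nn as nn


class AreaEncoder(nn.Module):
    """Tiny convolutional backbone + linear readout predicting voxel responses."""

    def __init__(self, n_voxels: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        self.readout = nn.Linear(64 * 4 * 4, n_voxels)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.readout(self.backbone(images))


def fit(model: nn.Module, images: torch.Tensor, responses: torch.Tensor,
        epochs: int = 10, lr: float = 1e-3) -> None:
    """Least-squares fit of predicted to measured voxel responses."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), responses)
        loss.backward()
        opt.step()


# Toy stand-ins for stimulus images and fMRI responses (real analyses use NSD).
images = torch.randn(128, 3, 64, 64)
v1_responses = torch.randn(128, 500)   # hypothetical V1 voxel count
v4_responses = torch.randn(128, 300)   # hypothetical V4 voxel count

# 1) Independent brain-optimized encoders, one per area.
v1_model = AreaEncoder(n_voxels=500)
v4_model = AreaEncoder(n_voxels=300)
fit(v1_model, images, v1_responses)
fit(v4_model, images, v4_responses)

# 2) Transfer test: freeze the backbone learned on V4 and fit only a new
#    linear readout for V1. Poor V1 prediction from V4-optimized features,
#    relative to the V1-optimized model, would mirror the asymmetric
#    generalization reported in the abstract.
transfer_model = AreaEncoder(n_voxels=500)
transfer_model.backbone.load_state_dict(v4_model.backbone.state_dict())
for p in transfer_model.backbone.parameters():
    p.requires_grad = False
fit(transfer_model, images, v1_responses)

In practice, such comparisons would be made on held-out stimuli and real voxel responses; the random tensors above are placeholders that only exercise the training loop.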
