October 2020
Volume 20, Issue 11
Open Access
Vision Sciences Society Annual Meeting Abstract  |   October 2020
Is Rodent Visual Cortex Really Just a Randomly Initialized Neural Network?
Author Affiliations & Notes
  • Colin Conwell
    Harvard University
  • George Alvarez
    Harvard University
  • Footnotes
    Acknowledgements  Many thanks to Dr. Michael Buice (Allen Institute) for assistance in preparing the visual coding dataset and to Dr. Andrei Barbu (Massachusetts Institute of Technology) for suggestions.
Journal of Vision October 2020, Vol.20, 968. doi:https://doi.org/10.1167/jov.20.11.968

Colin Conwell, George Alvarez; Is Rodent Visual Cortex Really Just a Randomly Initialized Neural Network? Journal of Vision 2020;20(11):968. https://doi.org/10.1167/jov.20.11.968.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Image-recognizing deep neural networks now provide the gold standard for modeling primate visual cortex, predicting aggregate and individual neural profiles with striking accuracy. Their success in modeling rodent visual cortex, on the other hand, has been more muted. Recent findings (Cadena et al., 2019) have suggested that randomly initialized (never trained) networks provide about as predictive a set of features as the same networks trained on image recognition, calling into question the use of such networks for modeling markedly different brains. We re-examine this finding with a methodology consisting of three components: (1) the Allen Institute Brain Observatory two-photon calcium-imaging visual coding dataset (de Vries et al., 2018); (2) a battery of 11 ImageNet-pretrained architectures; and (3) a cross-validated nonlinear least squares regression analysis in which we iteratively build a predicted representational dissimilarity matrix from the features of each model for a given neural site and compare it to the actual representational dissimilarity matrix computed on the images used by the Allen Institute. Contrary to previous findings, we find that ImageNet-pretrained models almost categorically outperform their randomly initialized counterparts by a large margin (Student's t(460) = 7.3, p = 1.25e-12, Cohen's d = 0.78). However, even the most performant model (SqueezeNet, with mean R² of 0.116 ± 0.075 SD) falls far short of the ceiling suggested by the split-half reliability of the neural data (mean R² of 0.58 ± 0.252 SD), suggesting that substantial room remains for innovation in the engineering of both model architectures and training tasks. More broadly, this result deepens the ongoing mystery of how exactly standard neural networks can serve as models for the rich diversity (and fiendish complexity) of biological brains at scale, even when that scale is the size of a mouse.
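The regression analysis described above can be illustrated with a minimal sketch. This is not the authors' pipeline; it assumes synthetic feature and response matrices, builds per-feature component RDMs, fits nonnegative least-squares weights on training image pairs, and scores the predicted against the actual RDM on held-out pairs. All variable names and the choice of correlation distance for the neural RDM are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.spatial.distance import pdist, squareform
from sklearn.model_selection import KFold

# Synthetic stand-ins for model activations and neural responses
# (images x features, images x neurons); purely illustrative.
rng = np.random.default_rng(0)
n_images, n_features = 40, 10
features = rng.standard_normal((n_images, n_features))
responses = features @ rng.standard_normal((n_features, 25)) \
            + 0.5 * rng.standard_normal((n_images, 25))

# Actual RDM: pairwise correlation distances between neural response patterns.
neural_rdm = squareform(pdist(responses, metric="correlation"))

# Component RDMs: squared differences along each model feature dimension.
component_rdms = np.stack(
    [squareform(pdist(features[:, [j]], metric="sqeuclidean"))
     for j in range(n_features)]
)

def upper(rdm, idx):
    """Vectorize the upper triangle of an RDM restricted to a subset of images."""
    sub = rdm[np.ix_(idx, idx)]
    return sub[np.triu_indices(len(idx), k=1)]

# Cross-validate over image splits: fit nonnegative weights on training pairs,
# then score predicted vs. actual dissimilarities on held-out pairs.
scores = []
for train, test in KFold(5, shuffle=True, random_state=0).split(np.arange(n_images)):
    X_train = np.stack([upper(c, train) for c in component_rdms], axis=1)
    w, _ = nnls(X_train, upper(neural_rdm, train))
    X_test = np.stack([upper(c, test) for c in component_rdms], axis=1)
    y_test = upper(neural_rdm, test)
    pred = X_test @ w
    ss_res = np.sum((y_test - pred) ** 2)
    ss_tot = np.sum((y_test - y_test.mean()) ** 2)
    scores.append(1.0 - ss_res / ss_tot)

print(f"mean cross-validated R^2: {np.mean(scores):.3f}")
```

Splitting by images (rather than by RDM entries) keeps each held-out pair fully unseen during fitting, which is the sense in which the R² values reported above are cross-validated.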
