Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2024
Investigating power laws in neural network models of visual cortex
Author Affiliations
  • Keaton Townley, Johns Hopkins University
  • Mick Bonner, Johns Hopkins University
Journal of Vision September 2024, Vol. 24, 1209. doi: https://doi.org/10.1167/jov.24.10.1209

Keaton Townley, Mick Bonner; Investigating power laws in neural network models of visual cortex. Journal of Vision 2024;24(10):1209. https://doi.org/10.1167/jov.24.10.1209.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Recent work has shown that both the visual cortex and deep neural networks exhibit a power law in the covariance spectra of their representations, suggesting that optimal visual representations have a high-dimensional structure. Of particular interest is the power-law exponent, which defines how variance scales across latent dimensions. Here we extracted layer-wise activations from convolutional neural networks (CNNs) pre-trained on a variety of tasks and characterized the power-law exponent of their covariance spectra. We found that CNNs with lower power-law exponents were better models of the visual cortex. We also observed that the covariance spectra differed when spatial information was taken into account: the covariance spectra computed between convolution channels, or within a single spatial location of a convolutional feature map, exhibited a minimum power-law exponent of 1, similar to what has been observed in the visual cortex, whereas the exponent for the full tensor of layer activations could be significantly lower. To investigate further, we developed a method for initializing untrained CNNs with a power law in the covariance spectrum of their weights. Here we found that the power-law exponent of the weights significantly modulated the exponent of their activations, and that a lower power-law exponent improved their ability to model neural activity in the visual cortex. Interestingly, these networks continued to exhibit a minimum power-law exponent of 1 even when the power-law exponent of their weights was far smaller. In sum, our work suggests that the power-law exponent of channel covariance spectra in CNNs is a key factor underlying model-brain correspondence and that there may be fundamental constraints on the power-law exponent of deep neural network representations.
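
To make the two quantities in the abstract concrete, below is a minimal Python/NumPy sketch, not the authors' published pipeline, of (1) estimating a power-law exponent from a covariance eigenspectrum and (2) drawing weights whose covariance eigenvalues decay as n^(-alpha). The function names, the fit range, and the choice of a least-squares fit in log-log space are all illustrative assumptions; the abstract does not specify these details.

```python
# Illustrative sketch only; the paper does not publish this code.
import numpy as np

def powerlaw_exponent(activations, fit_range=(10, 500)):
    """Estimate alpha, where the n-th covariance eigenvalue ~ n**(-alpha).

    activations: (n_stimuli, n_features) matrix of layer responses.
    fit_range: rank interval used for the log-log fit (an assumption;
    the abstract does not say how the exponent was measured).
    """
    X = activations - activations.mean(axis=0, keepdims=True)
    # Covariance eigenvalues via SVD of the centered data matrix:
    # eigenvalue_n = s_n**2 / (n_stimuli - 1), sorted in descending order.
    s = np.linalg.svd(X, compute_uv=False)
    eig = s**2 / (X.shape[0] - 1)
    ranks = np.arange(1, eig.size + 1)
    lo, hi = fit_range
    hi = min(hi, eig.size)
    slope, _ = np.polyfit(np.log(ranks[lo:hi]), np.log(eig[lo:hi]), 1)
    return -slope  # alpha; a decaying spectrum gives alpha > 0

def powerlaw_conv_init(c_out, c_in, k, alpha, seed=None):
    """Draw conv weights (c_out, c_in, k, k) so that the covariance across
    the c_out filters has eigenvalues proportional to n**(-alpha).
    A hypothetical construction for the initialization described above.
    """
    rng = np.random.default_rng(seed)
    d = c_in * k * k
    # Random orthonormal eigenbasis from the QR decomposition of a Gaussian.
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    lam = np.arange(1, d + 1, dtype=float) ** -alpha
    # Each filter is N(0, Q diag(lam) Q^T): white noise scaled per
    # eigendirection, then rotated into the eigenbasis.
    w = (rng.standard_normal((c_out, d)) * np.sqrt(lam)) @ q.T
    return w.reshape(c_out, c_in, k, k)
```

For convolutional activations of shape (N, C, H, W), the distinction drawn in the abstract amounts to different reshapes before the spectrum is computed: a single spatial location acts[:, :, i, j] (or a spatially averaged (N, C) matrix) yields the channel-wise spectrum, while acts.reshape(N, -1) yields the full-tensor spectrum, whose exponent can fall below 1.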
