September 2024
Volume 24, Issue 10
Open Access
Vision Sciences Society Annual Meeting Abstract
Sparse null codes emerge and dominate representations in deep neural network vision models
Author Affiliations & Notes
  • Brian S. Robinson
    Johns Hopkins University Applied Physics Laboratory
  • Nathan Drenkow
    Johns Hopkins University Applied Physics Laboratory
  • Colin Conwell
    Johns Hopkins University
  • Michael F. Bonner
    Johns Hopkins University
  • Footnotes
    Acknowledgements  This work was supported by funding from the Johns Hopkins University Applied Physics Laboratory
Journal of Vision September 2024, Vol. 24, 1125. https://doi.org/10.1167/jov.24.10.1125

      Brian S. Robinson, Nathan Drenkow, Colin Conwell, Michael F. Bonner; Sparse null codes emerge and dominate representations in deep neural network vision models. Journal of Vision 2024;24(10):1125. https://doi.org/10.1167/jov.24.10.1125.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Representations in vision-based deep neural networks and biological vision are often analyzed from the perspective of the image features they encode, such as contours, textures, and object parts. In this work, we present evidence for an alternative, more abstract type of representation in deep neural networks, which we refer to as a “null code”. Through a series of analyses inspecting the embeddings of a range of neural networks, including different transformer architectures and a recent high-performing convolutional neural network, we observe null codes that are both statistically and qualitatively distinct from the feature-related codes more commonly reported in vision models. These null codes are highly sparse, have a single unique activation pattern for each network, emerge abruptly at intermediate network depths, and are activated in a feature-independent manner by weakly informative image regions, such as backgrounds. We additionally find that these sparse null codes closely approximate the first principal component of representations in middle and later network layers across all analyzed models, a finding with major methodological and conceptual implications for relating deep neural networks to biological vision. In sum, these findings reveal a new class of highly abstract representations that emerge as major components of modern deep vision models: sparse null codes that seem to indicate the absence of features rather than serving as feature detectors.
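The abstract's central quantitative observation, that a sparse null code can closely match the first principal component of a network's embeddings, can be illustrated with a small synthetic sketch. Note this is not the authors' analysis code: the dimensions, background fraction, and activation magnitudes below are hypothetical, chosen only to show why a single sparse pattern shared by many "background" tokens dominates the top principal component.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, dim = 1000, 64

# Hypothetical setup: most token embeddings carry dense feature codes...
features = rng.normal(0.0, 1.0, size=(n_tokens, dim))

# ...while "background" tokens all share one sparse activation pattern:
# strong activity in a few dimensions, zero elsewhere.
null_code = np.zeros(dim)
null_code[:3] = 20.0  # sparse: only 3 of 64 dimensions active
is_background = rng.random(n_tokens) < 0.3
embeddings = np.where(is_background[:, None], null_code, features)

# First principal component of the centered embeddings, via SVD.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = vt[0]

# Because the shared sparse pattern accounts for far more variance than
# any feature dimension, PC1 aligns with the null-code direction.
alignment = abs(pc1 @ null_code) / np.linalg.norm(null_code)
print(f"cosine(PC1, null code) = {alignment:.3f}")
```

The point of the sketch is that no individual feature needs high variance for this to happen; a single activation pattern repeated across many weakly informative tokens is enough to capture the top principal component, which is why standard PCA-based comparisons between networks and brains can be dominated by such a code.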
