September 2021
Volume 21, Issue 9
Open Access
Vision Sciences Society Annual Meeting Abstract  |   September 2021
VISCNN: A tool for Visualizing Interpretable Subgraphs in CNNs
Author Affiliations
  • Christopher Hamblin
    Harvard University
  • George Alvarez
    Harvard University
Journal of Vision September 2021, Vol.21, 2674. doi:https://doi.org/10.1167/jov.21.9.2674

      Christopher Hamblin, George Alvarez; VISCNN: A tool for Visualizing Interpretable Subgraphs in CNNs. Journal of Vision 2021;21(9):2674. https://doi.org/10.1167/jov.21.9.2674.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Deep convolutional neural networks (CNNs) have become prominent models of biological visual processing, but have been criticized as replacing one black box (the brain) with another (the CNN). To help address this critique, we present a new tool for visualizing interpretable subgraphs in CNNs (VISCNN) that can both enhance the interpretability of CNN computations and help guide hypothesis generation. Olah et al. (2020) showed that CNNs can be decomposed into small, interpretable circuits, which combine simple feature detectors into complex ones. What remains unclear from their work is how to find such circuits in a quantitatively principled way, given the combinatorially explosive number of possible subcircuits from pixel space to a downstream feature in a deep CNN. VISCNN is a software tool that enables this by allowing researchers to query CNNs for circuits that generate features of interest. The tool identifies parts of a CNN's computational graph whose removal would significantly affect a target feature's expression, and weights the preceding nodes and edges in the graph accordingly. The researcher can then quickly and intuitively explore the latent data-processing streams in their CNN models, first querying for subgraphs, then clicking through the returned nodes and edges to view underlying feature visualizations (Olah et al., 2017), activation maps, and convolutional kernels. VISCNN works by repurposing metrics from neural network pruning, which conventionally rank neural network units by their importance for preserving the final loss. Following Molchanov et al. (2017), we can instead approximate the importance of any intermediate activation map for a downstream feature. VISCNN transforms modern computer vision models into easily explorable empirical objects for the vision science community to study.
There are doubtless many interesting latent computations performed by such models, and VISCNN allows researchers to probe this vast space of computations for targeted, interpretable circuits.
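The pruning metric the abstract repurposes can be illustrated with a short sketch. Under a first-order Taylor approximation (Molchanov et al., 2017), the change in a downstream target feature caused by zeroing an intermediate activation map is approximately the absolute sum of that map multiplied elementwise by the gradient of the target with respect to it. The function and variable names below are hypothetical illustrations, not the actual VISCNN API; the activations and gradients are random stand-ins for what a framework's autograd would supply.

```python
import numpy as np

def taylor_importance(activation, gradient):
    """First-order Taylor importance score: the approximate change in a
    downstream target feature if this activation map were zeroed, i.e.
    |sum(activation * gradient)|. Hypothetical illustration, not VISCNN's API."""
    return abs(np.sum(activation * gradient))

# Toy example: rank three intermediate channels by their approximate
# importance for a single downstream target feature.
rng = np.random.default_rng(0)
acts = rng.normal(size=(3, 8, 8))   # 3 channels of 8x8 activation maps (stand-ins)
grads = rng.normal(size=(3, 8, 8))  # d(target feature)/d(activation), per map
scores = [taylor_importance(a, g) for a, g in zip(acts, grads)]
ranking = np.argsort(scores)[::-1]  # most important channel first
```

Ranking every node and edge in the graph by such a score, then thresholding, yields the weighted subgraph that a tool like VISCNN can return for a queried feature.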
