pith. machine review for the scientific record.

arxiv: 1506.06579 · v1 · submitted 2015-06-22 · 💻 cs.CV · cs.LG · cs.NE

Recognition: unknown

Understanding Neural Networks Through Deep Visualization

Authors on Pith: no claims yet
classification 💻 cs.CV · cs.LG · cs.NE
keywords neural networks · produced · tools · work · activations · convnet · convnets
original abstract

Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Progress in the field will be further accelerated by the development of better tools for visualizing and interpreting neural nets. We introduce two such tools here. The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream). We have found that looking at live activations that change in response to user input helps build valuable intuitions about how convnets work. The second tool enables visualizing features at each layer of a DNN via regularized optimization in image space. Because previous versions of this idea produced less recognizable images, here we introduce several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations. Both tools are open source and work on a pre-trained convnet with minimal setup.
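The second tool described in the abstract performs gradient ascent in image space to find an input that maximally activates a chosen unit, with regularizers that keep the result from degenerating into high-frequency noise. Below is a minimal NumPy sketch of that idea on a toy linear "unit"; the function name `visualize_unit`, the box blur (standing in for a Gaussian blur), and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def blur(img, k=3):
    """Simple box blur, a stand-in for the Gaussian-blur regularizer."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def visualize_unit(weight, steps=200, lr=0.1, l2_decay=0.01, blur_every=4):
    """Regularized gradient ascent in image space on a toy linear unit.

    For activation(x) = sum(weight * x), the gradient w.r.t. x is just
    `weight`, so each step nudges the image toward the filter pattern,
    while L2 decay and periodic blurring suppress extreme pixel values
    and high-frequency artifacts.
    """
    rng = np.random.default_rng(0)
    x = rng.normal(scale=0.01, size=weight.shape)  # start from small noise
    for t in range(steps):
        grad = weight               # d(activation)/dx for a linear unit
        x = x + lr * grad           # ascent step
        x = x * (1.0 - l2_decay)    # L2-decay regularizer
        if t % blur_every == 0:
            x = blur(x)             # blur regularizer
    return x
```

With a hypothetical filter that is active on a central square, the optimized image ends up brightest inside that square, which is the qualitative behavior the paper's visualizations rely on. For a real convnet the analytic gradient above would be replaced by backpropagation through the network to the input pixels.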

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Concrete Problems in AI Safety

cs.AI · 2016-06 · accept · novelty 7.0

    The paper categorizes five concrete AI safety problems arising from flawed objectives, costly evaluation, and learning dynamics.

  2. Inside-Out: Measuring Generalization in Vision Transformers Through Inner Workings

cs.LG · 2026-04 · unverdicted · novelty 6.0

    Circuit-based metrics from Vision Transformer internals provide better label-free proxies for generalization under distribution shift than existing methods like model confidence.

  3. Seeing What Shouldn't Be There: Counterfactual GANs for Medical Image Attribution

cs.CV · 2026-05 · unverdicted · novelty 5.0

    A cycle-consistent GAN generates counterfactual medical images to attribute classification decisions more comprehensively than standard saliency methods.

  4. NeuroViz: Real-time Interactive Visualization of Forward and Backward Passes in Neural Network Training

cs.LG · 2026-05 · unverdicted · novelty 5.0

    NeuroViz offers interactive real-time visualization of neural network forward and backward passes, achieving top usability scores in a study with 31 participants compared to existing tools.