pith. machine review for the scientific record.

arxiv: 1805.12233 · v1 · submitted 2018-05-30 · 💻 cs.LG · stat.ML

Recognition: unknown

How Important Is a Neuron?

Authors on Pith: no claims yet
classification: 💻 cs.LG · stat.ML
keywords: conductance · hidden · network · unit · attribution · deep · effectiveness
original abstract

The problem of attributing a deep network's prediction to its input/base features is well-studied. We introduce the notion of conductance to extend the notion of attribution to understanding the importance of hidden units. Informally, the conductance of a hidden unit of a deep network is the flow of attribution via this hidden unit. We use conductance to understand the importance of a hidden unit to the prediction for a specific input, or over a set of inputs. We evaluate the effectiveness of conductance in multiple ways, including theoretical properties, ablation studies, and a feature selection task. The empirical evaluations are done using the Inception network over ImageNet data, and a sentiment analysis network over reviews. In both cases, we demonstrate the effectiveness of conductance in identifying interesting insights about the internal workings of these networks.
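The abstract's informal definition — conductance as the flow of attribution through a hidden unit — can be sketched numerically. The following is a minimal illustration, not the paper's code: a toy two-layer ReLU network with made-up random weights, where the path integral underlying integrated gradients is approximated by a midpoint Riemann sum and decomposed per hidden unit.

```python
import numpy as np

# Toy two-layer network: F(x) = w2 . relu(W1 @ x).
# Conductance of hidden unit j (informally, the attribution flowing through j):
#   cond_j = sum_i (x_i - x'_i) * \int_0^1 (dF/dy_j) * (dy_j/dx_i) d(alpha)
# integrated along the straight path from baseline x' to input x.
# All weights and inputs below are illustrative, not from the paper.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))  # hidden-layer weights (4 units, 3 inputs)
w2 = rng.normal(size=4)       # output-layer weights

def F(x):
    return w2 @ np.maximum(W1 @ x, 0.0)

def conductance(x, baseline, steps=2000):
    delta = x - baseline
    cond = np.zeros(len(w2))
    for k in range(steps):
        z = baseline + (k + 0.5) / steps * delta  # point on the straight path
        gate = (W1 @ z > 0).astype(float)         # ReLU on/off per hidden unit
        # dF/dy_j = w2_j;  sum_i (dy_j/dx_i) * delta_i = gate_j * (W1 @ delta)_j
        cond += w2 * gate * (W1 @ delta) / steps
    return cond

x, baseline = np.array([1.0, -0.5, 2.0]), np.zeros(3)
cond = conductance(x, baseline)
# Completeness check: conductances over hidden units sum to F(x) - F(baseline).
print(cond.sum(), F(x) - F(baseline))
```

Because this toy network is piecewise linear, the per-unit conductances should sum (up to Riemann-sum error) to the total change in output between baseline and input, mirroring the completeness property of integrated gradients.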

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Hessian-Enhanced Token Attribution (HETA): Interpreting Autoregressive LLMs

    cs.CL 2026-04 unverdicted novelty 5.0

    HETA is a new attribution framework for decoder-only LLMs that combines semantic transition vectors, Hessian-based sensitivity scores, and KL divergence to produce more faithful and human-aligned token attributions th...