pith. machine review for the scientific record.

arxiv: 1611.07429 · v1 · submitted 2016-11-22 · 📊 stat.ML · cs.LG

Recognition: unknown

TreeView: Peeking into Deep Neural Networks Via Feature-Space Partitioning

Authors on Pith: no claims yet
classification: 📊 stat.ML · cs.LG
keywords: interpretability, accuracy, deep, model, models, partitioning, treeview, achieve
read the original abstract

With the advent of highly predictive but opaque deep learning models, it has become more important than ever to understand and explain the predictions of such models. Existing approaches define interpretability as the inverse of complexity and achieve interpretability at the cost of accuracy. This introduces a risk of producing interpretable but misleading explanations. As humans, we are prone to engage in this kind of behavior [mythos]. In this paper, we take a step in the direction of tackling the problem of interpretability without compromising the model accuracy. We propose to build a Treeview representation of the complex model via hierarchical partitioning of the feature space, which reveals the iterative rejection of unlikely class labels until the correct association is predicted.
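The abstract describes the mechanism only at a high level. As a rough illustration of the surrogate-tree idea, here is a minimal sketch, assuming a trained scikit-learn-style classifier (model) and a feature matrix (X). The flat KMeans partition and the treeview_surrogate helper are hypothetical simplifications for illustration, not the authors' actual algorithm.

# Minimal sketch of the TreeView idea, NOT the paper's exact algorithm:
# partition the feature space, then fit a shallow surrogate tree that
# imitates the opaque model's predictions. `model` and `X` are assumed
# to be a trained scikit-learn-style classifier and a feature matrix.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

def treeview_surrogate(model, X, n_partitions=8, max_depth=4):
    # Step 1: coarsely partition the feature space (the paper does this
    # hierarchically on learned features; flat KMeans is a simplification).
    labels = KMeans(n_clusters=n_partitions, n_init=10).fit_predict(X)
    meta = np.eye(n_partitions)[labels]  # one-hot partition membership

    # Step 2: label each sample with the black-box model's own prediction,
    # so the tree explains the model's behavior, not the ground truth.
    y_model = model.predict(X)

    # Step 3: a shallow tree over the partition features reads as a
    # hierarchy of splits, each one rejecting a set of unlikely labels.
    return DecisionTreeClassifier(max_depth=max_depth).fit(meta, y_model)

# Usage: render the tree's splits as text.
# print(export_text(treeview_surrogate(model, X)))

Because the tree is trained on the black-box model's predictions rather than the true labels, each split can be read as the model iteratively rejecting class labels it considers unlikely, in the spirit of the abstract.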

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Debunking Grad-ECLIP: A Comprehensive Study on Its Incorrectness and Fundamental Principles for Model Interpretation

    cs.CV · 2026-05 · unverdicted · novelty 4.0

    Grad-ECLIP is shown to be an equivalent but flawed variant of attention-based interpretation; two principles are proposed to ensure that model explanations reflect the original model.