pith. machine review for the scientific record.

arxiv: 1810.07052 · v3 · submitted 2018-10-16 · 💻 cs.LG · cs.CV · stat.ML

Recognition: unknown

Shallow-Deep Networks: Understanding and Mitigating Network Overthinking

Authors on Pith: no claims yet
classification: 💻 cs.LG · cs.CV · stat.ML
keywords: overthinking, effect, prediction, characterize, correct, destructive, dnns, final
abstract

We characterize a prevalent weakness of deep neural networks (DNNs), overthinking, which occurs when a DNN can reach correct predictions before its final layer. Overthinking is computationally wasteful, and it can also be destructive when, by the final layer, a correct prediction changes into a misclassification. Understanding overthinking requires studying how each prediction evolves during a DNN's forward pass, which conventionally is opaque. For prediction transparency, we propose the Shallow-Deep Network (SDN), a generic modification to off-the-shelf DNNs that introduces internal classifiers. We apply SDN to four modern architectures, trained on three image classification tasks, to characterize the overthinking problem. We show that SDNs can mitigate the wasteful effect of overthinking with confidence-based early exits, which reduce the average inference cost by more than 50% and preserve the accuracy. We also find that the destructive effect occurs for 50% of misclassifications on natural inputs and that it can be induced, adversarially, with a recent backdooring attack. To mitigate this effect, we propose a new confusion metric to quantify the internal disagreements that will likely lead to misclassifications.
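The two mitigations the abstract describes can be sketched with plain NumPy. This is a minimal illustration, not the paper's implementation: the threshold value, the function names, and the toy logits are all assumptions. The early-exit rule stops at the first internal classifier whose top softmax probability clears a confidence threshold; the confusion score here is a simple stand-in, measuring what fraction of internal classifiers disagree with the final prediction.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit(internal_logits, threshold=0.9):
    """Return (exit_index, prediction): stop at the first internal
    classifier whose top softmax probability exceeds the threshold."""
    for i, logits in enumerate(internal_logits):
        p = softmax(logits)
        if p.max() >= threshold:
            return i, int(p.argmax())
    # No internal classifier was confident enough: use the final layer.
    p = softmax(internal_logits[-1])
    return len(internal_logits) - 1, int(p.argmax())

def confusion(internal_logits):
    """Fraction of internal classifiers disagreeing with the final
    prediction -- a toy proxy for the paper's confusion metric."""
    preds = [int(np.argmax(l)) for l in internal_logits]
    return sum(p != preds[-1] for p in preds) / len(preds)

# Toy forward pass: two internal classifiers plus the final layer.
logits = [
    [0.2, 0.1, 0.1],   # uncertain -> keep going
    [4.0, 0.1, 0.1],   # confident in class 0 -> exit here
    [0.3, 3.0, 0.1],   # the final layer would have flipped to class 1
]
layer, pred = early_exit(logits, threshold=0.9)
print(layer, pred)        # exits at internal classifier 1 with class 0
print(confusion(logits))  # 2 of 3 classifiers disagree with the final one
```

In this toy example the early exit both saves the remaining layers and avoids the destructive flip at the final layer, while a high confusion score flags that the internal classifiers disagree.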

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. EdgeServing: Deadline-Aware Multi-DNN Serving at the Edge

    cs.DC · 2026-05 · unverdicted · novelty 5.0

    EdgeServing schedules multi-DNN inference on edge GPUs via time-division sharing and early exits, using a stability score to minimize system-wide SLO violations and P95 latency.

  2. A Comparative Study of CNN Optimization Methods for Edge AI: Exploring the Role of Early Exits

    cs.AI · 2026-04 · unverdicted · novelty 4.0

    Combining pruning, quantization, and early exits in CNNs reduces inference latency and memory on real edge devices with minimal accuracy loss.