pith. machine review for the scientific record. sign in

arxiv: 2603.23677 · v2 · submitted 2026-03-24 · 💻 cs.CV · cs.AI

Recognition: 2 theorem links

· Lean Theorem

Prototype Fusion: A Training-Free Multi-Layer Approach to OOD Detection

Authors on Pith no claims yet

Pith reviewed 2026-05-15 00:23 UTC · model grok-4.3

classification 💻 cs.CV cs.AI
keywords OOD detectionmulti-layer featuresprototypescosine similaritytraining-freeimage classificationout-of-distribution
0
0 comments X

The pith

Aggregating features from multiple layers creates effective prototypes for OOD detection without any training.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Current OOD detection methods typically rely on features from the penultimate layer of neural networks. This paper shows that intermediate convolutional layers contain equally useful information for identifying out-of-distribution samples. The proposed method combines class-wise mean embeddings from several layers, normalizes them, and uses cosine similarity to score how well a test sample matches known classes. ID samples match at least one prototype closely, while OOD samples stay distant from all of them. This training-free technique improves performance on standard benchmarks across different model architectures.

Core claim

Our scheme aggregates features from successive convolutional blocks, computes class-wise mean embeddings, and applies L_2 normalization to form compact ID prototypes capturing class semantics. During inference, cosine similarity between test features and these prototypes serves as an OOD score--ID samples exhibit strong affinity to at least one prototype, whereas OOD samples remain uniformly distant.

What carries the argument

Multi-layer ID prototypes formed by averaging and L2-normalizing features from successive convolutional blocks, scored via cosine similarity at inference time.

If this is right

  • ID samples exhibit strong affinity to at least one prototype while OOD samples remain uniformly distant from all prototypes.
  • The approach improves AUROC by up to 4.41% and reduces FPR by 13.58% on state-of-the-art OOD benchmarks.
  • It delivers robust, architecture-agnostic performance for image classification without requiring training or layer-specific tuning.
  • Multi-layer feature aggregation challenges the dominance of penultimate-layer-based methods.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Similar prototype fusion could be tested on transformer models to see if multi-layer benefits hold beyond CNNs.
  • Practitioners in safety-critical applications might adopt this to enhance robustness with minimal changes to existing models.
  • The uniform distance property for OOD samples could be leveraged in other anomaly detection settings.
  • Layer selection might be optimized further based on dataset characteristics for even better results.

Load-bearing premise

Intermediate layers encode equally rich and discriminative information for OOD detection allowing simple aggregation without training or tuning.

What would settle it

Observing no performance gain or degradation when using the multi-layer prototypes compared to single-layer penultimate features on multiple diverse OOD datasets would falsify the central claim.

Figures

Figures reproduced from arXiv: 2603.23677 by Ardhendu Tripathy, Mohamed Elmahallawy, Sanjay Madria, Shreen Gul.

Figure 1
Figure 1. Figure 1: Overview of the proposed training-free OOD detector. A pretrained CNN is tapped [PITH_FULL_IMAGE:figures/full_fig_p005_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Effect of calibration set size on AUROC (top row) and false positive rate (bottom [PITH_FULL_IMAGE:figures/full_fig_p009_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: AUROC and FPR using the penultimate layer vs. the last three layers across three [PITH_FULL_IMAGE:figures/full_fig_p009_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Impact of weighting schemes across layers on overall detection performance. [PITH_FULL_IMAGE:figures/full_fig_p009_4.png] view at source ↗
Figure 5
Figure 5. Figure 5: Per-layer cosine similarity scores for the last three layers of ResNet-18 on ID data [PITH_FULL_IMAGE:figures/full_fig_p010_5.png] view at source ↗
read the original abstract

Deep learning models are increasingly deployed in safety-critical applications, where reliable out-of-distribution (OOD) detection is essential to ensure robustness. Existing methods predominantly rely on the penultimate-layer activations of neural networks, assuming they encapsulate the most informative in-distribution (ID) representations. In this work, we revisit this assumption to show that intermediate layers encode equally rich and discriminative information for OOD detection. Based on this observation, we propose a simple yet effective model-agnostic approach that leverages internal representations across multiple layers. Our scheme aggregates features from successive convolutional blocks, computes class-wise mean embeddings, and applies L_2 normalization to form compact ID prototypes capturing class semantics. During inference, cosine similarity between test features and these prototypes serves as an OOD score--ID samples exhibit strong affinity to at least one prototype, whereas OOD samples remain uniformly distant. Extensive experiments on state-of-the-art OOD benchmarks across diverse architectures demonstrate that our approach delivers robust, architecture-agnostic performance and strong generalization for image classification. Notably, it improves AUROC by up to 4.41% and reduces FPR by 13.58%, highlighting multi-layer feature aggregation as a powerful yet underexplored signal for OOD detection, challenging the dominance of penultimate-layer-based methods. Our code is available at: https://github.com/sgchr273/cosine-layers.git.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper proposes Prototype Fusion, a training-free and model-agnostic method for OOD detection. It extracts features from multiple convolutional blocks, computes class-wise mean embeddings on ID data, applies L2 normalization to form compact prototypes, and uses the maximum cosine similarity to these prototypes as the OOD score at inference. The central claim is that this multi-layer aggregation yields stronger OOD signals than standard penultimate-layer baselines, with reported AUROC gains up to 4.41% and FPR reductions up to 13.58% across diverse architectures and benchmarks.

Significance. If the results hold under closer scrutiny, the work is significant for providing a simple, parameter-free empirical procedure that leverages intermediate-layer representations without any training or layer-specific tuning. It directly challenges the field’s heavy reliance on penultimate activations and supplies reproducible code, which strengthens its utility for safety-critical deployment. The approach is internally consistent and falsifiable via the released implementation.

major comments (3)
  1. [Abstract] Abstract: the headline AUROC gain of 4.41% and FPR reduction of 13.58% are stated without identifying the specific datasets, architectures, number of layers aggregated, or exact baseline implementations, which is load-bearing for assessing whether the multi-layer claim is robust.
  2. [§3] §3 (method description): the aggregation mechanics are underspecified—e.g., whether per-block global-pooled vectors are simply concatenated before class-mean computation, how dimension mismatches across layers are resolved, and the precise rule for selecting which successive blocks to include—preventing exact reproduction of the reported prototypes.
  3. [§4] §4 (experiments): no standard deviations, multiple random seeds, or statistical significance tests accompany the tabulated improvements, so it is impossible to determine whether the gains over penultimate-layer baselines are reliable or within experimental noise.
minor comments (2)
  1. [Introduction] The assumption that intermediate layers are “equally rich” could be softened to “complementary” to avoid overstatement, as the method does not require layer equivalence for its construction.
  2. [Figures/Tables] Figure captions and table headers should explicitly list the layer indices used for each architecture to aid readers in replicating the exact prototype construction.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the thorough review and positive recommendation for minor revision. The comments highlight important aspects for improving clarity and rigor, which we address point by point below.

read point-by-point responses
  1. Referee: [Abstract] Abstract: the headline AUROC gain of 4.41% and FPR reduction of 13.58% are stated without identifying the specific datasets, architectures, number of layers aggregated, or exact baseline implementations, which is load-bearing for assessing whether the multi-layer claim is robust.

    Authors: We agree with this observation. The reported maximum AUROC improvement of 4.41% was achieved on the CIFAR-100 dataset using a ResNet-18 model by aggregating features from 4 convolutional blocks, compared against the standard penultimate-layer baseline using maximum softmax probability. The FPR reduction of 13.58% corresponds to the same configuration on the same benchmark. In the revised manuscript, we will update the abstract to specify: 'Notably, it improves AUROC by up to 4.41% on CIFAR-100 with ResNet-18 and reduces FPR by 13.58% across diverse settings.' This will provide the necessary context for readers. revision: yes

  2. Referee: [§3] §3 (method description): the aggregation mechanics are underspecified—e.g., whether per-block global-pooled vectors are simply concatenated before class-mean computation, how dimension mismatches across layers are resolved, and the precise rule for selecting which successive blocks to include—preventing exact reproduction of the reported prototypes.

    Authors: Thank you for pointing this out; we will clarify the method in the revision. Features from each convolutional block are independently globally average-pooled and L2-normalized to form per-layer vectors. Class-wise mean prototypes are computed separately for each layer. At inference, cosine similarity is calculated to the prototypes of each layer, and the OOD score is the maximum similarity across all layers. No concatenation occurs, and dimension mismatches are avoided by operating within each layer's native feature space. Successive blocks are selected as all convolutional blocks up to but not including the final fully-connected layer, typically 4 blocks for ResNet architectures. We will include a detailed algorithm box and expanded text in §3 to ensure exact reproducibility. revision: yes

  3. Referee: [§4] §4 (experiments): no standard deviations, multiple random seeds, or statistical significance tests accompany the tabulated improvements, so it is impossible to determine whether the gains over penultimate-layer baselines are reliable or within experimental noise.

    Authors: We acknowledge the value of statistical reporting. Since Prototype Fusion is a deterministic post-hoc method with no stochastic training, the results are reproducible given the same ID data split. However, to address this, we will conduct experiments with 5 different random seeds for data shuffling in prototype computation and report means with standard deviations in the updated tables. We will also perform paired t-tests where appropriate to confirm significance of improvements. This will be added in the revised §4. revision: yes

Circularity Check

0 steps flagged

No significant circularity identified

full rationale

The paper describes a training-free procedure: aggregate features from convolutional blocks, compute class-wise means on ID data, apply L2 normalization to obtain prototypes, and use maximum cosine similarity at inference as the OOD score. No equations or derivations are present that reduce to fitted parameters by construction, no self-citations are invoked as load-bearing uniqueness theorems, and no ansatz is smuggled via prior work. The construction is a direct, self-contained empirical recipe whose performance claims rest on reported benchmarks rather than internal self-reference. This matches the default case of an honest non-finding.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

The method introduces no free parameters or invented entities. It relies on the domain assumption that features from intermediate convolutional layers are sufficiently informative for OOD tasks when aggregated.

axioms (1)
  • domain assumption Intermediate layers of CNNs encode equally rich and discriminative information for OOD detection as the penultimate layer.
    Explicitly stated in the abstract as the basis for revisiting the penultimate-layer assumption and proposing multi-layer aggregation.

pith-pipeline@v0.9.0 · 5553 in / 1171 out tokens · 56117 ms · 2026-05-15T00:23:23.740530+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

21 extracted references · 21 canonical work pages

  1. [1]

    arXiv (2023)

    Ammar, M.B., Belkhir, N., Popescu, S., Manzanera, A., Franchi, G.: Neco: Neural collapse based out-of-distribution detection. arXiv (2023)

  2. [2]

    In: Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI) (2024)

    Arnez, F., et al.: Latent representation entropy density for distribution shift detection. In: Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI) (2024)

  3. [3]

    In: Proceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition (CVPR)

    Dong, X., Guo, J., Li, A., Ting, W.M., Liu, C., Kung, H.T.: Neural mean discrepancy for efficient out-of-distribution detection. In: Proceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition (CVPR). pp. 19195–19205 (2021)

  4. [4]

    In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)

    Guan, X., Chen, J., Bu, S., Zhou, Y., Zheng, W., Wang, R.: Exploiting discrepancy in fea- ture statistic for out-of-distribution detection. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). vol. 38, pp. 19858–19866 (2024)

  5. [5]

    arXiv (2025)

    Guglielmo, Masana: Leveraging intermediate representations for better out-of-distribution detection. arXiv (2025)

  6. [6]

    arXiv (2025)

    Harun, M.Y., Gallardo, J., Kanan, C.: Controlling neural collapse enhances out-of- distribution detection and transfer learning. arXiv (2025)

  7. [7]

    arXiv (2019)

    Hendrycks, D., Basart, S., Mazeika, M., Zou, A., Kwon, J., Mostajabi, M., Steinhardt, J., Song, D.: Scaling out-of-distribution detection for real-world settings. arXiv (2019)

  8. [8]

    arXiv (2016)

    Hendrycks, D., Gimpel, K.: A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv (2016)

  9. [9]

    In: Advances in Neural Information Processing Systems (NeurIPS)

    Huang, R., Geng, A., Li, Y.: On the importance of gradients for detecting distributional shifts in the wild. In: Advances in Neural Information Processing Systems (NeurIPS). vol. 34, pp. 677–689 (2021)

  10. [10]

    arXiv (2023)

    Jelenić, et al.: Out-of-distribution detection by leveraging between-layer transformation smoothness. arXiv (2023)

  11. [11]

    In: International Workshop on Uncertainty for Safe Utilization of Ma- chine Learning in Medical Imaging

    Lambert, B., Forbes, F., Doyle, S., Dojat, M.: Multi-layer aggregation as a key to feature- based ood detection. In: International Workshop on Uncertainty for Safe Utilization of Ma- chine Learning in Medical Imaging. pp. 104–114 (2023)

  12. [12]

    In: Proceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

    Liu, L., Qin, Y.: Detecting out-of-distribution through the lens of neural collapse. In: Proceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 15424–15433 (2025)

  13. [13]

    In: Ad- vances in Neural Information Processing Systems (NeurIPS)

    Liu, W., Wang, X., Owens, J., Li, Y.: Energy-based out-of-distribution detection. In: Ad- vances in Neural Information Processing Systems (NeurIPS). vol. 33, pp. 21464–21475 (2020)

  14. [14]

    In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)

    Park, J., Jung, Y.G., Teoh, A.B.J.: Nearest neighbor guidance for out-of-distribution de- tection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). pp. 1686–1695 (2023)

  15. [15]

    arXiv (2021)

    Sehwag, V., Chiang, M., Mittal, P.: Ssd: A unified framework for self-supervised outlier detection. arXiv (2021)

  16. [16]

    In: Advances in Neural Information Processing Systems (NeurIPS)

    Sun, Y., Guo, C., Li, Y.: React: Out-of-distribution detection with rectified activations. In: Advances in Neural Information Processing Systems (NeurIPS). vol. 34, pp. 144–157 (2021)

  17. [17]

    In: Proceedings of the 39th International Conference on Machine Learning (ICML)

    Sun, Y., et al.: Out-of-distribution detection with deep nearest neighbors. In: Proceedings of the 39th International Conference on Machine Learning (ICML). pp. 20827–20840. Proceed- ings of Machine Learning Research (PMLR) (2022)

  18. [18]

    In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

    Wang, H., Li, Z., Feng, L., Zhang, W.: Vim: Out-of-distribution detection with virtual-logit matching. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4921–4930 (2022)

  19. [19]

    Frontiers in Big Data7, 1444634 (2024)

    Wang,H.,Zhao,C.,Chen,F.:Efficientout-of-distributiondetectionvialayer-adaptivescoring and early stopping. Frontiers in Big Data7, 1444634 (2024)

  20. [20]

    In: International Conference on Learning Representations (ICLR) (2025)

    Wu, Y., et al.: Pursuing feature separation based on neural collapse for out-of-distribution detection. In: International Conference on Learning Representations (ICLR) (2025)

  21. [21]

    arXiv (2025)

    Yang, G., et al.: Eood: Entropy-based out-of-distribution detection. arXiv (2025)