pith. machine review for the scientific record. sign in

arxiv: 2605.01063 · v1 · submitted 2026-05-01 · 💻 cs.LG · cs.CV

Recognition: unknown

GEODE: Angle-Adaptive OOD Detection with Universal Scorer Compatibility

Authors on Pith no claims yet

Pith reviewed 2026-05-09 19:22 UTC · model grok-4.3

classification 💻 cs.LG cs.CV
keywords out-of-distribution detectionoutlier exposureneural collapseangle-adaptive norm lossfeature geometryscorer compatibilityboundary calibration
0
0 comments X

The pith

GEODE replicates outlier exposure's boundary calibration using an angle-adaptive norm loss to achieve consistent out-of-distribution detection performance across all standard scorers.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that outlier exposure succeeds mainly by positioning its features at the geometric boundary between in-distribution classes, specifically in the boundary-adjacent quartile. GEODE achieves the same calibration effect without any auxiliary data by using an angle-adaptive norm loss that scales each sample's target according to its angle to the nearest class mean. This approach preserves the feature geometry needed by distance-based scorers while also benefiting probability-based ones. Readers should care because deployment-time OOD types are unknown, making scorer-independent performance essential for reliable detection.

Core claim

Outlier exposure works by boundary calibration rather than broad OOD coverage, with its gain coming from features in the boundary-adjacent quartile. GEODE replicates this synthetically with an angle-adaptive norm loss whose targets scale per-sample with cosine similarity to the nearest class mean. Four theorems based on neural collapse justify the design, leading to strong performance across all scorers.

What carries the argument

Angle-adaptive norm loss that sets targets scaling with cosine similarity to the nearest class mean to replicate boundary-adjacent quartile effects.

If this is right

  • GEODE delivers consistent AUROC improvements across all seven standard scorers on CIFAR-10 without catastrophic drops on any one.
  • It outperforms standard cross-entropy training when trained for the same number of epochs.
  • When combined with outlier exposure, it reaches top results on both MSP and KNN scorers.
  • The gains extend to CIFAR-100 and larger models like WRN-28-10.
  • It avoids the damage to distance-based scorers caused by methods that push OOD into null spaces.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The geometric explanation suggests that any OOD method should be evaluated on its effect on the full range of scorers rather than just one or two.
  • Angle adaptation could be applied to other loss functions to improve compatibility in multi-scorer environments.
  • If the neural collapse theorems hold, similar adaptive mechanisms might help in related tasks like domain adaptation.
  • Future work might test whether the method scales to high-resolution images where feature geometry differs.

Load-bearing premise

The angle-adaptive norm loss exactly replicates the boundary-adjacent quartile effect of real OE data without introducing new distortions to feature geometry that could harm certain scorers or datasets.

What would settle it

Observing that GEODE underperforms vanilla training on a particular scorer or that the synthetic features do not match the geometric positions of real near-OOD samples would disprove the replication claim.

Figures

Figures reproduced from arXiv: 2605.01063 by Bruno Abrahao.

Figure 1
Figure 1. Figure 1: (a) Distance of each candidate source from real near-OOD geometry (ResNet-18, CIFAR￾10), measured as Euclidean distance in the (∥h∥/rid, maxc cos) plane. OE alone matches (dis￾tance 0.006; the next closest source is 13× further). Including the projection ratio and perpendicular norm columns of [PITH_FULL_IMAGE:figures/full_fig_p002_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Theorem 4 in pictures. (a) For each method, the fraction of OOD feature energy that lies in the classifier subspace col(W⊤) (green; productive for both logit and feature scorers) versus the null space nul(W) (gray; invisible to logit scorers, but still contributes norm). PFS pushes 92.7% of OOD energy into nul(W); GEODE preserves 93.0% in col(W ⊤). (b) Resulting near￾OOD KNN AUROC. PFS collapses to 14.38 A… view at source ↗
Figure 3
Figure 3. Figure 3: Feature norm distributions for ID (CIFAR-10, blue) and near-OOD (CIFAR-100, red) [PITH_FULL_IMAGE:figures/full_fig_p029_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Training dynamics on CIFAR-10 (ResNet-18, 200 epochs) for the GEODE [PITH_FULL_IMAGE:figures/full_fig_p030_4.png] view at source ↗
read the original abstract

Outlier Exposure (OE) is among the strongest training-based OOD detectors on standard benchmarks but exhibits scorer-dependent tradeoffs (e.g., strong on MSP, weak on KNN) and requires curated auxiliary data. We show why OE works: its features sit at the same geometric locus as real near-OOD data, with the boundary-adjacent quartile driving nearly all of OE's gain. OE is boundary calibration, not OOD coverage. GEODE (GEOmetry-preserving DEtection) replicates this calibration synthetically through an angle-adaptive norm loss in which targets scale per-sample with cosine similarity to the nearest class mean, preserving feature geometry where boundary structure matters. Four theorems grounded in neural collapse justify the design. GEODE works across all seven standard scorers on CIFAR-10 (near-OOD AUROC 89.0-92.3, far-OOD reaching 93.05; no catastrophic failure on any scorer). Since the OOD regime is unknown at deployment, this is the test that matters. GEODE outperforms vanilla CE at matched epoch counts. Combined with OE, GEODE reaches 95.0 MSP / 94.8 KNN on CIFAR-10 and beats OE on every scorer on CIFAR-100. The gains hold on WRN-28-10 (+4.5 Energy, 3 seeds). Unlike methods that push OOD into the classifier null space (e.g., PFS, 14.38 KNN AUROC, worse than random), GEODE's adaptive target preserves the geometry that distance-based scorers depend on.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript proposes GEODE, a geometry-preserving OOD detection method using an angle-adaptive norm loss (targets scaled per-sample by cosine similarity to the nearest class mean) to synthetically replicate the boundary-adjacent quartile calibration effect of Outlier Exposure (OE) without auxiliary data. Grounded in four neural collapse theorems, it claims universal compatibility with seven standard scorers, reporting near-OOD AUROC 89.0-92.3 and far-OOD up to 93.05 on CIFAR-10 (no catastrophic failures), gains over vanilla CE at matched epochs, and further improvements when combined with OE on CIFAR-10/100 and WRN-28-10.

Significance. If the geometric replication claim holds, this would be a meaningful contribution to OOD detection by addressing OE's scorer-dependent tradeoffs and auxiliary-data requirement while preserving feature geometry needed for distance-based methods (KNN, Energy). The explicit grounding in neural collapse results and the multi-scorer, multi-dataset empirical evaluation (including 3-seed WRN results) are strengths; the work offers a practical alternative to methods that distort geometry (e.g., PFS).

major comments (3)
  1. [Theoretical Analysis] Theoretical Analysis (theorems section): The four neural collapse theorems are load-bearing for justifying the angle-adaptive scaling, yet the manuscript provides no explicit derivation showing how per-sample cosine-based target adjustment preserves inter-class angles and intra-class norm distributions equivalently to real OE data; without this, the claim that distance-based scorers remain undistorted is unsupported.
  2. [Experimental Evaluation] Experimental Evaluation (CIFAR-10 results): The universal scorer compatibility claim (89.0-92.3 near-OOD AUROC across all seven scorers) rests on the assumption that the adaptive loss replicates OE's boundary-adjacent quartile locus without new distortions, but no direct geometric verification (e.g., quartile-masked AUROC deltas, norm histograms, or inter-class angle comparisons between GEODE and OE features) is reported; this is required to rule out scorer-specific effects.
  3. [Results tables] Results tables (CIFAR-10/100): While AUROCs and the +4.5 Energy gain on WRN-28-10 (3 seeds) are given, the absence of full error bars, quartile-effect ablations, and matched-epoch controls for all baselines leaves the 'no catastrophic failure' and 'outperforms vanilla CE' claims difficult to assess for robustness.
minor comments (2)
  1. [Abstract] Abstract: The seven standard scorers should be named explicitly rather than referenced generically; the GEODE acronym expansion appears late.
  2. [Method] Notation and loss definition: The angle-adaptive target scaling would benefit from an equation number and a short pseudocode block for reproducibility.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback on our manuscript. We address each major comment below with clarifications and indicate the revisions we will make to strengthen the presentation of the theoretical and empirical claims.

read point-by-point responses
  1. Referee: [Theoretical Analysis] Theoretical Analysis (theorems section): The four neural collapse theorems are load-bearing for justifying the angle-adaptive scaling, yet the manuscript provides no explicit derivation showing how per-sample cosine-based target adjustment preserves inter-class angles and intra-class norm distributions equivalently to real OE data; without this, the claim that distance-based scorers remain undistorted is unsupported.

    Authors: We agree that an explicit derivation connecting the per-sample cosine scaling to the preservation of inter-class angles and intra-class norm distributions (as achieved by real OE) would make the link to neural collapse more direct. The four theorems establish the NC regime conditions under which geometry is preserved, and the adaptive loss is constructed to target the boundary-adjacent quartile locus without pushing features into the null space. To address the gap, we will add a dedicated derivation subsection in the revised theorems section that step-by-step shows how the cosine-based target adjustment maintains the required angle and norm statistics equivalently to OE, thereby supporting undistorted performance for distance-based scorers. revision: yes

  2. Referee: [Experimental Evaluation] Experimental Evaluation (CIFAR-10 results): The universal scorer compatibility claim (89.0-92.3 near-OOD AUROC across all seven scorers) rests on the assumption that the adaptive loss replicates OE's boundary-adjacent quartile locus without new distortions, but no direct geometric verification (e.g., quartile-masked AUROC deltas, norm histograms, or inter-class angle comparisons between GEODE and OE features) is reported; this is required to rule out scorer-specific effects.

    Authors: The consistent AUROC range across all seven scorers provides indirect support for the absence of new distortions, but we acknowledge that direct geometric verification would more rigorously confirm replication of OE's quartile locus. In the revision we will add norm histograms, inter-class angle distributions, and quartile-masked AUROC delta comparisons between GEODE, OE, and vanilla CE features on CIFAR-10. These analyses will explicitly demonstrate that the adaptive loss replicates the boundary calibration geometry without introducing scorer-specific artifacts. revision: yes

  3. Referee: [Results tables] Results tables (CIFAR-10/100): While AUROCs and the +4.5 Energy gain on WRN-28-10 (3 seeds) are given, the absence of full error bars, quartile-effect ablations, and matched-epoch controls for all baselines leaves the 'no catastrophic failure' and 'outperforms vanilla CE' claims difficult to assess for robustness.

    Authors: The WRN-28-10 results already report 3-seed averages for the +4.5 Energy gain, but we agree that fuller statistical reporting and controls would improve assessment of robustness. We will revise the results tables and experimental section to include standard-deviation error bars for all reported AUROCs, add targeted quartile-effect ablations, and explicitly present matched-epoch comparisons against all baselines. This will better substantiate the 'no catastrophic failure' and 'outperforms vanilla CE' statements. revision: yes

Circularity Check

0 steps flagged

No circularity: derivation grounded in external neural collapse analysis and empirical validation

full rationale

The paper derives the angle-adaptive norm loss from geometric analysis of OE features (boundary-adjacent quartile locus) and four theorems grounded in neural collapse, which are presented as independent prior results rather than self-referential. The universal scorer compatibility claim (AUROC ranges across seven scorers) is supported by direct experiments on CIFAR-10/CIFAR-100, not by any equation that forces the outcome from the loss definition itself. No self-definitional steps, fitted inputs renamed as predictions, or load-bearing self-citations appear in the provided derivation chain; the central result remains falsifiable against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 0 invented entities

Based solely on the abstract, the central claim rests on four unspecified theorems from neural collapse theory and the geometric equivalence between OE features and near-OOD data; no explicit free parameters or invented entities are named.

axioms (1)
  • domain assumption Neural collapse organizes features such that boundary-adjacent quartiles dominate OE gains
    Invoked to justify why synthetic angle-adaptive targets replicate real OE calibration

pith-pipeline@v0.9.0 · 5581 in / 1316 out tokens · 27150 ms · 2026-05-09T19:22:34.204383+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

64 extracted references · 6 canonical work pages

  1. [1]

    International Conference on Learning Representations , year=

    A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks , author=. International Conference on Learning Representations , year=

  2. [2]

    International Conference on Learning Representations , year=

    Enhancing the Reliability of Out-of-Distribution Image Detection in Neural Networks , author=. International Conference on Learning Representations , year=

  3. [3]

    Advances in Neural Information Processing Systems , volume=

    Energy-Based Out-of-Distribution Detection , author=. Advances in Neural Information Processing Systems , volume=

  4. [4]

    International Conference on Machine Learning , year=

    Out-of-Distribution Detection with Deep Nearest Neighbors , author=. International Conference on Machine Learning , year=

  5. [5]

    Advances in Neural Information Processing Systems , volume=

    A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks , author=. Advances in Neural Information Processing Systems , volume=

  6. [6]

    2022 , doi=

    Wang, Haoqi and Li, Zhizhong and Feng, Litong and Zhang, Wayne , booktitle=. 2022 , doi=

  7. [7]

    Sun, Yiyou and Guo, Chuan and Li, Yixuan , booktitle=

  8. [8]

    International Conference on Learning Representations , year=

    Extremely Simple Activation Shaping for Out-of-Distribution Detection , author=. International Conference on Learning Representations , year=

  9. [9]

    International Conference on Learning Representations , year=

    Scaling for Training Time and Post-hoc Out-of-Distribution Detection Enhancement , author=. International Conference on Learning Representations , year=

  10. [10]

    2022 , doi=

    Sun, Yiyou and Li, Yixuan , booktitle=. 2022 , doi=

  11. [11]

    Advances in Neural Information Processing Systems , volume=

    On the Importance of Gradients for Detecting Distributional Shifts in the Wild , author=. Advances in Neural Information Processing Systems , volume=

  12. [12]

    International Conference on Learning Representations , year=

    Deep Anomaly Detection with Outlier Exposure , author=. International Conference on Learning Representations , year=

  13. [13]

    International Conference on Learning Representations , year=

    Pursuing Feature Separation based on Neural Collapse for Out-of-Distribution Detection , author=. International Conference on Learning Representations , year=

  14. [14]

    International Conference on Machine Learning , year=

    Mitigating Neural Network Overconfidence with Logit Normalization , author=. International Conference on Machine Learning , year=

  15. [15]

    2022 , url=

    Du, Xuefeng and Wang, Zhaoning and Cai, Mu and Li, Yixuan , booktitle=. 2022 , url=

  16. [16]

    International Conference on Learning Representations , year=

    Non-Parametric Outlier Synthesis , author=. International Conference on Learning Representations , year=

  17. [17]

    International Conference on Learning Representations , year=

    How to Exploit Hyperspherical Embeddings for Out-of-Distribution Detection? , author=. International Conference on Learning Representations , year=

  18. [18]

    Advances in Neural Information Processing Systems , volume=

    Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty , author=. Advances in Neural Information Processing Systems , volume=

  19. [19]

    IEEE/CVF Winter Conference on Applications of Computer Vision , year=

    Mixture Outlier Exposure: Towards Out-of-Distribution Detection in Fine-Grained Environments , author=. IEEE/CVF Winter Conference on Applications of Computer Vision , year=

  20. [20]

    Learning Confidence for Out -of-Distribution Detection in Neural Networks,

    Learning Confidence for Out-of-Distribution Detection in Neural Networks , author=. arXiv preprint arXiv:1802.04865 , year=

  21. [21]

    Generalized

    Hsu, Yen-Chang and Shen, Yilin and Jin, Hongxia and Kira, Zsolt , booktitle=. Generalized. 2020 , doi=

  22. [22]

    2024 , url=

    Zhang, Jingyang and Yang, Jingkang and Wang, Pengyun and Wang, Haoqi and Lin, Yueqian and Zhang, Haoran and Sun, Yiyou and Du, Xuefeng and Zhou, Kaiyang and Zhang, Wayne and Li, Yixuan and Liu, Ziwei and Chen, Yiran and Li, Hai , journal=. 2024 , url=

  23. [23]

    Learning Multiple Layers of Features from Tiny Images , author=

  24. [24]

    Proceedings of the IEEE , volume=

    Gradient-Based Learning Applied to Document Recognition , author=. Proceedings of the IEEE , volume=. 1998 , doi=

  25. [25]

    NeurIPS Workshop on Deep Learning and Unsupervised Feature Learning , year=

    Reading Digits in Natural Images with Unsupervised Feature Learning , author=. NeurIPS Workshop on Deep Learning and Unsupervised Feature Learning , year=

  26. [26]

    IEEE/CVF Conference on Computer Vision and Pattern Recognition , year=

    Describing Textures in the Wild , author=. IEEE/CVF Conference on Computer Vision and Pattern Recognition , year=

  27. [27]

    IEEE Transactions on Pattern Analysis and Machine Intelligence , volume=

    Places: A 10 Million Image Database for Scene Recognition , author=. IEEE Transactions on Pattern Analysis and Machine Intelligence , volume=. 2018 , doi=

  28. [28]

    IEEE/CVF Conference on Computer Vision and Pattern Recognition , year=

    Deep Residual Learning for Image Recognition , author=. IEEE/CVF Conference on Computer Vision and Pattern Recognition , year=

  29. [29]

    Proceedings of the National Academy of Sciences , volume=

    Prevalence of Neural Collapse During the Terminal Phase of Deep Learning Training , author=. Proceedings of the National Academy of Sciences , volume=. 2020 , doi=

  30. [30]

    Impossibility of Simultaneous Near- and Far-

    Wang, Charles and Abrahao, Bruno , journal=. Impossibility of Simultaneous Near- and Far-. 2025 , note=

  31. [31]

    International Conference on Learning Representations , year=

    Ben Ammar, Mou. International Conference on Learning Representations , year=

  32. [32]

    International Conference on Learning Representations , year=

    Open-Set Recognition: A Good Closed-Set Classifier is All You Need , author=. International Conference on Learning Representations , year=

  33. [33]

    Van Horn, Grant and Mac Aodha, Oisin and Song, Yang and Cui, Yin and Sun, Chen and Shepard, Alex and Adam, Hartwig and Perona, Pietro and Belongie, Serge , booktitle=. The. 2018 , doi=

  34. [34]

    2025 , eprint=

    BootOOD: Self-Supervised Out-of-Distribution Detection via Synthetic Sample Exposure under Neural Collapse , author=. 2025 , eprint=

  35. [35]

    International Conference on Machine Learning , year=

    Manifold Mixup: Better Representations by Interpolating Hidden States , author=. International Conference on Machine Learning , year=

  36. [36]

    and Lopez-Paz, David , booktitle=

    Zhang, Hongyi and Cisse, Moustapha and Dauphin, Yann N. and Lopez-Paz, David , booktitle=. mixup:. 2018 , url=

  37. [37]

    International Journal of Computer Vision , year=

    Dissecting Out-of-Distribution Detection and Open-Set Recognition: A Critical Analysis of Methods and Benchmarks , author=. International Journal of Computer Vision , year=. 2408.16757 , archivePrefix=

  38. [38]

    In or Out? Fixing

    Bitterwolf, Julian and M. In or Out? Fixing. International Conference on Machine Learning , year=

  39. [39]

    NECO : NE ural collapse based out-of-distribution detection

    Mou \" n Ben Ammar, Nacim Belkhir, Sebastian Popescu, Antoine Manzanera, and Gianni Franchi. NECO : NE ural collapse based out-of-distribution detection. In International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=3klVMuoS0s

  40. [40]

    In or out? fixing I mage N et out-of-distribution detection evaluation

    Julian Bitterwolf, Maximilian M \"u ller, and Matthias Hein. In or out? fixing I mage N et out-of-distribution detection evaluation. In International Conference on Machine Learning, 2023

  41. [41]

    Extremely simple activation shaping for out-of-distribution detection

    Andrija Djurisic, Nebojsa Bozanic, Arjun Ashok, and Rosanne Liu. Extremely simple activation shaping for out-of-distribution detection. In International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=ndYXTEL6nR

  42. [42]

    VOS : Learning what you don't know by virtual outlier synthesis

    Xuefeng Du, Zhaoning Wang, Mu Cai, and Yixuan Li. VOS : Learning what you don't know by virtual outlier synthesis. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=TW7d65uYu5M

  43. [43]

    Deep Residual Learning for Image Recognition , isbn =

    Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2016. doi:10.1109/CVPR.2016.90

  44. [44]

    A baseline for detecting misclassified and out-of-distribution examples in neural networks

    Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations, 2017. URL https://openreview.net/forum?id=Hkg4TI9xl

  45. [45]

    Deep anomaly detection with outlier exposure

    Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep anomaly detection with outlier exposure. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HyxCxhRcY7

  46. [46]

    Learning multiple layers of features from tiny images

    Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009

  47. [47]

    A simple unified framework for detecting out-of-distribution samples and adversarial attacks

    Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Advances in Neural Information Processing Systems, volume 31, 2018

  48. [48]

    Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1VGkIxRZ

  49. [49]

    Energy-based out-of-distribution detection

    Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. In Advances in Neural Information Processing Systems, volume 33, 2020

  50. [50]

    How to exploit hyperspherical embeddings for out-of-distribution detection? In International Conference on Learning Representations, 2023

    Yifei Ming, Yiyou Sun, Ousmane Dia, and Yixuan Li. How to exploit hyperspherical embeddings for out-of-distribution detection? In International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=aEFaE0W5pAd

  51. [51]

    1 Nemhauser, G

    Vardan Papyan, X.Y. Han, and David L. Donoho. Prevalence of neural collapse during the terminal phase of deep learning training. Proceedings of the National Academy of Sciences, 117 0 (40): 0 24652--24663, 2020. doi:10.1073/pnas.2015509117

  52. [52]

    ReAct : Out-of-distribution detection with rectified activations

    Yiyou Sun, Chuan Guo, and Yixuan Li. ReAct : Out-of-distribution detection with rectified activations. In Advances in Neural Information Processing Systems, volume 34, 2021

  53. [53]

    Out-of-distribution detection with deep nearest neighbors

    Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. Out-of-distribution detection with deep nearest neighbors. In International Conference on Machine Learning, 2022. URL https://proceedings.mlr.press/v162/sun22d.html

  54. [54]

    Non-parametric outlier synthesis

    Leitian Tao, Xuefeng Du, Jerry Zhu, and Yixuan Li. Non-parametric outlier synthesis. In International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=JHklpEZqduQ

  55. [55]

    Manifold mixup: Better representations by interpolating hidden states

    Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, David Lopez-Paz, and Yoshua Bengio. Manifold mixup: Better representations by interpolating hidden states. In International Conference on Machine Learning, 2019. URL https://proceedings.mlr.press/v97/verma19a.html

  56. [56]

    Impossibility of simultaneous near- and far- OOD detection under neural collapse

    Charles Wang and Bruno Abrahao. Impossibility of simultaneous near- and far- OOD detection under neural collapse. arXiv preprint, 2025. In preparation

  57. [57]

    Dissecting out-of-distribution detection and open-set recognition: A critical analysis of methods and benchmarks

    Hongjun Wang, Sagar Vaze, and Kai Han. Dissecting out-of-distribution detection and open-set recognition: A critical analysis of methods and benchmarks. International Journal of Computer Vision, 2024

  58. [58]

    Bootood: Self-supervised out-of-distribution detection via synthetic sample exposure under neural collapse, 2025

    Yuanchao Wang, Tian Qin, Eduardo Valle, and Bruno Abrahao. Bootood: Self-supervised out-of-distribution detection via synthetic sample exposure under neural collapse, 2025. URL https://arxiv.org/abs/2511.13539

  59. [59]

    Mitigating neural network overconfidence with logit normalization

    Hongxin Wei, Renchunzi Xie, Hao Cheng, Lei Feng, Bo An, and Yixuan Li. Mitigating neural network overconfidence with logit normalization. In International Conference on Machine Learning, 2022. URL https://proceedings.mlr.press/v162/wei22d.html

  60. [60]

    Pursuing feature separation based on neural collapse for out-of-distribution detection

    Yingwen Wu, Ruiji Yu, Xinwen Cheng, Zhengbao He, and Xiaolin Huang. Pursuing feature separation based on neural collapse for out-of-distribution detection. In International Conference on Learning Representations, 2025. URL https://openreview.net/forum?id=sNEMqdReel

  61. [61]

    Scaling for training time and post-hoc out-of-distribution detection enhancement

    Kai Xu, Rongyu Chen, Gianni Franchi, and Angela Yao. Scaling for training time and post-hoc out-of-distribution detection enhancement. In International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=nanyAujl6e

  62. [62]

    Dauphin, and David Lopez-Paz

    Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz. mixup: B eyond empirical risk minimization. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=r1Ddp1-Rb

  63. [63]

    In: IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2023, Waikoloa, HI, USA, January 2-7, 2023

    Jingyang Zhang, Nathan Inkawhich, Randolph Linderman, Yiran Chen, and Hai Li. Mixture outlier exposure: Towards out-of-distribution detection in fine-grained environments. In IEEE/CVF Winter Conference on Applications of Computer Vision, 2023. doi:10.1109/WACV56688.2023.00054

  64. [64]

    OpenOOD v1.5: Enhanced benchmark for out-of-distribution detection

    Jingyang Zhang, Jingkang Yang, Pengyun Wang, Haoqi Wang, Yueqian Lin, Haoran Zhang, Yiyou Sun, Xuefeng Du, Kaiyang Zhou, Wayne Zhang, Yixuan Li, Ziwei Liu, Yiran Chen, and Hai Li. OpenOOD v1.5: Enhanced benchmark for out-of-distribution detection. Journal of Data-centric Machine Learning Research, 2, 2024. URL https://data.mlr.press/assets/pdf/v02-3.pdf