pith. machine review for the scientific record.

arxiv: 2604.09232 · v2 · submitted 2026-04-10 · 💻 cs.CV · cs.AI

Recognition: no theorem link

Neural Distribution Prior for LiDAR Out-of-Distribution Detection

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 18:10 UTC · model grok-4.3

classification 💻 cs.CV cs.AI
keywords LiDAR · out-of-distribution detection · neural distribution prior · Perlin noise · autonomous driving · semantic segmentation · open-world perception · point cloud

The pith

Neural Distribution Prior models prediction patterns to adaptively reweight OOD scores in LiDAR scans.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

LiDAR perception systems for autonomous driving often fail on unexpected objects because existing OOD scoring methods overlook class imbalance and assume uniform distributions. The paper establishes that learning a distribution prior from training logit patterns allows correction of class-dependent confidence bias through an attention mechanism. It pairs this with Perlin noise synthesis to create auxiliary OOD training samples directly from input scans, avoiding the need for external data. A sympathetic reader would care because this targets a practical gap in open-world operation where unrecognized obstacles pose safety risks. The approach is presented as compatible with multiple existing scoring functions and shows large gains on standard benchmarks.

Core claim

The Neural Distribution Prior framework dynamically captures the logit distribution patterns of training data and corrects class-dependent confidence bias through an attention-based module that reweights OOD scores according to alignment with the learned prior. This is enabled by a Perlin noise-based OOD synthesis strategy that generates diverse auxiliary samples from input LiDAR scans, supporting robust training without external datasets. The result is substantially improved OOD detection on SemanticKITTI and STU benchmarks.

What carries the argument

Neural Distribution Prior, which learns logit distribution patterns from training data and uses attention to reweight OOD scores based on alignment with that prior.
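
The exact equations are not reproduced on this page, so the sketch below is a hypothetical rendering of the pattern the claim describes: a learnable prior matrix ψ holds one template per in-distribution class (the supplementary ablates a template size d, reporting d = 16 as best), attention between per-point logits and ψ yields an alignment score, and that alignment modulates any base OOD score such as negative max-logit. Every name and the specific modulation rule here are assumptions, not the authors' code.

    # Hypothetical sketch of prior-aligned OOD score reweighting; names
    # (NDPReweighter, psi) and the modulation rule are illustrative, not the paper's.
    import torch
    import torch.nn.functional as F

    class NDPReweighter(torch.nn.Module):
        def __init__(self, num_classes: int, d: int = 16):
            super().__init__()
            # Learnable prior: one d-dimensional template per in-distribution class.
            self.psi = torch.nn.Parameter(torch.randn(num_classes, d))
            self.proj = torch.nn.Linear(num_classes, d)  # embeds logits as queries

        def forward(self, logits: torch.Tensor, base_score: torch.Tensor) -> torch.Tensor:
            # logits: (N, C) per-point class logits; base_score: (N,), higher = more OOD.
            q = self.proj(logits)                                           # (N, d)
            attn = F.softmax(q @ self.psi.T / q.shape[-1] ** 0.5, dim=-1)   # (N, C)
            # Alignment of each point's prediction with the learned prior: points
            # whose logit pattern matches a training-time class template get their
            # OOD score damped; mismatched points keep (or gain) score.
            alignment = (attn * F.softmax(logits, dim=-1)).sum(dim=-1)      # (N,)
            return base_score * (1.0 - alignment)

    # Usage with a generic max-logit base score over, e.g., 20 SemanticKITTI classes:
    logits = torch.randn(1000, 20)
    base = -logits.max(dim=-1).values
    scores = NDPReweighter(num_classes=20)(logits, base)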

If this is right

  • NDP integrates directly with existing OOD scoring formulations to boost their performance.
  • It enables OOD training using only the original input scans via the synthesis step.
  • Performance reaches a point-level AP of 61.31% on the STU test set.
  • The method supports open-world LiDAR perception on SemanticKITTI and STU benchmarks.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The prior could be tested on camera or radar data to check whether distribution modeling generalizes across sensors.
  • If the synthesis step holds, fleets could collect and label fewer explicit OOD examples during development.
  • Alternative synthesis functions might be swapped in if Perlin noise fails on certain scene types.

Load-bearing premise

Perlin noise-based synthesis from input scans produces sufficiently diverse and realistic auxiliary OOD samples to enable robust training without external datasets or introducing misleading artifacts.
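
The synthesis is described here only at the level of Figure 9 ("Perlin Raise" insertions varying in geometry and scale), so the following is a minimal sketch of one plausible instantiation, not the paper's implementation: sample a 2D Perlin field over the ground plane, threshold it into blob-like regions, and lift the points inside. The threshold, cell size, and lifting rule are illustrative assumptions.

    # Illustrative Perlin-noise OOD synthesis on a LiDAR scan; all parameter
    # values are assumptions, not the paper's settings. Requires `pip install noise`.
    import numpy as np
    from noise import pnoise2

    def perlin_raise(points, cell=2.0, octaves=4, persistence=0.5,
                     lacunarity=2.0, threshold=0.35, max_lift=1.5, seed=0):
        """points: (N, 3) xyz array. Returns (augmented points, boolean OOD mask)."""
        rng = np.random.default_rng(seed)
        ox, oy = rng.uniform(0.0, 100.0, size=2)  # random offset = a fresh noise field
        # Sample a smooth 2D Perlin field over the ground plane.
        field = np.array([
            pnoise2(x / cell + ox, y / cell + oy, octaves=octaves,
                    persistence=persistence, lacunarity=lacunarity)
            for x, y in points[:, :2]
        ])
        mask = field > threshold  # sparse, blob-like regions
        out = points.copy()
        # Lift masked points in proportion to the noise value, producing irregular
        # raised structures that act as pseudo-OOD objects within the scan itself.
        out[mask, 2] += max_lift * (field[mask] - threshold) / (1.0 - threshold)
        return out, mask

    scan = np.random.uniform(-20.0, 20.0, size=(5000, 3)); scan[:, 2] = 0.0
    augmented, ood_mask = perlin_raise(scan)

Because Perlin noise is smooth and band-limited, the raised regions come out spatially coherent rather than salt-and-pepper, which is plausibly what separates it from the random-perturbation control proposed next.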

What would settle it

Running the full NDP pipeline on the STU test set after replacing the Perlin noise synthesis with random perturbations or no synthesis at all, and observing whether the point-level AP remains near 61% or falls back to prior-method levels.
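
Point-level AP is standard average precision over per-point OOD scores against binary OOD labels, so the proposed experiment reduces to retraining under each synthesis variant and rescoring the same test points. A skeletal harness, with random arrays standing in for the pipeline's actual scores and labels:

    # Minimal evaluation harness for the proposed ablation; the variant names and
    # placeholder data are hypothetical, only the metrics are standard.
    import numpy as np
    from sklearn.metrics import average_precision_score, roc_auc_score

    def point_level_metrics(scores, is_ood):
        """scores: (N,) higher = more OOD; is_ood: (N,) binary point labels."""
        return {"AP": average_precision_score(is_ood, scores),
                "AUROC": roc_auc_score(is_ood, scores)}

    for variant in ("perlin", "random_perturbation", "no_synthesis"):
        # Stand-ins: in the real ablation these come from a model retrained
        # with the given synthesis variant and scored on the STU test set.
        scores = np.random.rand(10_000)
        labels = np.random.rand(10_000) > 0.99
        print(variant, point_level_metrics(scores, labels))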

Figures

Figures reproduced from arXiv: 2604.09232 by Feng Liu, Jiayang Ao, Joseph West, Kourosh Khoshelham, Zhengkang Xiang, Zizhao Li.

Figure 1: OOD objects are hazardous for LiDAR perception mod…
Figure 2: Overview of the proposed Neural Distribution Prior (NDP) framework. Given an input point cloud, synthetic OOD samples…
Figure 3: Visualization of OOD score map on the STU benchmark. Points are labeled as…
Figure 4: Additional visualization of OOD score map on the STU benchmark with image reference. Points are labeled as…
Figure 5: Additional visualization of OOD score map on the STU benchmark. Points are labeled as…
Figure 6: Visualization of OOD score map on SemanticKITTI. Points are labeled as…
Figure 7: Class distribution in the SemanticKITTI dataset. In the dataset, vegetation, road, and sidewalk account for most points. Vegetation…
Figure 8: Class distribution in the STU dataset. Using sequence 201 with full annotation as an example, the scene contains over…
Figure 9: Range-view visualization of Perlin Raise–generated…
Original abstract

LiDAR-based perception is critical for autonomous driving due to its robustness to poor lighting and visibility conditions. Yet, current models operate under the closed-set assumption and often fail to recognize unexpected out-of-distribution (OOD) objects in the open world. Existing OOD scoring functions exhibit limited performance because they ignore the pronounced class imbalance inherent in LiDAR OOD detection and assume a uniform class distribution. To address this limitation, we propose the Neural Distribution Prior (NDP), a framework that models the distributional structure of network predictions and adaptively reweights OOD scores based on alignment with a learned distribution prior. NDP dynamically captures the logit distribution patterns of training data and corrects class-dependent confidence bias through an attention-based module. We further introduce a Perlin noise-based OOD synthesis strategy that generates diverse auxiliary OOD samples from input scans, enabling robust OOD training without external datasets. Extensive experiments on the SemanticKITTI and STU benchmarks demonstrate that NDP substantially improves OOD detection performance, achieving a point-level AP of 61.31% on the STU test set, which is more than 10× higher than the previous best result. Our framework is compatible with various existing OOD scoring formulations, providing an effective solution for open-world LiDAR perception.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 3 minor

Summary. The paper proposes Neural Distribution Prior (NDP), a framework for LiDAR out-of-distribution (OOD) detection that models the distributional structure of network logits to correct class-imbalance bias via an attention-based reweighting module. It introduces a Perlin noise-based synthesis strategy to generate auxiliary OOD samples directly from input scans, enabling training without external datasets. Experiments on SemanticKITTI and STU benchmarks report substantial gains, including a point-level AP of 61.31% on the STU test set claimed to exceed the prior best by more than 10×, with compatibility to existing OOD scoring methods.

Significance. If the results hold under rigorous validation, NDP could meaningfully advance open-world LiDAR perception by addressing pronounced class imbalance in OOD scoring without relying on external data. The attention-driven prior modeling and internal synthesis approach offer a practical, compatible enhancement to existing pipelines, with potential safety benefits for autonomous driving. The work includes falsifiable performance predictions on standard benchmarks but does not mention machine-checked proofs or fully reproducible code releases.

major comments (3)
  1. [§4] Perlin noise OOD synthesis: The >10× AP gain on STU rests on the claim that Perlin noise produces sufficiently diverse and realistic auxiliary OOD samples (differing in point density, intensity statistics, and spatial distribution from real open-world objects). No quantitative validation—such as distribution comparisons, ablation on noise parameters, or side-by-side statistics with actual OOD instances—is provided; if the samples primarily introduce synthetic artifacts, the learned prior risks capturing those rather than genuine distributional structure.
  2. [Experiments] Results on the STU benchmark: The reported 61.31% point-level AP lacks error bars, multi-seed statistics, explicit baseline scores with implementation details, or exclusion criteria for the 'previous best result.' This undermines verification of the improvement magnitude and of whether it stems from the method or from evaluation choices, especially given the strong central claim.
  3. [§3.3] Attention-based reweighting: The distribution prior is learned from training logits and used to adaptively correct class-dependent bias, yet the formulation does not demonstrate independence from the free parameters of the attention module; without an ablation isolating the prior's contribution from the module's capacity, the performance attribution remains unclear.
minor comments (3)
  1. [Abstract] The 'more than 10× higher than the previous best result' statement does not cite the specific prior value or reference, reducing immediate clarity for readers.
  2. [§3] Notation and equations: Several equations for logit distribution modeling and reweighting lack consistent numbering or explicit definitions for all symbols on first use, complicating traceability.
  3. [Figures] Visualizations of synthesized samples would be strengthened by including quantitative metrics (e.g., histograms) alongside qualitative examples to support the diversity claim.
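
The quantitative comparison requested in minor comment 3 (and promised in the rebuttal below) has a simple shape: histogram the same per-point statistic (density, intensity, occupancy) for synthesized and real OOD points over shared bins and summarize with a divergence. The statistic choice, binning, and sample inputs here are all assumptions:

    # Sketch of a histogram/KL comparison between synthesized and real OOD point
    # statistics; the beta-distributed inputs are placeholders for quantities the
    # revised paper would extract from the scans.
    import numpy as np
    from scipy.stats import entropy

    def kl_between_samples(a, b, bins=50):
        """Histogram-based KL(P_a || P_b) over a shared binning of both samples."""
        lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
        pa, edges = np.histogram(a, bins=bins, range=(lo, hi), density=True)
        pb, _ = np.histogram(b, bins=edges, density=True)
        eps = 1e-12  # avoid log(0) in empty bins
        return float(entropy(pa + eps, pb + eps))

    synthetic_intensity = np.random.beta(2.0, 5.0, 20_000)  # stand-in samples
    real_ood_intensity = np.random.beta(2.2, 5.1, 8_000)
    print(kl_between_samples(synthetic_intensity, real_ood_intensity))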

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. We address each major comment below with clarifications and commit to specific revisions that strengthen the empirical support and reproducibility of the claims.

Point-by-point responses
  1. Referee: Perlin noise OOD synthesis: The >10× AP gain on STU rests on the claim that Perlin noise produces sufficiently diverse and realistic auxiliary OOD samples (differing in point density, intensity statistics, and spatial distribution from real open-world objects). No quantitative validation—such as distribution comparisons, ablation on noise parameters, or side-by-side statistics with actual OOD instances—is provided; if the samples primarily introduce synthetic artifacts, the learned prior risks capturing those rather than genuine distributional structure.

    Authors: We agree that direct quantitative validation of the Perlin-generated samples would strengthen the central claim. In the revised manuscript we will add: (i) side-by-side statistical comparisons (histograms and KL divergence) of point density, intensity, and spatial occupancy between Perlin-augmented scans and the real OOD objects present in the STU validation set; (ii) an ablation table varying Perlin parameters (octaves, persistence, lacunarity) and reporting the resulting point-level AP on STU; and (iii) qualitative visualizations with overlaid statistics. While the >10× performance lift already provides indirect evidence that the synthesized samples capture useful distributional structure rather than mere artifacts, we accept that explicit metrics are necessary and will include them. revision: yes

  2. Referee: Experiments section (results on STU benchmark): The reported 61.31% point-level AP lacks error bars, multi-seed statistics, explicit baseline scores with implementation details, or exclusion criteria for the 'previous best result.' This undermines verification of the improvement magnitude and whether it stems from the method or evaluation choices, especially given the strong central claim.

    Authors: We acknowledge the need for greater statistical rigor. The revised version will report mean and standard deviation of point-level AP over three independent random seeds for all STU results. We will also expand the baseline table to list the exact prior method (including its OOD scoring function and training protocol), its published score, and our re-implementation score under identical evaluation settings. Exclusion criteria will be stated explicitly: only methods that operate on the same point-level AP metric and are compatible with the SemanticKITTI/STU label spaces are compared; methods requiring external OOD data or different sensor modalities are noted as out of scope but not used in the 10× claim. These additions will make the magnitude of improvement directly verifiable. revision: yes

  3. Referee: §3.3 (attention-based reweighting): The distribution prior is learned from training logits and used to adaptively correct class-dependent bias, yet the formulation does not demonstrate independence from the free parameters of the attention module; without an ablation isolating the prior's contribution from the module's capacity, the performance attribution remains unclear.

    Authors: We will add a controlled ablation study in the revision that isolates the two components: (1) NDP with the full attention-based reweighting, (2) the distribution prior applied via a fixed (non-learned) reweighting vector derived from class frequencies, and (3) the attention module alone without the learned logit-distribution prior. Performance differences on both SemanticKITTI and STU will be reported. In addition, we will visualize the learned attention weights across classes and show their correlation with the empirical class-imbalance statistics of the training logits, thereby demonstrating that the prior supplies the class-dependent signal that the attention module then exploits. revision: yes
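
Rebuttal variant (2), a fixed non-learned reweighting vector derived from class frequencies, admits a compact sketch. A frequency-proportional weighting of each point's predicted class is one natural instantiation, assumed here rather than taken from the paper; rare predicted classes are systematically under-confident under class imbalance, so their base OOD scores are damped:

    # Sketch of the fixed (non-learned) class-frequency reweighting baseline from
    # rebuttal variant (2); the square-root frequency form is an assumption.
    import numpy as np

    def fixed_prior_reweight(logits, base_score, class_freq):
        """logits: (N, C); base_score: (N,); class_freq: (C,) training frequencies."""
        pred = logits.argmax(axis=-1)
        # Damp scores for points predicted as rare classes, which class imbalance
        # would otherwise push toward spuriously high OOD scores.
        weight = (class_freq / class_freq.max())[pred] ** 0.5
        return base_score * weight

    logits = np.random.randn(1000, 20)
    base = -logits.max(axis=-1)
    freq = np.random.dirichlet(np.ones(20))  # stand-in training class histogram
    rescored = fixed_prior_reweight(logits, base, freq)

Comparing this fixed baseline against the full attention module isolates how much of the gain comes from the prior signal itself versus the module's added capacity.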

Circularity Check

0 steps flagged

No significant circularity; derivation chain is additive and self-contained

full rationale

The paper introduces NDP as a modeling of logit distributions with attention-based reweighting plus Perlin noise synthesis for auxiliary OOD samples. No equations, predictions, or self-citations are shown that reduce the reported AP gains (e.g., 61.31% on STU) to quantities defined by construction from the inputs or prior fits. The framework is presented as compatible with existing OOD scorers and evaluated empirically on benchmarks, with no load-bearing steps that collapse to self-definition, fitted-input renaming, or author-unique ansatzes. This is the common case of an independent architectural contribution.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

Abstract-only view yields minimal ledger entries; the core additions are the learned distribution prior and noise synthesis, whose internal parameters are unspecified.

free parameters (1)
  • attention module parameters
    The attention-based module for reweighting likely contains learnable parameters fitted to training data, though their exact count and values are not stated.
axioms (1)
  • domain assumption: Pronounced class imbalance is inherent in LiDAR OOD detection, and current scoring functions assume a uniform class distribution.
    Directly stated as the limitation being addressed.

pith-pipeline@v0.9.0 · 5539 in / 1305 out tokens · 56610 ms · 2026-05-10T18:10:33.788648+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

78 extracted references · 4 canonical work pages · 1 internal anchor

  1. [1] Jens Behley, Martin Garbade, Andres Milioto, Jan Quenzel, Sven Behnke, Cyrill Stachniss, and Juergen Gall. SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences. In International Conference on Computer Vision (ICCV), 2019.

  2. [2] Mario Bijelic, Tobias Gruber, Fahim Mannan, Florian Kraus, Werner Ritter, Klaus Dietmayer, and Felix Heide. Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

  3. [3] Hermann Blum, Paul-Edouard Sarlin, Juan I. Nieto, Roland Y. Siegwart, and César Cadena. Fishyscapes: A benchmark for safe semantic segmentation in autonomous driving. International Conference on Computer Vision Workshop (ICCV'W), 2019.

  4. [4] Hermann Blum, Paul-Edouard Sarlin, Juan Nieto, Roland Siegwart, and Cesar Cadena. The Fishyscapes Benchmark: Measuring Blind Spots in Semantic Segmentation. International Journal on Computer Vision (IJCV), 2021.

  5. [5] Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuScenes: A multimodal dataset for autonomous driving. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

  6. [6] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. In Advances in Neural Information Processing Systems, 2019.

  7. [7] Jun Cen, Peng Yun, Shiwei Zhang, Junhao Cai, Di Luan, Mingqian Tang, Ming Liu, and Michael Yu Wang. Open-world semantic segmentation for lidar point clouds. In Computer Vision – ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXVIII, pages 318–334. Springer-Verlag, Berlin, Heidelberg, 2022.

  8. [8] Robin Chan, Krzysztof Lis, Svenja Uhlemeyer, Hermann Blum, Sina Honari, Roland Siegwart, Pascal Fua, Mathieu Salzmann, and Matthias Rottmann. SegmentMeIfYouCan: A Benchmark for Anomaly Segmentation. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, 2021.

  9. [9] Robin Chan, Matthias Rottmann, and Hanno Gottschalk. Entropy maximization and meta classification for out-of-distribution detection in semantic segmentation. In International Conference on Computer Vision (ICCV), 2021.

  10. [10] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An Information-Rich 3D Model Repository. Technical Report arXiv:1512.03012 [cs.GR], Stanford University — Princeton University — Toyota Technological Institute at …

  11. [11] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In European Conference on Computer Vision (ECCV), 2018.

  12. [12] Qi Chen and Hu Ding. Dual energy-based model with open-world uncertainty estimation for out-of-distribution detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 25728–25737, 2025.

  13. [13] Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention Mask Transformer for Universal Image Segmentation. In Conference on Computer Vision and Pattern Recognition (CVPR), …

  14. [14] Bowen Cheng, Alexander G. Schwing, and Alexander Kirillov. Per-pixel classification is not all you need for semantic segmentation. In Neural Information Processing Systems (NeurIPS), 2021.

  15. [15] Yifeng Cheng and Juan Du. 3D-PNAS: 3D industrial surface anomaly synthesis with Perlin noise, 2025.

  16. [16] Hyunjun Choi, Hawook Jeong, and Jin Young Choi. Balanced energy regularization loss for out-of-distribution detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1–9, 2023.

  17. [17] Christopher Choy, JunYoung Gwak, and Silvio Savarese. 4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

  18. [18] Anja Delić, Matej Grcić, and Siniša Šegvić. Outlier detection by ensembling uncertainty with negative objectness. In British Machine Vision Conference (BMVC), 2024.

  19. [19] Xuefeng Du, Zhaoning Wang, Mu Cai, and Yixuan Li. VOS: Learning what you don't know by virtual outlier synthesis. In International Conference on Learning Representations (ICLR), 2021.

  20. [20] Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, pages 226–231. AAAI Press, 1996.

  21. [21] Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, and Feng Liu. Is out-of-distribution detection learnable? In Proceedings of the 36th International Conference on Neural Information Processing Systems, Red Hook, NY, USA, 2022. Curran Associates Inc.

  22. [22] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The KITTI dataset. International Journal of Robotics Research (IJRR), 2013.

  23. [23] Matej Grcić, Petra Bevandić, and Siniša Šegvić. DenseHybrid: Hybrid anomaly detection for dense open-set recognition. In European Conference on Computer Vision (ECCV), …

  24. [24] Dan Hendrycks and Kevin Gimpel. A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks. In International Conference on Learning Representations (ICLR), 2018.

  25. [25] Dan Hendrycks, Mantas Mazeika, and Thomas Dietterich. Deep Anomaly Detection with Outlier Exposure. In International Conference on Learning Representations (ICLR), …

  26. [26] Dan Hendrycks, Steven Basart, Mantas Mazeika, Andy Zou, Joe Kwon, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Song. Scaling Out-of-Distribution Detection for Real-World Settings. In International Conference on Machine Learning (ICML), 2022.

  27. [27] Chengjie Huang, Van Duong Nguyen, Vahdat Abdelzad, Christopher Gus Mannes, Luke Rowe, Benjamin Therien, Rick Salay, and K. Czarnecki. Out-of-distribution detection for lidar-based 3d object detection. IEEE Intelligent Transportation Systems Conference (ITSC), 2022.

  28. [28] Rui Huang, Andrew Geng, and Yixuan Li. On the importance of gradients for detecting distributional shifts in the wild. In Advances in Neural Information Processing Systems, 2021.

  29. [29] Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, and Bo Han. Detecting out-of-distribution data through in-distribution class prior. In Proceedings of the 40th International Conference on Machine Learning, pages 15067–15088. PMLR, 2023.

  30. [30] Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, and Bo Han. Negative label guided OOD detection with pretrained vision-language models. In The Twelfth International Conference on Learning Representations, 2024.

  31. [31] Alexander Kirillov, Kaiming He, Ross Girshick, Carsten Rother, and Piotr Dollár. Panoptic Segmentation. In Conference on Computer Vision and Pattern Recognition (CVPR), …

  32. [32] Michael Kösel, Marcel Schreiber, Michael Ulrich, Claudius Gläser, and Klaus Dietmayer. Revisiting Out-of-Distribution Detection in LiDAR-based 3D Object Detection. In Intelligent Vehicles Symposium (IV), 2024.

  33. [33] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles. In Neural Information Processing Systems (NeurIPS), 2017.

  34. [34] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Neural Information Processing Systems (NeurIPS), 2018.

  35. [35] Jianan Li and Qiulei Dong. Open-set semantic segmentation for point clouds via adversarial prototype framework. In 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9425–9434, 2023.

  36. [36] Kecen Li, Bingquan Dai, Jingjing Fu, and Xinwen Hou. DAS3D: Dual-modality anomaly synthesis for 3D anomaly detection, 2025.

  37. [37] Zizhao Li, Xueyang Kang, Joseph West, and Kourosh Khoshelham. Out-of-distribution detection in 3D applications: a review. arXiv preprint arXiv:2507.00570, 2025.

  38. [38] Zizhao Li, Zhengkang Xiang, Jiayang Ao, Joseph West, and Kourosh Khoshelham. Relative energy learning for lidar out-of-distribution detection. arXiv preprint arXiv:2511.06720, …

  39. [39] Zizhao Li, Zhengkang Xiang, Joseph West, and Kourosh Khoshelham. From open vocabulary to open world: Teaching vision language models to detect novel objects. In 36th British Machine Vision Conference 2025, BMVC 2025, Sheffield, UK, November 24–27, 2025. BMVA, 2025.

  40. [40] Chen Liang, Wenguan Wang, Jiaxu Miao, and Yi Yang. GMMSeg: Gaussian Mixture based Generative Semantic Segmentation Models. In Neural Information Processing Systems (NeurIPS), 2022.

  41. [41] Shiyu Liang, Yixuan Li, and R. Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. In International Conference on Learning Representations, 2018.

  42. [42] Kai Liu, Zhihang Fu, Sheng Jin, Chao Chen, Ze Chen, Rongxin Jiang, Fan Zhou, Yaowu Chen, and Jieping Ye. Rethinking out-of-distribution detection on imbalanced data distribution. Advances in Neural Information Processing Systems, 38, 2024.

  43. [43] Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. In Neural Information Processing Systems (NeurIPS), 2020.

  44. [44] Yuyuan Liu, Choubo Ding, Yu Tian, Guansong Pang, Vasileios Belagiannis, Ian D. Reid, and Gustavo Carneiro. Residual pattern learning for pixel-wise out-of-distribution detection in semantic segmentation. In ICCV, pages 1151–1161, 2023.

  45. [45] Ziwei Liu, Zhongqi Miao, Xiaohang Zhan, Jiayun Wang, Boqing Gong, and Stella X. Yu. Large-scale long-tailed recognition in an open world. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

  46. [46] Rodrigo Marcuzzi, Lucas Nunes, Louis Wiesmann, Jens Behley, and Cyrill Stachniss. Mask-Based Panoptic LiDAR Segmentation for Autonomous Driving. IEEE Robotics and Automation Letters (RAL), 2023.

  47. [47] Wenjun Miao, Guansong Pang, Xiao Bai, Tianqi Li, and Jin Zheng. Out-of-distribution detection in long-tailed recognition with calibrated outlier class learning. In Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence and Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence and Fourteenth Symposium on Educational Advances in Artificial Intelligence …

  48. [48] Nazir Nayal, Mısra Yavuz, João F. Henriques, and Fatma Güney. RbA: Segmenting Unknown Regions Rejected by All. In International Conference on Computer Vision (ICCV), 2023.

  49. [49] Nazir Nayal, Youssef Shoeb, and Fatma Güney. A likelihood ratio-based approach to segmenting unknown objects. International Journal of Computer Vision, 2025.

  50. [50] Alexey Nekrasov, Malcolm Burdorf, Stewart Worrall, Bastian Leibe, and Julie Stephany Berrio Perez. Spotting the Unexpected (STU): A 3D LiDAR Dataset for Anomaly Segmentation in Autonomous Driving. In Conference on Computer Vision and Pattern Recognition (CVPR), 2025.

  51. [51] Seulki Park, Youngkyu Hong, Byeongho Heo, Sangdoo Yun, and Jin Young Choi. The majority can help the minority: Context-rich minority oversampling for long-tailed classification. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6877–6886, 2022.

  52. [52] Ken Perlin. An image synthesizer. SIGGRAPH Comput. Graph., 19(3):287–296, 1985.

  53. [53] Shyam Nandan Rai, Fabio Cermelli, Dario Fontanel, Carlo Masone, and Barbara Caputo. Unmasking Anomalies in Road-Scene Segmentation. In International Conference on Computer Vision (ICCV), 2023.

  54. [54] Hitesh Sapkota and Qi Yu. Adaptive robust evidential optimization for open set detection from imbalanced data. In The Eleventh International Conference on Learning Representations, 2023.

  55. [55] Walter J. Scheirer, Anderson de Rezende Rocha, Archana Sapkota, and Terrance E. Boult. Toward open set recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(7):1757–1772, 2013.

  56. [56] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. In Neural Information Processing Systems (NeurIPS), 2014.

  57. [57] Chengyu Tao, Xuanming Cao, and Juan Du. G2SF: Geometry-guided score fusion for multimodal industrial anomaly detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 20551–20560, 2025.

  58. [58] Yu Tian, Yuyuan Liu, Guansong Pang, Fengbei Liu, Yuanhong Chen, and Gustavo Carneiro. Pixel-wise Energy-biased Abstention Learning for Anomaly Segmentation on Complex Urban Driving Scenes. In European Conference on Computer Vision (ECCV), 2022.

  59. [59] Tzu-Yun Tseng, Alexey Nekrasov, Malcolm Burdorf, Bastian Leibe, Julie Stephany Berrio Perez, Mao Shan, and Stewart Worrall. Panoptic-CUDAL Technical Report: Rural Australia Point Cloud Dataset in Rainy Conditions. arXiv preprint arXiv:2503.16378, 2025.

  60. [60] Haotao Wang, Aston Zhang, Yi Zhu, Shuai Zheng, Mu Li, Alex J Smola, and Zhangyang Wang. Partial and asymmetric contrastive learning for out-of-distribution detection in long-tailed recognition. In International Conference on Machine Learning, pages 23446–23458, 2022.

  61. [61] Qizhou Wang, Feng Liu, Yonggang Zhang, Jing Zhang, Chen Gong, Tongliang Liu, and Bo Han. Watermarking for out-of-distribution detection. In Advances in Neural Information Processing Systems, pages 15545–15557. Curran Associates, Inc., 2022.

  62. [62] Qizhou Wang, Zhen Fang, Yonggang Zhang, Feng Liu, Yixuan Li, and Bo Han. Learning to augment distributions for out-of-distribution detection. In Advances in Neural Information Processing Systems, pages 73274–73286. Curran Associates, Inc., 2023.

  63. [63] Qizhou Wang, Junjie Ye, Feng Liu, Quanyu Dai, Marcus Kalander, Tongliang Liu, Jianye Hao, and Bo Han. Out-of-distribution detection with implicit outlier transformation. In International Conference on Learning Representations, …

  64. [64] Tong Wei, Bo-Lin Wang, and Min-Ling Zhang. EAT: towards long-tailed out-of-distribution detection. In Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence and Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence and Fourteenth Symposium on Educational Advances in Artificial Intelligence. AAAI Press, 2024.

  65. [65] Kelvin Wong, Shenlong Wang, Mengye Ren, Ming Liang, and Raquel Urtasun. Identifying Unknown Instances for Autonomous Driving. In Conference on Robot Learning (CoRL), 2019.

  66. [66] Shaocong Xu, Pengfei Li, Qianpu Sun, Xinyu Liu, Yang Li, Shihui Guo, Zhen Wang, Bo Jiang, Rui Wang, Kehua Sheng, Bo Zhang, Li Jiang, Hao Zhao, and Yilun Chen. Lion: learning point-wise abstaining penalty for lidar outlier detection using diverse synthetic data. In Proceedings of the AAAI Conference on Artificial Intelligence. AAAI Press, 2025.

  67. [67] Jingkang Yang, Kaiyang Zhou, Yixuan Li, and Ziwei Liu. Generalized out-of-distribution detection: A survey. International Journal of Computer Vision, pages 1–28, 2024.

  68. [68] Kadir Yilmaz, Jonas Schult, Alexey Nekrasov, and Bastian Leibe. Mask4Former: Mask Transformer for 4D Panoptic Segmentation. In International Conference on Robotics and Automation (ICRA), 2024.

  69. [69] Vitjan Zavrtanik, Matej Kristan, and Danijel Skočaj. Keep DRÆMing: Discriminative 3D anomaly detection through anomaly simulation. Pattern Recognition Letters, 181:113–119, 2024.

  70. [70] Xuan Zhang, Sinchee Chin, Jing-Hao Xue, Xiaochen Yang, and Wenming Yang. DARL: Mitigating gradient conflicts in long-tailed out-of-distribution learning. In Proceedings of the 33rd ACM International Conference on Multimedia, pages 6868–6877. Association for Computing Machinery, New York, NY, USA, 2025.

  71. [71] Haotian Zheng, Qizhou Wang, Zhen Fang, Xiaobo Xia, Feng Liu, Tongliang Liu, and Bo Han. Out-of-distribution detection learning with unreliable out-of-distribution sources. In Advances in Neural Information Processing Systems, pages 72110–72123. Curran Associates, Inc., 2023.

  72. [72] Hui Zhou, Xinge Zhu, Xiao Song, Yuexin Ma, Zhe Wang, Hongsheng Li, and Dahua Lin. Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

  73. [73] Internal anchor (supplementary, Implementation Details): "We initialize the model using a Mask4Former checkpoint pretrained on SemanticKITTI [1] and Panoptic-CUDAL [59]. The model is then fine-tuned for up to 10 epochs on the downstream datasets, with Perlin noise–synthesized OOD samples included during training. Optimization uses AdamW with a learning rate of 2×10⁻⁴ and a batch size o…"

  74. [74] Internal anchor (supplementary, Explanation of Evaluation Metrics): "Point-level evaluation metrics for LiDAR OOD detection include AUROC, FPR@95, and Average Precision (AP). These metrics are widely used in OOD detection and anomaly segmentation [3, 8, 50, 67]. AUROC assesses how well the OOD score separates OOD points from ID points across all possible thre…"

  75. [75] Internal anchor (supplementary, Additional Visualization): "Fig. 4 and Fig. 5 illustrate the qualitative performance of our method. Across diverse environments, including narrow urban alleys and unstructured rural roads, the model consistently identifies a broad range of OOD objects such as armchairs, fallen branches, packages, and yoga mats. Our method also substantially reduces the fal…"

  76. [76] Internal anchor (supplementary, Additional Results): "Tab. 8 presents an ablation study on the template size d of the NDP matrix ψ, where d determines the dimensionality of the vectors stored in ψ as the learnable prior. A moderate NDP size yields the best performance: d = 16 achieves the highest AP (74.24%) and a strong AUROC (99.53%). Overall, NDP is not highly sensitive to this hyperparameter…"

  77. [77] Internal anchor (supplementary, Dataset Statistics): "For OOD detection, class imbalance is especially severe in LiDAR data and makes anomaly discrimination more difficult. This motivates the use of adaptive mechanisms such as distribution-aware priors or dynamic reweighting. As shown in Fig. 7, SemanticKITTI [1] exhibits an extremely long-tailed distribution. Vegetation, road, and sidewalk…"

  78. [78] Internal anchor (supplementary, Visualization of OOD Samples Generated by Perlin Noise): "The Perlin Raise augmentation produces synthetic OOD regions highlighted in blue, which exhibit substantial variation in geometry and scale. As shown in Fig. 9, these OOD insertions span small localized perturbations to larger, irregular structures that integrate coherently with the surrounding…"