pith. machine review for the scientific record.

arxiv: 2605.08952 · v1 · submitted 2026-05-09 · 💻 cs.CV

Recognition: 2 Lean theorem links

FugSeg: Fast Uncertainty-aware Ground Segmentation for 3D Point Cloud

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 01:55 UTC · model grok-4.3

classification 💻 cs.CV
keywords ground segmentation · LiDAR point cloud · polar grid · uncertainty-aware · real-time processing · non-learning method · environment perception · adaptive slope

The pith

FugSeg segments ground from LiDAR point clouds more accurately and faster than prior non-learning methods by labeling on a polar grid while modeling measurement uncertainties.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops FugSeg to improve ground segmentation as a preprocessing step for LiDAR-based mapping and navigation. It represents point clouds in a polar grid and applies within- and cross-segment labeling that identifies visible, isolated, and occluded ground cells. An adaptive slope incorporates sensor uncertainties to handle complex terrain and reflection noise, followed by fine-grained elevation estimation to reach point-level results. This non-learning pipeline achieves top F1, accuracy, and mIoU scores on four public datasets while running at 135 Hz (64-layer LiDAR) and 487 Hz (32-layer LiDAR) on a single CPU thread.
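The polar-grid step described above can be sketched as follows. The segment count and radial cell size stand in for the paper's angular resolution Δα and radial binning; they are illustrative values, not the tuned parameters.

```python
import numpy as np

def polar_grid_indices(points, n_segments=360, cell_size=1.0):
    """Map Cartesian LiDAR points (N, 3) to polar grid cells.

    Returns (segment, ring) index pairs: the segment is an angular
    sector around the sensor, the ring is a radial bin within it.
    n_segments and cell_size are illustrative stand-ins for the
    paper's Δα and radial resolution.
    """
    x, y = points[:, 0], points[:, 1]
    azimuth = np.arctan2(y, x)          # angle in (-pi, pi]
    radius = np.hypot(x, y)             # horizontal range from sensor
    seg = ((azimuth + np.pi) / (2 * np.pi) * n_segments).astype(int) % n_segments
    ring = (radius / cell_size).astype(int)
    return seg, ring

# Three sample points at different bearings and ranges.
pts = np.array([[3.0, 4.0, -1.5], [-1.0, 2.0, -1.4], [0.5, -0.2, 0.0]])
seg, ring = polar_grid_indices(pts)
```

Binning by azimuth and range rather than into a Cartesian grid is what makes the representation independent of the sensor's beam count, which is the generalizability argument the paper makes for this choice.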

Core claim

FugSeg adopts a polar grid map for point cloud representation to ensure generalizability across LiDAR types; develops a within- and cross-segment ground labeling strategy that identifies directly visible ground cells as well as isolated or occluded ones; introduces an adaptive slope that incorporates measurement uncertainties for reliability under complex terrain; and adds a fine-grained ground elevation estimation method for point-level segmentation, while explicitly handling reflection noise via noisy ground cells.

What carries the argument

Polar grid representation together with within- and cross-segment ground labeling and an adaptive slope that folds in measurement uncertainties to label ground cells and filter noise.
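How measurement uncertainty might fold into a slope threshold can be illustrated with a generic first-order error-propagation sketch (in the spirit of Bevington and Robinson [30]). The base threshold, the noise sigmas, and the propagation rule itself are assumptions for illustration, not the paper's calibrated formula.

```python
import math

def adaptive_slope_threshold(r1, r2, base_thresh_deg=10.0, sigma_z=0.03):
    """Illustrative uncertainty-widened slope threshold between two
    ground cells at ranges r1 < r2 (metres).

    Slope between cells is dz/dr. Keeping only the elevation-noise
    term of first-order error propagation gives
    sigma_slope ≈ sqrt(2) * sigma_z / dr, so short baselines (small dr)
    yield noisier slope estimates and earn a larger tolerance margin.
    All numeric values here are assumed, not the paper's parameters.
    """
    dr = max(r2 - r1, 1e-6)
    sigma_slope = math.sqrt(2.0) * sigma_z / dr
    margin_deg = math.degrees(math.atan(sigma_slope))
    return base_thresh_deg + margin_deg
```

Under this sketch, two cells half a metre apart get a visibly wider threshold than two cells ten metres apart, mirroring the motivation that a fixed slope threshold over-rejects ground where the geometric baseline is short relative to the sensor noise.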

If this is right

  • Real-time ground segmentation becomes feasible on single-CPU resource-limited platforms for mapping and navigation.
  • Identification of occluded and isolated ground points reduces downstream errors in environment perception.
  • Explicit reflection noise handling improves segmentation reliability in urban or indoor LiDAR scenes.
  • No-training generalizability across 32- and 64-layer LiDAR sensors simplifies system deployment.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The approach could be inserted into existing SLAM pipelines to lower overall compute without sacrificing accuracy.
  • Extending the uncertainty model to dynamic scenes might allow tracking of moving ground surfaces.
  • Similar non-learning uncertainty techniques could transfer to related tasks such as curb or drivable-area detection.

Load-bearing premise

The polar grid, within- and cross-segment labeling, and uncertainty-adjusted adaptive slope together suffice to identify ground points reliably in complex unstructured environments without any machine learning training.

What would settle it

A test set of LiDAR scans from highly irregular terrain or dense reflective surfaces on which FugSeg's F1 score falls below that of at least one other non-learning method, or on which its throughput drops below the claimed 135 Hz on equivalent hardware.
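Such a falsification test would hinge on the same metrics the paper reports. For binary ground segmentation these can be computed as below; the two-class averaging for mIoU is an assumption about the paper's exact definition.

```python
import numpy as np

def ground_metrics(pred, gt):
    """F1, accuracy, and mIoU for binary ground segmentation.

    pred, gt: boolean arrays over points, True = ground.
    mIoU is taken as the mean of the ground and non-ground IoUs,
    which is an assumed (if common) two-class definition.
    """
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.sum(pred & gt)      # ground points correctly kept
    fp = np.sum(pred & ~gt)     # non-ground mislabeled as ground
    fn = np.sum(~pred & gt)     # ground points missed
    tn = np.sum(~pred & ~gt)    # non-ground correctly rejected
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / pred.size
    miou = (tp / (tp + fp + fn) + tn / (tn + fp + fn)) / 2
    return f1, accuracy, miou
```

A method can score high accuracy simply because most points in a driving scene are non-ground, which is why F1 and mIoU, not accuracy alone, carry the comparison.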

Figures

Figures reproduced from arXiv: 2605.08952 by Volker Schwieger, Yu Li.

Figure 1. The role of ground segmentation in a LiDAR-centric intelligent …

Figure 3. (a) Polar grid mapping; Ci,j represents the j-th cell in segment Si. (b) Categorization of cells. Green: ground points; red: reflection noise; blue: above-ground objects. Cells may contain reflection artifacts below the actual ground surface, caused by laser interference with reflective objects [18]–[20]. The objective of ground labeling is to identify all ground cells (including noisy ground cel…)

Figure 4. Traditional slope (black) versus the proposed adaptive slope (red) in …

Figure 5. Cross-segment ground propagation. (a) Ground propagation from left …

Figure 7. Elevation interpolation for arbitrary point …

Figure 8. F1-based grid search for algorithmic parameters ∆α, M, T∆r, TZ and T∆slope on sequence 08 of the SemanticKITTI dataset. The optimal configurations are highlighted in red rectangles. … and [17]: {road, sidewalk, other-ground, terrain, parking, lane-marking} compose the ground labels, {unlabeled, outlier, vegetation} are excluded from numerical evaluations, and all other labels are considered non-ground. Simil…

Figure 9. Qualitative comparison. Scenario 1: curvy slip road with a neighboring path on the left; scenario 2: bidirectional highway with a central barrier; …

Figure 10. Impact of ∆α, M, T∆r, TZ, T∆slope, σR, σϕ and σθ on FugSeg's performance on the SemanticKITTI dataset. Left vertical axis: score in [%]; right vertical axis: runtime in [ms]. The mIoU measure is not plotted due to its shifted vertical range, but it basically follows the same trend as F1 and accuracy. …

Figure 11. Performance of FugSeg on different LiDAR sensors across various environments. VLP32C and OS1-128 are self-collected measurements (including …)

Figure 12. Road boundary detection with the support of FugSeg. FugSeg helps …
read the original abstract

In LiDAR-based environment perception systems, ground segmentation is a key preprocessing step supporting various applications such as mapping and navigation. Although extensively studied, problems such as reflection noise and isolated ground remain challenging. To address these issues, we propose FugSeg, a fast uncertainty-aware ground segmentation method. A polar grid map is adopted as the point cloud representation to ensure generalizability across LiDAR types. Building on that, we develop a within- and cross-segment ground labeling strategy that identifies not only directly visible ground cells but also those that are isolated or occluded. During this process, an adaptive slope is introduced, which incorporates measurement uncertainties to enhance its reliability under complex terrain. Finally, to achieve point-level ground segmentation, a fine-grained ground elevation estimation method is introduced. Throughout the complete workflow, reflection noise is explicitly handled via the proposed noisy ground cells. We conduct comprehensive evaluations on four public datasets covering both structured and unstructured environments. Results show that FugSeg outperforms state-of-the-art non-learning methods, achieving the highest F1, accuracy, and mIoU across all datasets, while maintaining the fastest runtime (135 Hz and 487 Hz for 64- and 32-layer LiDARs) using a single CPU thread, making it suitable for resource-limited systems. The code will be available at https://github.com/Leo-YuLi/FugSeg.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes FugSeg, a fast uncertainty-aware ground segmentation algorithm for 3D point clouds from LiDAR sensors. The approach represents the point cloud in a polar grid, applies a within-segment and cross-segment labeling strategy using an adaptive slope that accounts for measurement uncertainties to label ground cells including isolated or occluded ones, and performs fine-grained elevation estimation for point-level segmentation. Reflection noise is handled by identifying noisy ground cells. The method is evaluated on four public datasets from structured and unstructured environments, claiming to achieve the best F1, accuracy, and mean IoU among non-learning methods while running at 135 Hz and 487 Hz on 64- and 32-layer LiDARs using one CPU thread.

Significance. Should the experimental claims be substantiated, this work would represent a meaningful advance in real-time 3D perception by delivering a training-free, computationally efficient ground segmentation technique that is robust to sensor noise and terrain variations. Its generalizability across different LiDAR configurations and explicit uncertainty modeling could benefit applications in autonomous driving, robotics, and mapping. The planned code release is a positive step toward reproducibility.

major comments (2)
  1. Experiments section: The results claim that FugSeg achieves the highest F1, accuracy, and mIoU across all four datasets, but no standard deviations, number of trials, or statistical significance tests (e.g., paired t-tests) are reported for the metric differences versus baselines. This is load-bearing for the central outperformance claim, as point-cloud variability could affect whether gains are consistent.
  2. Method section on cross-segment labeling and adaptive slope: The procedure for recovering occluded/isolated ground cells via cross-segment propagation is described at a high level, but the exact threshold or uncertainty propagation rule (e.g., how measurement noise variance modifies the slope adaptation) is not formalized with an equation or pseudocode. This directly impacts the skeptic concern that false positives from reflection noise could undermine the reported gains in unstructured environments.
minor comments (2)
  1. Abstract: The runtime figures (135 Hz, 487 Hz) are given without specifying the CPU model or average point count per scan, which would aid readers in assessing practicality.
  2. Introduction: A brief comparison table of prior non-learning methods' limitations (e.g., handling of occlusion) would better highlight the novelty of the uncertainty-aware components.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments and the recommendation for major revision. We address each major comment point by point below, committing to revisions that strengthen the clarity and rigor of the claims without misrepresenting the work.

read point-by-point responses
  1. Referee: Experiments section: The results claim that FugSeg achieves the highest F1, accuracy, and mIoU across all four datasets, but no standard deviations, number of trials, or statistical significance tests (e.g., paired t-tests) are reported for the metric differences versus baselines. This is load-bearing for the central outperformance claim, as point-cloud variability could affect whether gains are consistent.

    Authors: We agree that the lack of standard deviations or statistical significance testing weakens the substantiation of our outperformance claims. The method is fully deterministic and the evaluations use standard fixed benchmark datasets, so repeated random trials are not applicable. To address this, we will revise the experiments section to report standard deviations computed across multiple sequences or data subsets within each of the four datasets and explicitly discuss the consistency of the observed gains. This revision will be incorporated in the next version. revision: yes

  2. Referee: Method section on cross-segment labeling and adaptive slope: The procedure for recovering occluded/isolated ground cells via cross-segment propagation is described at a high level, but the exact threshold or uncertainty propagation rule (e.g., how measurement noise variance modifies the slope adaptation) is not formalized with an equation or pseudocode. This directly impacts the skeptic concern that false positives from reflection noise could undermine the reported gains in unstructured environments.

    Authors: We acknowledge that a more formal description of the cross-segment labeling and adaptive slope is needed to address potential concerns about noise handling. In the revised manuscript, we will add an explicit equation showing how measurement noise variance is used to adapt the slope threshold during propagation. We will also include pseudocode for the full within- and cross-segment labeling process, including the identification and filtering of noisy ground cells. These additions will clarify the uncertainty incorporation and demonstrate mitigation of reflection noise false positives. revision: yes

Circularity Check

0 steps flagged

No circularity: algorithmic workflow is self-contained and empirically validated

full rationale

The paper introduces FugSeg as an explicit sequence of geometric and uncertainty-handling steps (polar grid representation, within/cross-segment labeling, adaptive slope with measurement uncertainty, fine-grained elevation estimation, and explicit noisy-cell handling) without fitted parameters renamed as predictions, load-bearing self-citations, or an ansatz smuggled in from the authors' prior work. Performance claims rest on direct comparison against external baselines on four public datasets rather than any reduction of outputs to inputs by construction. The derivation chain is therefore independent and falsifiable outside the paper itself.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The abstract does not specify any free parameters, mathematical axioms, or newly invented entities. Concepts like 'adaptive slope' and 'noisy ground cells' are introduced but without implementation details or evidence of fitting.

pith-pipeline@v0.9.0 · 5540 in / 1232 out tokens · 70253 ms · 2026-05-12T01:55:52.704593+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches: The paper's claim is directly supported by a theorem in the formal canon.
supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses: The paper appears to rely on the theorem as machinery.
contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

41 extracted references · 41 canonical work pages

  1. [1]

    Road-segmentation-based curb detection method for self-driving via a 3d-lidar sensor,

    Y. Zhang, J. Wang, X. Wang, and J. M. Dolan, “Road-segmentation-based curb detection method for self-driving via a 3d-lidar sensor,” IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 12, pp. 3981–3991, 2018

  2. [2]

    Similar but different: A survey of ground segmentation and traversability estimation for terrestrial robots,

    H. Lim, M. Oh, S. Lee, S. Ahn, and H. Myung, “Similar but different: A survey of ground segmentation and traversability estimation for terrestrial robots,” International Journal of Control, Automation and Systems, vol. 22, pp. 347–359, 2024

  3. [3]

    Loam: Lidar odometry and mapping in real-time,

    J. Zhang and S. Singh, “Loam: Lidar odometry and mapping in real-time,” in Proceedings of Robotics: Science and Systems, Berkeley, USA, July 2014

  4. [4]

    Model based vehicle detection and tracking for autonomous urban driving,

    A. Petrovskaya and S. Thrun, “Model based vehicle detection and tracking for autonomous urban driving,” Autonomous Robots, vol. 26, pp. 123–139, 2009

  5. [5]

    Predictive cruise control using high-definition map and real vehicle implementation,

    H. Chu, L. Guo, B. Gao, H. Chen, N. Bian, and J. Zhou, “Predictive cruise control using high-definition map and real vehicle implementation,” IEEE Transactions on Vehicular Technology, vol. 67, no. 12, pp. 11377–11389, 2018

  6. [6]

    Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,

    M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM, vol. 24, no. 6, pp. 381–395, Jun. 1981

  7. [7]

    Fast segmentation of 3d point clouds: A paradigm on lidar data for autonomous vehicle applications,

    D. Zermas, I. Izzat, and N. Papanikolopoulos, “Fast segmentation of 3d point clouds: A paradigm on lidar data for autonomous vehicle applications,” in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 5067–5073

  8. [8]

    A slope-robust cascaded ground segmentation in 3d point cloud for autonomous vehicles,

    P. Narksri, E. Takeuchi, Y. Ninomiya, Y. Morales, N. Akai, and N. Kawaguchi, “A slope-robust cascaded ground segmentation in 3d point cloud for autonomous vehicles,” in 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 2018, pp. 497–504

  9. [9]

    Erasor: Egocentric ratio of pseudo occupancy-based dynamic object removal for static 3d point cloud map building,

    H. Lim, S. Hwang, and H. Myung, “Erasor: Egocentric ratio of pseudo occupancy-based dynamic object removal for static 3d point cloud map building,” IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 2272–2279, 2021

  10. [10]

    Patchwork: Concentric zone-based region-wise ground segmentation with ground likelihood estimation using a 3d lidar sensor,

    H. Lim, M. Oh, and H. Myung, “Patchwork: Concentric zone-based region-wise ground segmentation with ground likelihood estimation using a 3d lidar sensor,” IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 6458–6465, 2021

  11. [11]

    Patchwork++: Fast and robust ground segmentation solving partial under-segmentation using 3d point cloud,

    S. Lee, H. Lim, and H. Myung, “Patchwork++: Fast and robust ground segmentation solving partial under-segmentation using 3d point cloud,” in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 13276–13283

  12. [12]

    Travel: Traversable ground and above-ground object segmentation using graph representation of 3d lidar scans,

    M. Oh, E. Jung, H. Lim, W. Song, S. Hu, E. M. Lee, J. Park, J. Kim, J. Lee, and H. Myung, “Travel: Traversable ground and above-ground object segmentation using graph representation of 3d lidar scans,” IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 7255–7262, 2022

  13. [13]

    A fast ground segmentation method for 3d point cloud,

    P. M. Chu, S. Cho, S. Sim, K. H. Kwak, and K. Cho, “A fast ground segmentation method for 3d point cloud,” Journal of Information Processing Systems, vol. 13, no. 3, pp. 491–499, 2017

  14. [14]

    Enhanced ground segmentation method for lidar point clouds in human-centric autonomous robot systems,

    P. M. Chu, S. Cho, J. Park, S. Fong, and K. Cho, “Enhanced ground segmentation method for lidar point clouds in human-centric autonomous robot systems,” Human-centric Computing and Information Sciences, vol. 9, 2019

  15. [15]

    Efficient online segmentation for sparse 3d laser scans,

    I. Bogoslavskyi and C. Stachniss, “Efficient online segmentation for sparse 3d laser scans,” PFG – Journal of Photogrammetry, Remote Sensing and Geoinformation Science, vol. 85, pp. 41–52, 2017

  16. [16]

    Ground segmentation algorithm for sloped terrain and sparse lidar point cloud,

    V. Jiménez, J. Godoy, A. Artuñedo, and J. Villagra, “Ground segmentation algorithm for sloped terrain and sparse lidar point cloud,” IEEE Access, vol. 9, pp. 132914–132927, 2021

  17. [17]

    Groundgrid: Lidar point cloud ground segmentation and terrain estimation,

    N. Steinke, D. Goehring, and R. Rojas, “Groundgrid: Lidar point cloud ground segmentation and terrain estimation,” IEEE Robotics and Automation Letters, vol. 9, no. 1, pp. 420–426, 2024

  18. [18]

    Virtual point removal for large-scale 3d point clouds with multiple glass planes,

    J.-S. Yun and J.-Y. Sim, “Virtual point removal for large-scale 3d point clouds with multiple glass planes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 2, pp. 729–744, 2021

  19. [19]

    Reflective noise filtering of large-scale point cloud using multi-position lidar sensing data,

    R. Gao, J. Park, X. Hu, S. Yang, and K. Cho, “Reflective noise filtering of large-scale point cloud using multi-position lidar sensing data,” Remote Sensing, vol. 13, no. 16, 2021

  20. [20]

    A coupled optical-radiometric modeling approach to removing reflection noise in tls data of urban areas,

    L. Fang, T. Li, Y. Lin, S. Zhou, and W. Yao, “A coupled optical-radiometric modeling approach to removing reflection noise in tls data of urban areas,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 220, pp. 217–231, 2025

  21. [21]

    Fast ground segmentation for 3d lidar point cloud based on jump-convolution-process,

    Z. Shen, H. Liang, L. Lin, Z. Wang, W. Huang, and J. Yu, “Fast ground segmentation for 3d lidar point cloud based on jump-convolution-process,” Remote Sensing, vol. 13, no. 16, 2021

  22. [22]

    Dipg-seg: Fast and accurate double image-based pixel-wise ground segmentation,

    H. Wen, S. Liu, Y. Liu, and C. Liu, “Dipg-seg: Fast and accurate double image-based pixel-wise ground segmentation,” IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 6, pp. 5189–5200, 2024

  23. [23]

    Segmentation of 3d lidar data in non-flat urban environments using a local convexity criterion,

    F. Moosmann, O. Pink, and C. Stiller, “Segmentation of 3d lidar data in non-flat urban environments using a local convexity criterion,” in 2009 IEEE Intelligent Vehicles Symposium, 2009, pp. 215–220

  24. [24]

    Cylindrical and asymmetrical 3d convolution networks for lidar segmentation,

    X. Zhu, H. Zhou, T. Wang, F. Hong, Y. Ma, W. Li, H. Li, and D. Lin, “Cylindrical and asymmetrical 3d convolution networks for lidar segmentation,” in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 9934–9943

  25. [25]

    Fast segmentation of 3d point clouds for ground vehicles,

    M. Himmelsbach, F. v. Hundelshausen, and H.-J. Wuensche, “Fast segmentation of 3d point clouds for ground vehicles,” in 2010 IEEE Intelligent Vehicles Symposium, 2010, pp. 560–565

  26. [26]

    Cnn for very fast ground segmentation in velodyne lidar data,

    M. Velas, M. Spanel, M. Hradis, and A. Herout, “Cnn for very fast ground segmentation in velodyne lidar data,” in 2018 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 2018, pp. 97–103

  27. [27]

    Gndnet: Fast ground plane estimation and point cloud segmentation for autonomous vehicles,

    A. Paigwar, Ö. Erkent, D. Sierra-Gonzalez, and C. Laugier, “Gndnet: Fast ground plane estimation and point cloud segmentation for autonomous vehicles,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 2150–2156

  28. [28]

    Pointnet: Deep learning on point sets for 3d classification and segmentation,

    R. Q. Charles, H. Su, M. Kaichun, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 77–85

  29. [29]

    Sectorgsnet: Sector learning for efficient ground segmentation of outdoor lidar point clouds,

    D. He, F. Abid, Y.-M. Kim, and J.-H. Kim, “Sectorgsnet: Sector learning for efficient ground segmentation of outdoor lidar point clouds,” IEEE Access, vol. 10, pp. 11938–11946, 2022

  30. [30]

    Data Reduction and Error Analysis for the Physical Sciences,

    P. Bevington and D. Robinson, Data Reduction and Error Analysis for the Physical Sciences. McGraw-Hill Education, 2003

  31. [31]

    Are we ready for autonomous driving? the kitti vision benchmark suite,

    A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? the kitti vision benchmark suite,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 3354–3361

  32. [32]

    Semantickitti: A dataset for semantic scene understanding of lidar sequences,

    J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, and J. Gall, “Semantickitti: A dataset for semantic scene understanding of lidar sequences,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 9296–9306

  33. [33]

    nuscenes: A multimodal dataset for autonomous driving,

    H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom, “nuscenes: A multimodal dataset for autonomous driving,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 11618–11628

  34. [34]

    Kitti-360: A novel dataset and benchmarks for urban scene understanding in 2d and 3d,

    Y. Liao, J. Xie, and A. Geiger, “Kitti-360: A novel dataset and benchmarks for urban scene understanding in 2d and 3d,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 3, pp. 3292–3310, 2023

  35. [35]

    Lidardustx: A lidar dataset for dusty unstructured road environments,

    C. Wei, Q. Wu, S. Zuo, J. Xu, B. Zhao, Z. Yang, G. Xie, and S. Wang, “Lidardustx: A lidar dataset for dusty unstructured road environments,” in 2025 IEEE International Conference on Robotics and Automation (ICRA), 2025, pp. 12703–12709

  36. [36]

    Calibration of a rotating multi-beam lidar,

    N. Muhammad and S. Lacroix, “Calibration of a rotating multi-beam lidar,” in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010, pp. 5648–5653

  37. [37]

    Static calibration and analysis of the velodyne hdl-64e s2 for high accuracy mobile scanning,

    C. Glennie and D. D. Lichti, “Static calibration and analysis of the velodyne hdl-64e s2 for high accuracy mobile scanning,” Remote Sensing, vol. 2, no. 6, pp. 1610–1624, 2010

  38. [38]

    Unsupervised Calibration for Multi-beam Lasers,

    J. Levinson and S. Thrun, Unsupervised Calibration for Multi-beam Lasers. Springer Berlin Heidelberg, 2014, pp. 179–193

  39. [39]

    2dpass: 2d priors assisted semantic segmentation on lidar point clouds,

    X. Yan, J. Gao, C. Zheng, C. Zheng, R. Zhang, S. Cui, and Z. Li, “2dpass: 2d priors assisted semantic segmentation on lidar point clouds,” in European Conference on Computer Vision. Springer, 2022, pp. 677–695

  40. [40]

    Spherical transformer for lidar-based 3d recognition,

    X. Lai, Y. Chen, F. Lu, J. Liu, and J. Jia, “Spherical transformer for lidar-based 3d recognition,” in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 17545–17555

  41. [41]

    Lsk3dnet: Towards effective and efficient 3d perception with large sparse kernels,

    T. Feng, W. Wang, F. Ma, and Y. Yang, “Lsk3dnet: Towards effective and efficient 3d perception with large sparse kernels,” in 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 14916–14927