pith. machine review for the scientific record.

arxiv: 2512.15369 · v2 · submitted 2025-12-17 · 💻 cs.CV

Recognition: no theorem link

SemanticBridge - A Dataset for 3D Semantic Segmentation of Bridges and Domain Gap Analysis

Authors on Pith · no claims yet

Pith reviewed 2026-05-16 21:46 UTC · model grok-4.3

classification 💻 cs.CV
keywords 3D semantic segmentation · bridge inspection · domain gap · point cloud dataset · infrastructure monitoring · structural health monitoring · sensor variation

The pith

A new dataset of labeled 3D bridge scans shows sensor differences can drop segmentation accuracy by up to 11.4 percent mIoU.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents SemanticBridge, a collection of high-resolution 3D scans of diverse bridge structures from multiple countries, each with detailed semantic labels for components. It tests three state-of-the-art 3D deep learning architectures on the data and finds they handle the segmentation task robustly. The work further acquires scans from different sensors to measure the resulting domain gap, which produces mIoU drops reaching 11.4 percent. This addresses a practical need in infrastructure inspection because manual analysis of bridge scans is slow and error-prone, while automated segmentation could support ongoing structural health monitoring. A sympathetic reader sees the contribution as supplying both the data resource and the first quantified evidence of sensor-induced performance limits on this specific class of objects.
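
To make the headline number concrete, the sketch below shows how mean intersection-over-union is computed from a per-class confusion matrix and how a cross-sensor drop falls out of two such evaluations. This is a minimal illustration under assumed conventions (rows as ground truth, absent classes skipped), not the paper's evaluation code.

    import numpy as np

    def mean_iou(conf):
        """mIoU from a KxK confusion matrix (rows: ground truth, cols: prediction)."""
        conf = np.asarray(conf, dtype=np.float64)
        tp = np.diag(conf)                                # correctly labeled points per class
        union = conf.sum(axis=0) + conf.sum(axis=1) - tp  # TP + FP + FN per class
        valid = union > 0                                 # skip classes absent from GT and prediction
        return float((tp[valid] / union[valid]).mean())

    # The domain gap is the difference between two such scores: one from test
    # scans of the training sensor, one from test scans of a different sensor.
    # gap = mean_iou(conf_same_sensor) - mean_iou(conf_other_sensor)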

Core claim

The SemanticBridge dataset supplies high-resolution 3D bridge scans from various countries together with semantic labels, enabling evaluation of existing 3D segmentation models that prove robust overall, yet revealing that sensor variations introduce a domain gap capable of reducing mean intersection-over-union by as much as 11.4 percent.

What carries the argument

The SemanticBridge dataset of multi-country, multi-sensor 3D point clouds carrying component-level semantic labels, used both to train and test segmentation models and to isolate performance loss from sensor domain shift.

If this is right

  • Existing 3D segmentation models can be applied directly to bridge inspection with acceptable baseline accuracy.
  • Data collection protocols for infrastructure must include sensor calibration or adaptation steps to limit accuracy loss.
  • The dataset supplies a public benchmark for comparing future models on bridge-specific point clouds.
  • Domain-gap quantification supplies a concrete target for developing sensor-invariant or adaptation techniques.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Models trained on one sensor type will likely need explicit domain-adaptation layers when deployed on scans from another device.
  • The same sensor-gap pattern may appear in related tasks such as road or building segmentation, suggesting a broader need for multi-sensor infrastructure datasets.
  • Extending the dataset with temporal scans of the same bridges would allow testing whether the 11.4 percent drop persists across time or changes with structural aging.
  • Integration with real-time monitoring platforms could use the quantified gap to set confidence thresholds for automated alerts.

Load-bearing premise

The chosen state-of-the-art architectures and the collected multi-sensor scans are representative of actual bridge inspection conditions without label noise or scan-quality differences that would distort the measured domain gap.

What would settle it

New 3D scans of the same bridges acquired with a different sensor, followed by retraining and cross-testing the same three architectures, would show whether the mIoU drop between sensor domains consistently reaches or exceeds 11.4 percent.
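
Schematically, that settling experiment is a train/test matrix over sensors. The sketch below assumes user-supplied train_model and evaluate_miou hooks standing in for the paper's three architectures and its evaluation pipeline; none of the names are taken from the paper except UNet3D, which appears in Figure 4.

    def cross_sensor_gaps(scans, archs, train_model, evaluate_miou):
        """scans: {sensor_name: list of labeled point clouds}, at least two sensors.
        Returns, per (architecture, training sensor), the in-domain mIoU minus
        the worst cross-sensor mIoU."""
        results = {}
        for arch in archs:                  # e.g. ["unet3d", ...]; other tags unknown here
            for src in scans:
                model = train_model(arch, scans[src])    # retrain per source sensor
                for dst in scans:
                    results[(arch, src, dst)] = evaluate_miou(model, scans[dst])
        return {
            (arch, src): results[(arch, src, src)]
                         - min(results[(arch, src, dst)] for dst in scans if dst != src)
            for arch in archs for src in scans
        }

    # If the returned gaps consistently reach or exceed roughly 11 mIoU points,
    # the paper's cross-sensor finding replicates on the new scans.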

Figures

Figures reproduced from arXiv: 2512.15369 by Alexander Reiterer, Ioannis Brilakis, Mariana Ferrandon Cervantes, Maximilian Kellner, Ruodan Lu, Yuandong Pan.

Figure 1. Visualization of point clouds showing the same … [image: figures/full_fig_p002_1.png]
Figure 2. Visualization of point clouds showing a colored … [image: figures/full_fig_p006_2.png]
Figure 3. Visualization of point clouds showing a colored … [image: figures/full_fig_p008_3.png]
Figure 4. Visualization of example predictions. The initial column depicts the outcomes obtained with the UNet3D, the … [image: figures/full_fig_p011_4.png]
original abstract

We propose a novel dataset that has been specifically designed for 3D semantic segmentation of bridges and the domain gap analysis caused by varying sensors. This addresses a critical need in the field of infrastructure inspection and maintenance, which is essential for modern society. The dataset comprises high-resolution 3D scans of a diverse range of bridge structures from various countries, with detailed semantic labels provided for each. Our initial objective is to facilitate accurate and automated segmentation of bridge components, thereby advancing the structural health monitoring practice. To evaluate the effectiveness of existing 3D deep learning models on this novel dataset, we conduct a comprehensive analysis of three distinct state-of-the-art architectures. Furthermore, we present data acquired through diverse sensors to quantify the domain gap resulting from sensor variations. Our findings indicate that all architectures demonstrate robust performance on the specified task. However, the domain gap can potentially lead to a decline in the performance of up to 11.4% mIoU.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper proposes the SemanticBridge dataset of high-resolution 3D scans of diverse bridge structures from various countries, equipped with detailed semantic labels for 3D semantic segmentation. It evaluates three state-of-the-art 3D deep learning architectures on the dataset and quantifies the domain gap from different sensors, reporting robust performance across models but a potential mIoU decline of up to 11.4% attributable to sensor variations.

Significance. If the dataset curation and cross-sensor evaluation are rigorous, the work supplies a needed benchmark for automated bridge inspection via point clouds and highlights practical limits on cross-sensor generalization in infrastructure monitoring.

major comments (2)
  1. [Abstract] The headline claim of an 11.4% mIoU drop is presented without any dataset statistics (number of scans, points per scan, class balance), label protocol, inter-annotator agreement figures, or model hyper-parameters, leaving the magnitude and attribution of the domain gap unverifiable.
  2. [Methods] Experimental setup: no evidence is supplied that multi-sensor scans were matched on point density, coverage, or noise characteristics, nor are any point-cloud statistics or label-consistency checks reported; without these controls the observed mIoU gap cannot be isolated from confounding data-quality differences.
minor comments (1)
  1. [Abstract] The qualifier 'up to 11.4%' is imprecise; the maximum drop should be tied to a specific architecture-sensor pair and accompanied by the corresponding baseline mIoU.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive and detailed feedback. We address each major comment point by point below, indicating the revisions we will incorporate to improve the clarity and rigor of the manuscript.

point-by-point responses
  1. Referee: [Abstract] The headline claim of an 11.4% mIoU drop is presented without any dataset statistics (number of scans, points per scan, class balance), label protocol, inter-annotator agreement figures, or model hyper-parameters, leaving the magnitude and attribution of the domain gap unverifiable.

    Authors: We agree that the abstract would benefit from additional context to support the key claim. In the revised version, we will include summary dataset statistics (total number of scans, average points per scan, and class balance) along with a brief reference to the label protocol and inter-annotator agreement figures. Model hyper-parameters are already detailed in the Experimental Setup section; we will add an explicit pointer to this section in the abstract so readers can readily verify the reported domain gap. revision: yes

  2. Referee: [Methods] Experimental setup: no evidence is supplied that multi-sensor scans were matched on point density, coverage, or noise characteristics, nor are any point-cloud statistics or label-consistency checks reported; without these controls the observed mIoU gap cannot be isolated from confounding data-quality differences.

    Authors: We acknowledge the need for explicit controls to isolate sensor effects. While the multi-sensor scans were acquired from identical bridge structures under comparable conditions, we did not report matching statistics in the current manuscript. In the revision, we will add point-cloud statistics (density, coverage, and noise characteristics per sensor), describe the label-consistency verification process, and clarify how confounding data-quality factors were controlled. This will strengthen the attribution of the observed mIoU decline to sensor variations. revision: yes
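
For the promised matching statistics, here is a minimal sketch of two per-sensor descriptors, point spacing (a density proxy) and local planar residual (a noise proxy), computed with a k-d tree. The neighborhood size and the SciPy-based approach are assumptions, not the authors' verification pipeline.

    import numpy as np
    from scipy.spatial import cKDTree

    def scan_stats(points, k=8):
        """points: (N, 3) array for one scan; k is an arbitrary neighborhood size.
        Returns median nearest-neighbor spacing and median distance of each point
        to the best-fit plane of its local neighborhood."""
        tree = cKDTree(points)
        dists, idx = tree.query(points, k=k + 1)   # neighbor 0 is the point itself
        spacing = float(np.median(dists[:, 1]))    # typical inter-point distance
        residuals = np.empty(len(points))
        for i, nbrs in enumerate(idx):
            centroid = points[nbrs].mean(axis=0)
            # direction of smallest variance approximates the local surface normal
            _, _, vt = np.linalg.svd(points[nbrs] - centroid, full_matrices=False)
            residuals[i] = abs((points[i] - centroid) @ vt[-1])
        return spacing, float(np.median(residuals))

Comparing such statistics across sensors on the same bridges would indicate whether the reported gap reflects the sensor domain itself or residual data-quality mismatch.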

Circularity Check

0 steps flagged

Empirical dataset creation and model evaluation with no derivations or self-referential claims

full rationale

The paper introduces a new multi-sensor bridge point-cloud dataset and reports direct empirical results from training three off-the-shelf 3D segmentation architectures on it. No equations, fitted parameters, uniqueness theorems, or ansatzes appear; the reported mIoU values and the 11.4% domain-gap figure are simply measured outcomes on the authors' train/test splits. Because the work contains no derivation chain that could reduce to its own inputs, no circular steps exist.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The paper relies on standard assumptions from 3D deep learning literature for semantic segmentation and on the representativeness of the collected bridge scans; no free parameters, new axioms, or invented entities are introduced.

axioms (1)
  • domain assumption: Existing 3D semantic segmentation architectures transfer to bridge point clouds without fundamental architectural changes.
    The evaluation of three state-of-the-art models on the new dataset implicitly assumes these models remain effective in the bridge domain.

pith-pipeline@v0.9.0 · 5482 in / 1247 out tokens · 73776 ms · 2026-05-16T21:46:31.275517+00:00 · methodology

discussion (0)

