SemanticBridge - A Dataset for 3D Semantic Segmentation of Bridges and Domain Gap Analysis
Pith reviewed 2026-05-16 21:46 UTC · model grok-4.3
The pith
A new dataset of labeled 3D bridge scans shows that sensor differences can cut segmentation accuracy by up to 11.4% mIoU.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The SemanticBridge dataset supplies high-resolution, semantically labeled 3D bridge scans from several countries. Evaluated on it, existing 3D segmentation models prove robust overall, yet sensor variations introduce a domain gap that can reduce mean intersection-over-union (mIoU) by as much as 11.4%.
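Since the headline number is an mIoU delta, a minimal sketch of how the metric itself is computed may help; the toy class labels below are illustrative, not the dataset's actual label set:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union: per-class IoU averaged over
    classes that appear in the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:  # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example with 3 hypothetical classes (e.g. deck, pillar, railing).
gt = np.array([0, 0, 1, 1, 2, 2])
pred = np.array([0, 1, 1, 1, 2, 0])
print(mean_iou(pred, gt, num_classes=3))  # 0.5
```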
What carries the argument
The SemanticBridge dataset of multi-country, multi-sensor 3D point clouds carrying component-level semantic labels, used both to train and test segmentation models and to isolate performance loss from sensor domain shift.
If this is right
- Existing 3D segmentation models can be applied directly to bridge inspection with acceptable baseline accuracy.
- Data collection protocols for infrastructure must include sensor calibration or adaptation steps to limit accuracy loss.
- The dataset supplies a public benchmark for comparing future models on bridge-specific point clouds.
- Domain-gap quantification supplies a concrete target for developing sensor-invariant or adaptation techniques.
Where Pith is reading between the lines
- Models trained on one sensor type will likely need explicit domain-adaptation layers when deployed on scans from another device.
- The same sensor-gap pattern may appear in related tasks such as road or building segmentation, suggesting a broader need for multi-sensor infrastructure datasets.
- Extending the dataset with temporal scans of the same bridges would allow testing whether the 11.4 percent drop persists across time or changes with structural aging.
- Integration with real-time monitoring platforms could use the quantified gap to set confidence thresholds for automated alerts.
Load-bearing premise
The chosen state-of-the-art architectures and the collected multi-sensor scans are representative of real bridge inspection conditions, and the scans are free of label noise or scan-quality differences that would otherwise distort the measured domain gap.
What would settle it
New 3D scans of the same bridges acquired with a different sensor, followed by retraining and cross-testing the same three architectures, would show whether the mIoU drop between sensor domains consistently reaches or exceeds 11.4 percent.
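A minimal sketch of that cross-testing protocol, assuming generic `train_model` and `evaluate_miou` stand-ins for whatever training and evaluation pipeline is used; the sensor names are placeholders, not the paper's:

```python
# Hypothetical cross-sensor protocol: train on each sensor domain,
# evaluate in-domain and cross-domain, and record the mIoU gap.
# `train_model` and `evaluate_miou` are assumed stand-ins, not the
# paper's actual code.

def domain_gap_matrix(datasets, sensors, train_model, evaluate_miou):
    baselines, gaps = {}, {}
    for src in sensors:
        model = train_model(datasets[src]["train"])
        baselines[src] = evaluate_miou(model, datasets[src]["test"])
        for tgt in sensors:
            if tgt == src:
                continue
            cross = evaluate_miou(model, datasets[tgt]["test"])
            # Positive gap = mIoU lost when switching to another sensor.
            gaps[(src, tgt)] = baselines[src] - cross
    return baselines, gaps
```

If the largest entry of the gap matrix repeatedly reaches or exceeds 11.4 mIoU points across retrainings, the paper's figure would be corroborated rather than an artifact of one split.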
Original abstract
We propose a novel dataset that has been specifically designed for 3D semantic segmentation of bridges and the domain gap analysis caused by varying sensors. This addresses a critical need in the field of infrastructure inspection and maintenance, which is essential for modern society. The dataset comprises high-resolution 3D scans of a diverse range of bridge structures from various countries, with detailed semantic labels provided for each. Our initial objective is to facilitate accurate and automated segmentation of bridge components, thereby advancing the structural health monitoring practice. To evaluate the effectiveness of existing 3D deep learning models on this novel dataset, we conduct a comprehensive analysis of three distinct state-of-the-art architectures. Furthermore, we present data acquired through diverse sensors to quantify the domain gap resulting from sensor variations. Our findings indicate that all architectures demonstrate robust performance on the specified task. However, the domain gap can potentially lead to a decline in the performance of up to 11.4% mIoU.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes the SemanticBridge dataset of high-resolution 3D scans of diverse bridge structures from various countries, equipped with detailed semantic labels for 3D semantic segmentation. It evaluates three state-of-the-art 3D deep learning architectures on the dataset and quantifies the domain gap from different sensors, reporting robust performance across models but a potential mIoU decline of up to 11.4% attributable to sensor variations.
Significance. If the dataset curation and cross-sensor evaluation are rigorous, the work supplies a needed benchmark for automated bridge inspection via point clouds and highlights practical limits on cross-sensor generalization in infrastructure monitoring.
major comments (2)
- [Abstract] The headline claim of an 11.4% mIoU drop is presented without any dataset statistics (number of scans, points per scan, class balance), label protocol, inter-annotator agreement figures, or model hyper-parameters, leaving the magnitude and attribution of the domain gap unverifiable.
- [Methods] Experimental setup: no evidence is supplied that multi-sensor scans were matched on point density, coverage, or noise characteristics, nor are any point-cloud statistics or label-consistency checks reported; without these controls the observed mIoU gap cannot be isolated from confounding data-quality differences (see the sketch after this list).
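One concrete form such a control could take is comparing simple per-scan statistics across sensor domains before attributing the gap to the sensor itself; a sketch, assuming `scipy` is available (the chosen proxies are illustrative, not the paper's):

```python
import numpy as np
from scipy.spatial import cKDTree

def point_cloud_stats(points):
    """Crude matching statistics for a point cloud of shape (N, 3).

    Median nearest-neighbor distance proxies local point density;
    bounding-box volume proxies coverage; the NN-distance spread
    proxies sampling uniformity/noise.
    """
    tree = cKDTree(points)
    # k=2 because the nearest neighbor of each point is itself.
    dists, _ = tree.query(points, k=2)
    nn = dists[:, 1]
    extent = points.max(axis=0) - points.min(axis=0)
    return {
        "num_points": len(points),
        "median_nn_dist": float(np.median(nn)),       # density proxy
        "bbox_volume": float(np.prod(extent)),        # coverage proxy
        "nn_dist_iqr": float(np.percentile(nn, 75)
                             - np.percentile(nn, 25)),  # uniformity proxy
    }
```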
minor comments (1)
- [Abstract] The qualifier 'up to 11.4%' is imprecise; the maximum drop should be tied to a specific architecture-sensor pair and accompanied by the corresponding baseline mIoU (a reporting sketch follows).
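A sketch of the reporting this asks for, reusing the hypothetical `baselines` and `gaps` from the protocol sketched under "What would settle it":

```python
# Tie the headline figure to a concrete source->target pair and its
# in-domain baseline, rather than reporting a bare "up to 11.4%".
src, tgt = max(gaps, key=gaps.get)
print(f"Largest drop: {gaps[(src, tgt)]:.1f} mIoU points "
      f"({src} -> {tgt}, baseline {baselines[src]:.1f} mIoU)")
```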
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed feedback. We address each major comment point by point below, indicating the revisions we will incorporate to improve the clarity and rigor of the manuscript.
Point-by-point responses
Referee: [Abstract] The headline claim of an 11.4% mIoU drop is presented without any dataset statistics (number of scans, points per scan, class balance), label protocol, inter-annotator agreement figures, or model hyper-parameters, leaving the magnitude and attribution of the domain gap unverifiable.
Authors: We agree that the abstract would benefit from additional context to support the key claim. In the revised version, we will include summary dataset statistics (total number of scans, average points per scan, and class balance) along with a brief reference to the label protocol and inter-annotator agreement figures. Model hyper-parameters are already detailed in the Experimental Setup section; we will add an explicit pointer to this section in the abstract so readers can readily verify the reported domain gap. Revision: yes.
Referee: [Methods] Experimental setup: no evidence is supplied that multi-sensor scans were matched on point density, coverage, or noise characteristics, nor are any point-cloud statistics or label-consistency checks reported; without these controls the observed mIoU gap cannot be isolated from confounding data-quality differences.
Authors: We acknowledge the need for explicit controls to isolate sensor effects. While the multi-sensor scans were acquired from identical bridge structures under comparable conditions, we did not report matching statistics in the current manuscript. In the revision, we will add point-cloud statistics (density, coverage, and noise characteristics per sensor), describe the label-consistency verification process, and clarify how confounding data-quality factors were controlled. This will strengthen the attribution of the observed mIoU decline to sensor variations. Revision: yes.
Circularity Check
Empirical dataset creation and model evaluation with no derivations or self-referential claims
Full rationale
The paper introduces a new multi-sensor bridge point-cloud dataset and reports direct empirical results from training three off-the-shelf 3D segmentation architectures on it. No equations, fitted parameters, uniqueness theorems, or ansatzes appear; the reported mIoU values and the 11.4% domain-gap figure are simply measured outcomes on the authors' train/test splits. Because the work contains no derivation chain that could reduce to its own inputs, no circular steps exist.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: Existing 3D semantic segmentation architectures transfer to bridge point clouds without fundamental architectural changes.