WILD SAM: A Simulated-and-Real Data Augmentation for Autonomous Driving Perception under Challenging Weather
Pith reviewed 2026-05-09 19:09 UTC · model grok-4.3
The pith
Denoising pseudo-labels from real adverse-weather images and mixing them with simulated data raises object detection accuracy in rain and snow by up to 13 percent.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The authors present the WILD framework, which filters noisy pseudo-labels generated by a detector on real images captured under rain or snow, and the WILD SAM hybrid methodology that trains on both the denoised real pseudo-labels and simulated data drawn from the target adverse-weather domain. On the Four Seasons dataset, the combined approach raises average precision by as much as 13 percent and shrinks the weather-induced performance gap relative to a standard baseline.
What carries the argument
The WILD pseudo-label denoising filter that removes unreliable detections from real adverse-weather images, paired with the WILD SAM hybrid training loop that augments simulation data with the cleaned real labels.
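The review does not reproduce WILD's actual filtering criteria (the referee notes below that they are unspecified), so the following is only a minimal sketch of the general denoise-then-mix pattern: a hypothetical confidence-threshold filter feeding a hybrid real-plus-simulated training set. The function names, the `score_thresh` value, and the data layout are all assumptions, not the paper's method:

```python
# Hypothetical sketch of the denoise-then-mix pattern; the actual WILD
# filtering criteria and thresholds are not given in this review.

def denoise_pseudo_labels(detections, score_thresh=0.7):
    """Keep only pseudo-label detections whose confidence clears a threshold."""
    return [d for d in detections if d["score"] >= score_thresh]

def build_hybrid_training_set(real_images, simulated_samples, score_thresh=0.7):
    """Mix denoised real pseudo-labels with simulated adverse-weather samples."""
    hybrid = list(simulated_samples)
    for image, detections in real_images:
        kept = denoise_pseudo_labels(detections, score_thresh)
        if kept:  # drop images where no pseudo-label survives filtering
            hybrid.append((image, kept))
    return hybrid
```

In this sketch the detector trained on the hybrid set sees real adverse-weather images only through labels that passed the filter, which is the premise the "Load-bearing premise" section below identifies as critical.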
If this is right
- Object detectors trained this way maintain higher precision under rain and snow without depending solely on synthetic data.
- Real captured footage from harsh weather becomes a usable training resource once the denoising step is applied.
- The domain gap between clear and adverse conditions shrinks measurably, improving safety margins for autonomous perception.
- Hybrid real-plus-simulated training outperforms either pure simulation or unfiltered pseudo-labeling alone.
Where Pith is reading between the lines
- The same denoising-plus-hybrid pattern could be tested on other domain shifts such as night or fog by swapping the weather-specific filter criteria.
- If the denoising threshold proves stable across datasets, the method might reduce the need for large-scale manual labeling campaigns in new weather regimes.
- Extending the hybrid loop to include multiple weather types simultaneously could produce detectors that generalize across a wider range of conditions than single-weather baselines.
Load-bearing premise
That pseudo-labels produced by a model on real adverse-weather images contain enough correct signal to be filtered reliably without discarding useful boxes or adding systematic errors.
What would settle it
Apply the full WILD SAM pipeline to a fresh adverse-weather dataset and measure whether the average-precision gain over the baseline matches the reported improvement of up to 13 percent; a null or negative result would falsify the central claim.
original abstract
The performance of state-of-the-art object detectors degrades significantly under adverse weather, causing a safety-critical domain shift problem for autonomous vehicles. Recent efforts address this problem by relying on synthetic data to train the object detectors, which limits their real-world applicability. Meanwhile, pseudo-labeling is widely used for cross-dataset domain adaptation problems. However, these methods have not been exploited by weather-based domain adaptation approaches due to the noisy nature of such labels generated under harsh weather conditions. In this paper, we propose two new approaches to mitigate this weather-induced domain shift. First, we propose a Weather-Induced pseudo Label Denoising (WILD) framework that filters noisy pseudo labels generated by real data captured under adverse weather conditions. Second, we develop a novel hybrid training methodology, WILD SAM, that exploits both pseudo-label denoising and simulation-based training solutions while using real data from the target harsh-weather domain. We validate both proposed approaches, WILD and WILD SAM, on the recently released Four Seasons dataset across rainy and snowy scenarios. Experiments show that the proposed frameworks improve Average Precision (AP) up to 13% and significantly reduce the weather-induced performance gap relative to the baseline. The code is available at: https://github.com/Kh-Hamed/WILD-SAM
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes WILD, a framework to filter noisy pseudo-labels generated by object detectors on real adverse-weather images, and WILD SAM, a hybrid training method that combines these denoised real pseudo-labels with simulation data. Experiments on the Four Seasons dataset for rainy and snowy conditions report up to 13% AP gains and a reduced weather-induced performance gap relative to baseline detectors.
Significance. If the denoising step is shown to be reliable, the hybrid real-plus-simulation approach could meaningfully advance practical domain adaptation for autonomous driving perception under weather shifts, moving beyond purely synthetic training. The public code release is a clear strength for reproducibility.
major comments (2)
- [Abstract] The central claim of up to 13% AP improvement and a reduced weather gap rests on WILD producing usable denoised pseudo-labels, yet the manuscript supplies no quantitative verification of label quality (e.g., precision/recall of retained boxes against ground truth on a held-out split, or before/after label statistics) and no details on the denoising algorithm or threshold choices. Without these, it is unclear whether the observed gains arise from reliable denoising rather than from simply adding more real data or simulation alone.
- [Experiments] Baseline implementations, statistical significance tests, and an ablation isolating the contribution of denoised pseudo-labels versus simulation data are not described, preventing verification that the reported gains are robust to experimental choices.
minor comments (2)
- [Abstract] The abstract mentions 'recently released Four Seasons dataset' but does not specify the exact subsets, weather conditions, or object classes used in the reported AP numbers.
- [Method] Notation for the hybrid loss or filtering criteria in WILD could be clarified with an explicit equation or pseudocode.
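On the second minor point, one illustrative way to make the hybrid objective explicit is a convex combination of the detection loss over the two data sources. This is purely hypothetical: the manuscript's actual loss, and whether any weighting term λ exists, are not stated in this review.

```latex
\mathcal{L}_{\text{hybrid}}
  = \lambda \, \mathcal{L}_{\text{det}}\!\left(D_{\text{real}}^{\text{denoised}}\right)
  + (1 - \lambda) \, \mathcal{L}_{\text{det}}\!\left(D_{\text{sim}}\right),
  \qquad \lambda \in [0, 1]
```

Here $D_{\text{real}}^{\text{denoised}}$ is the set of real adverse-weather samples with filtered pseudo-labels and $D_{\text{sim}}$ the simulated set; $\lambda$ would balance the two contributions.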
Simulated Author's Rebuttal
We thank the referee for the constructive feedback and the recommendation for major revision. We address each major comment below, agreeing where the manuscript requires strengthening, and describe the revisions we will incorporate.
point-by-point responses
-
Referee: [Abstract] The central claim of up to 13% AP improvement and a reduced weather gap rests on WILD producing usable denoised pseudo-labels, yet the manuscript supplies no quantitative verification of label quality (e.g., precision/recall of retained boxes against ground truth on a held-out split, or before/after label statistics) and no details on the denoising algorithm or threshold choices. Without these, it is unclear whether the observed gains arise from reliable denoising rather than from simply adding more real data or simulation alone.
Authors: We agree that the manuscript would benefit from explicit verification of the denoising step. In the revised version, we will expand the WILD section with a full description of the denoising algorithm, including the specific filtering criteria, threshold values, and their selection rationale. We will also add quantitative analysis: precision/recall of retained pseudo-labels versus ground truth on a held-out validation split, plus before-and-after statistics on label counts and quality metrics. These additions will clarify that performance gains stem from improved label reliability rather than data volume alone. revision: yes
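The precision/recall check the referee asks for can be sketched generically: match retained pseudo-label boxes to ground-truth boxes by IoU and count matches. This is a standard evaluation pattern, not the paper's code; the 2D axis-aligned box format, the greedy one-to-one matching, and the 0.5 IoU threshold are all assumptions for illustration:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def label_precision_recall(pseudo_boxes, gt_boxes, iou_thresh=0.5):
    """Greedy one-to-one matching of retained pseudo-labels to ground truth."""
    matched_gt = set()
    tp = 0
    for pb in pseudo_boxes:
        best_j, best_iou = -1, iou_thresh
        for j, gb in enumerate(gt_boxes):
            if j in matched_gt:
                continue
            v = iou(pb, gb)
            if v >= best_iou:
                best_j, best_iou = j, v
        if best_j >= 0:
            matched_gt.add(best_j)
            tp += 1
    precision = tp / len(pseudo_boxes) if pseudo_boxes else 0.0
    recall = tp / len(gt_boxes) if gt_boxes else 0.0
    return precision, recall
```

Run before and after the WILD filter on a held-out split with ground truth, rising precision at roughly stable recall would be the signature of successful denoising.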
-
Referee: [Experiments] Baseline implementations, statistical significance tests, and an ablation isolating the contribution of denoised pseudo-labels versus simulation data are not described, preventing verification that the reported gains are robust to experimental choices.
Authors: We acknowledge that additional experimental rigor is needed. The revised manuscript will include complete specifications of all baseline implementations and hyperparameters. We will report statistical significance tests (e.g., paired t-tests across multiple random seeds) for the observed AP improvements. We will further add dedicated ablation studies that separately evaluate (i) WILD denoising alone, (ii) simulation data alone, and (iii) the combined WILD SAM approach, thereby isolating the contribution of each component. revision: yes
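The paired t-test the authors promise amounts to testing whether the per-seed AP differences are centered away from zero. A minimal stdlib sketch (illustrative only; the authors would presumably compare the t statistic against the t-distribution's critical value for df degrees of freedom, e.g. via `scipy.stats.ttest_rel`):

```python
import math
import statistics

def paired_t_statistic(ap_baseline, ap_proposed):
    """Paired t statistic over per-seed AP scores; returns (t, df) with df = n - 1.

    Assumes the two lists are aligned by random seed and have at least two
    entries with non-identical differences (otherwise stdev is zero).
    """
    diffs = [b - a for a, b in zip(ap_baseline, ap_proposed)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1
```

A large positive t over several seeds would support the claim that the AP gains are not an artifact of a single training run.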
Circularity Check
No significant circularity: empirical method with external validation
full rationale
The paper proposes two empirical frameworks (WILD for pseudo-label denoising and WILD SAM for hybrid simulated+real training) and evaluates them via experiments on the public Four Seasons dataset. No mathematical derivation chain, fitted parameters renamed as predictions, or self-citation load-bearing steps are present. Improvements (up to 13% AP) are reported as experimental outcomes against baselines, not derived by construction from inputs. The work relies on external benchmarks and code release rather than self-referential definitions or ansatzes.
Axiom & Free-Parameter Ledger
axioms (1)
- [domain assumption] Pseudo-labels generated under adverse weather contain recoverable signal that can be denoised without introducing new systematic errors.
Reference graph
Works this paper leans on
- [1] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom, "PointPillars: Fast encoders for object detection from point clouds," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 12697–12705.
- [2] Y. Yan, Y. Mao, and B. Li, "SECOND: Sparsely embedded convolutional detection," Sensors, vol. 18, no. 10, p. 3337, 2018.
- [3] J. Deng, S. Shi, P. Li, W. Zhou, Y. Zhang, and H. Li, "Voxel R-CNN: Towards high performance voxel-based 3D object detection," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, 2021, pp. 1201–1209.
- [4] S. Shi, L. Jiang, J. Deng, Z. Wang, C. Guo, J. Shi, X. Wang, and H. Li, "PV-RCNN++: Point-voxel feature set abstraction with local vector representation for 3D object detection," International Journal of Computer Vision, vol. 131, no. 2, pp. 531–551, 2023.
- [5] H. Gupta, O. Kotlyar, H. Andreasson, and A. J. Lilienthal, "Robust object detection in challenging weather conditions," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2024, pp. 7523–7532.
- [6] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2223–2232.
- [7] L. A. Gatys, A. S. Ecker, and M. Bethge, "A neural algorithm of artistic style," arXiv preprint arXiv:1508.06576, 2015.
- [8] M. Hnewa and H. Radha, "Integrated multiscale domain adaptive YOLO," IEEE Transactions on Image Processing, vol. 32, pp. 1857–1867, 2023.
- [9] Y. Chen, W. Li, C. Sakaridis, D. Dai, and L. Van Gool, "Domain adaptive Faster R-CNN for object detection in the wild," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 3339–3348.
- [10] D. Kent, M. Alyaqoub, X. Lu, H. Khatounabadi, K. Sung, C. Scheller, A. Dalat, A. bin Thabit, R. Whitley, and H. Radha, "MSU-4S: The Michigan State University Four Seasons dataset," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 22658–22667.
- [11] Z. Li, Y. Yao, Z. Quan, L. Qi, Z.-H. Feng, and W. Yang, "Adaptation via proxy: Building instance-aware proxy for unsupervised domain adaptive 3D object detection," IEEE Transactions on Intelligent Vehicles, vol. 9, no. 2, pp. 3478–3492, 2023.
- [12] Q. Hu, D. Liu, and W. Hu, "Density-insensitive unsupervised domain adaption on 3D object detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 17556–17566.
- [13] Y. Wang, X. Chen, Y. You, L. E. Li, B. Hariharan, M. Campbell, K. Q. Weinberger, and W.-L. Chao, "Train in Germany, test in the USA: Making 3D object detectors generalize," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11713–11723.
- [14] J. Yang, S. Shi, Z. Wang, H. Li, and X. Qi, "ST3D: Self-training for unsupervised domain adaptation on 3D object detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 10368–10378.
- [15] J. Yang, S. Shi, Z. Wang, H. Li, and X. Qi, "ST3D++: Denoised self-training for unsupervised domain adaptation on 3D object detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 45, no. 5, pp. 6354–6371, 2022.
- [16] X. Lu and H. Radha, "DALI: Domain adaptive LiDAR object detection via distribution-level and instance-level pseudo-label denoising," IEEE Transactions on Robotics, vol. 40, pp. 3866–3878, 2024.
- [17] C. Zhang, H. Wang, Y. Cai, L. Chen, Y. Li, M. A. Sotelo, and Z. Li, "Robust-FusionNet: Deep multimodal sensor fusion for 3-D object detection under severe weather conditions," IEEE Transactions on Instrumentation and Measurement, vol. 71, pp. 1–13, 2022.
- [18] X. Huang, Z. Xu, H. Wu, J. Wang, Q. Xia, Y. Xia, J. Li, K. Gao, C. Wen, and C. Wang, "L4DR: LiDAR-4DRadar fusion for weather-robust 3D object detection," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 39, no. 4, 2025, pp. 3806–3814.
- [19] M. Hahner, C. Sakaridis, M. Bijelic, F. Heide, F. Yu, D. Dai, and L. Van Gool, "LiDAR snowfall simulation for robust 3D object detection," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 16364–16374.
- [20] V. Kilic, D. Hegde, A. B. Cooper, V. M. Patel, and M. Foster, "LiDAR light scattering augmentation (LISA): Physics-based simulation of adverse weather conditions for 3D object detection," in ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025, pp. 1–5.
- [21] M. Hahner, C. Sakaridis, D. Dai, and L. Van Gool, "Fog simulation on real LiDAR point clouds for 3D object detection in adverse weather," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 15283–15292.
- [22] A. Piroli, V. Dallabetta, M. Walessa, D. Meissner, J. Kopp, and K. Dietmayer, "Robust 3D object detection in cold weather conditions," in 2022 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2022, pp. 287–294.
- [23] S. Shi, L. Jiang, J. Deng, Z. Wang, C. Guo, J. Shi, X. Wang, and H. Li, "PV-RCNN++: Point-voxel feature set abstraction with local vector representation for 3D object detection," arXiv preprint arXiv:2102.00463, 2021.
- [24] OpenPCDet, "OpenPCDet: An open-source toolbox for 3D object detection from point clouds," https://github.com/open-mmlab/OpenPCDet, 2020.