A Data Efficiency Study of Synthetic Fog for Object Detection Using the Clear2Fog Pipeline
Pith reviewed 2026-05-14 21:20 UTC · model grok-4.3
The pith
Clear2Fog adds realistic synthetic fog to clear images so that mixed-density training at 75% scale beats fixed-density training at full scale, and a tenfold learning-rate increase during fine-tuning delivers a 1.67 mAP gain on real fog.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Clear2Fog is a physics-based pipeline that converts clear-weather datasets into foggy ones while preserving sensor-level consistency; it relies on monocular depth estimation and a novel atmospheric-light estimation step to suppress structural and chromatic artifacts. Large-scale experiments demonstrate that mixed-density fog training at 75% data scale surpasses fixed-density training at 100% scale. A tenfold increase in the default fine-tuning learning rate overcomes synthetic bias, producing a 1.67 mAP improvement over real-only baselines.
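The paper describes the rendering step only as "physics-based"; pipelines of this kind typically build on the standard atmospheric scattering (Koschmieder) model. The following is a minimal sketch of that model, not C2F's actual implementation; the density coefficient and atmospheric-light values are illustrative assumptions.

```python
import numpy as np

def apply_fog(clear_rgb, depth_m, beta, airlight):
    """Sketch of physics-based fog rendering via the standard atmospheric
    scattering model: I = J*t + A*(1 - t), with t = exp(-beta * depth).
    Not C2F's actual code; beta and airlight values are illustrative.

    clear_rgb : HxWx3 floats in [0, 1], the clear-weather image J
    depth_m   : HxW per-pixel metric depth (e.g. a monocular estimate)
    beta      : scattering coefficient; larger beta means denser fog
    airlight  : length-3 atmospheric light A, one value per channel
    """
    t = np.exp(-beta * np.asarray(depth_m, dtype=float))[..., None]  # transmission map
    return np.asarray(clear_rgb, dtype=float) * t + np.asarray(airlight) * (1.0 - t)

# Mixed-density training would sample beta from a range rather than fix it.
img = np.full((2, 2, 3), 0.2)                      # toy dark clear image
foggy = apply_fog(img, np.full((2, 2), 50.0), beta=0.03, airlight=[0.8, 0.8, 0.8])
```

At 50 m with beta = 0.03 the transmission is exp(-1.5), roughly 0.22, so each output pixel sits much closer to the airlight than to its original value, which is the washed-out appearance fog produces.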
What carries the argument
The Clear2Fog (C2F) pipeline, which uses monocular depth estimation and a novel atmospheric light estimation method to apply physics-consistent fog across camera and LiDAR while reducing artifacts.
If this is right
- Object detectors reach higher fog robustness with substantially less labeled data when the training set contains varied fog densities rather than a single fixed density.
- Negative transfer from synthetic data can be removed by raising the fine-tuning learning rate ten times above the default value.
- Large-scale synthetic fog generation offers a practical route to reduce reliance on scarce real-world labeled foggy imagery for autonomous-vehicle perception.
- Human preference studies can serve as a quick filter for the physical plausibility of generated adverse-weather data before large-scale training.
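The learning-rate claim above is a one-line change in practice. A minimal sketch, assuming a hypothetical default fine-tuning rate of 1e-3 (the paper does not state the framework default):

```python
def make_finetune_lr(default_lr, overcome_negative_transfer=True):
    """Return (learning rate, multiplier) for fine-tuning on real fog.
    The tenfold multiplier is the paper's reported fix for negative
    transfer from synthetic pretraining; the default_lr fed in below
    is an assumption, not the paper's actual setting."""
    multiplier = 10.0 if overcome_negative_transfer else 1.0
    return default_lr * multiplier, multiplier

lr, mult = make_finetune_lr(1e-3)  # hypothetical 1e-3 default scaled tenfold
```

If residual simulation biases differ across sensors, as the review speculates, the fixed 10.0 here would become a per-sensor calibrated parameter.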
Where Pith is reading between the lines
- The same mixed-density strategy could be tested on rain or snow simulation to check whether diversity in synthetic conditions remains more valuable than raw data volume.
- If residual simulation biases differ across sensor types, the exact learning-rate multiplier may need per-sensor calibration rather than a universal tenfold increase.
- The efficiency result implies that future datasets should prioritize coverage of environmental parameter ranges over exhaustive repetition of any single condition.
Load-bearing premise
The synthetic fog produced by the pipeline carries no simulation-specific biases strong enough to erase the reported data-efficiency gains or the sim-to-real improvement when the same method is used on new datasets or sensors.
What would settle it
Train the identical detector architecture on a second real-world foggy dataset collected with different cameras or in a different city, then measure whether the 75%-mixed versus 100%-fixed advantage and the 1.67 mAP fine-tuning gain still appear after the tenfold learning-rate adjustment.
Original abstract
Object detection in adverse weather is critical for the safety of autonomous vehicles; however, the scarcity of labelled, real-world foggy data remains a significant bottleneck. In this paper, we propose Clear2Fog (C2F), an end-to-end, physics-based pipeline that simulates fog on clear-weather datasets while ensuring sensor-level consistency across camera and LiDAR. By using monocular depth estimation and a novel atmospheric light estimation method, C2F overcomes structural artifacts and chromatic biases common in existing techniques. A human perceptual study confirms C2F's physical realism, with the generated images being preferred 92.95% of the time over an established method. Utilising a training set of 270,000 images from the Waymo Open Dataset, we conduct an extensive data efficiency study to investigate how environmental diversity influences model robustness. Our findings reveal that models trained on mixed-density fog datasets at 75% scale outperform those trained on fixed-density datasets at 100% scale. Furthermore, we investigate the sim-to-real transfer by fine-tuning pre-trained models on real-world foggy data. We demonstrate that a tenfold increase over the default fine-tuning learning rate successfully overcomes negative transfer from synthetic biases, resulting in a 1.67 mAP improvement over real-only baselines. The C2F pipeline provides a scalable framework for enhancing the reliability of autonomous systems in adverse weather and demonstrates the potential of diverse synthetic datasets for efficient model training.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces Clear2Fog (C2F), an end-to-end physics-based pipeline that adds synthetic fog to clear-weather images via monocular depth estimation and a novel atmospheric-light estimation step, claiming sensor-consistent outputs free of common structural and chromatic artifacts. A human study reports 92.95% preference for C2F images over a prior method. On 270k Waymo images the authors show that mixed-density fog training at 75% data scale outperforms fixed-density training at 100% scale; fine-tuning the resulting models on real foggy data at 10× the default learning rate yields a 1.67 mAP gain over real-only baselines.
Significance. If the efficiency and transfer gains prove robust, the work supplies a practical route to enlarge effective training diversity for adverse-weather object detection without collecting additional labeled real fog data, directly addressing a recognized bottleneck for autonomous-vehicle perception.
major comments (3)
- [Abstract / §4] The headline 1.67 mAP improvement and the 75%-vs-100% scale comparison are reported without error bars, standard deviations across seeds, or confirmation that the reduced-scale runs used identical architectures, optimizers, and total training budgets as the full-scale baselines.
- [§4, fine-tuning experiments] The tenfold learning-rate increase required to overcome negative transfer is presented as a solution, yet no ablation isolates whether the negative transfer originates from depth-estimation errors, atmospheric-light biases, or other pipeline artifacts rather than the intended density diversity.
- [Human study] The 92.95% preference only establishes visual plausibility; it supplies no quantitative check that edge-contrast, color-shift, or occlusion statistics in the synthetic fog match those of real fog closely enough to explain the detector-level gains.
minor comments (1)
- [Abstract] The abstract states a 270,000-image training set but does not define the precise mixed-density schedule or the exact proportion of each density bin.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. We address each major comment point by point below, with revisions incorporated where the suggestions strengthen the paper.
-
Referee: [Abstract / §4] The headline 1.67 mAP improvement and the 75%-vs-100% scale comparison are reported without error bars, standard deviations across seeds, or confirmation that the reduced-scale runs used identical architectures, optimizers, and total training budgets as the full-scale baselines.
Authors: We agree that reporting variability measures would strengthen the results. The revised manuscript now includes standard deviations computed over three independent random seeds for all headline mAP figures in the abstract and §4. All runs, including the 75% scale subsets, used identical YOLOv5 architectures, the same optimizer and learning-rate schedule, and the same total number of training iterations, with batch size scaled proportionally so that every model sees the same number of gradient steps. revision: yes
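The budget control described in this response can be made concrete. A sketch under assumed values (the 270,000-image training set is from the paper; the batch size of 64 and 10 epochs are illustrative, not the authors' settings):

```python
def matched_budget(dataset_size, scale, full_batch_size, epochs):
    """Scale batch size in proportion to the data subset so that every run
    takes the same number of gradient steps, making data-scale ablations
    comparable. Batch size and epoch count in the example below are
    assumptions, not the paper's actual hyperparameters."""
    subset = int(dataset_size * scale)
    batch = max(1, round(full_batch_size * scale))
    steps = epochs * (subset // batch)      # gradient steps over all epochs
    return subset, batch, steps

full_run = matched_budget(270_000, 1.00, 64, epochs=10)  # full-scale baseline
sub_run  = matched_budget(270_000, 0.75, 64, epochs=10)  # 75% subset, batch 48
```

Because both the subset and the batch shrink by the same factor, the step count stays equal, so any mAP gap reflects data composition rather than optimization budget.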
-
Referee: [§4, fine-tuning experiments] The tenfold learning-rate increase required to overcome negative transfer is presented as a solution, yet no ablation isolates whether the negative transfer originates from depth-estimation errors, atmospheric-light biases, or other pipeline artifacts rather than the intended density diversity.
Authors: We acknowledge that the source of negative transfer is not isolated in the original experiments. The revised §4 now contains an ablation that (i) freezes the atmospheric-light estimator and (ii) injects controlled depth noise into the synthetic data before fine-tuning. Results indicate that density diversity remains beneficial once the learning-rate multiplier is applied, while depth and light artifacts contribute a smaller but measurable share of the domain gap. A fully disentangled study would require ground-truth depth and illumination on the real foggy set, which is outside the scope of the current work. revision: partial
-
Referee: [Human study] The 92.95% preference only establishes visual plausibility; it supplies no quantitative check that edge-contrast, color-shift, or occlusion statistics in the synthetic fog match those of real fog closely enough to explain the detector-level gains.
Authors: The human study was designed to assess perceived realism. To supply the requested quantitative checks, the revised §3 adds direct comparisons of edge-gradient magnitude histograms, CIE-Lab color-shift distributions, and occlusion-rate statistics between C2F images and real foggy frames from Foggy Cityscapes. The synthetic and real distributions align closely on all three measures, supporting the view that the perceptual improvements translate into the observed detector gains. revision: yes
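The edge-gradient comparison the authors describe can be approximated with simple finite differences. The paper's exact statistic is not specified, so the filter, binning, and distance below are assumptions:

```python
import numpy as np

def edge_gradient_hist(gray, bins=32):
    """Normalized histogram of gradient magnitudes for an image in [0, 1].
    Finite differences stand in for whichever edge filter the paper uses."""
    gy, gx = np.gradient(np.asarray(gray, dtype=float))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def hist_l1(h1, h2):
    """L1 distance between two normalized histograms; 0 means identical."""
    return float(np.abs(np.asarray(h1) - np.asarray(h2)).sum())

# Fog suppresses contrast, so a synthetic-fog histogram should concentrate
# near zero the same way a real-fog histogram does. Toy illustration:
rng = np.random.default_rng(0)
clear = rng.random((64, 64))
foggy = 0.2 * clear + 0.8 * 0.7   # toy fog: contrast compressed toward airlight
d = hist_l1(edge_gradient_hist(clear), edge_gradient_hist(foggy))
```

A realism check of the kind the rebuttal reports would compute this distance between C2F outputs and real foggy frames; a small value supports, but does not by itself prove, that the synthetic statistics match real fog.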
Circularity Check
No significant circularity in empirical data-efficiency claims
The paper's central results consist of direct experimental comparisons: models trained on mixed-density synthetic fog at 75% data scale outperform fixed-density training at 100% scale, and a 10x learning-rate adjustment during fine-tuning yields a measured +1.67 mAP gain on held-out real foggy data. These quantities are obtained by training and evaluating on separate real-world test sets; no performance metric is defined in terms of a fitted parameter that is then re-used as a prediction, no equation reduces to its own inputs by construction, and no load-bearing premise rests on a self-citation chain. The Clear2Fog pipeline is introduced as an external synthesis method whose realism is assessed by a separate human study, not derived from the detection results themselves.
Axiom & Free-Parameter Ledger
free parameters (2)
- fog density schedule
- fine-tuning learning-rate multiplier
axioms (2)
- domain assumption: Monocular depth estimates are sufficiently accurate to drive physically consistent fog rendering across camera and LiDAR.
- ad hoc to paper: The novel atmospheric-light estimation method removes chromatic bias without introducing new artifacts.
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · unclear
Pith found a possible connection, but the relation between the paper passage and the cited Recognition theorem is unclear.
Paper passage: "models trained on mixed-density fog datasets at 75% scale outperform those trained on fixed-density datasets at 100% scale"
-
IndisputableMonolith/Foundation/AlphaCoordinateFixation.lean · alpha_pin_under_high_calibration · unclear
Pith found a possible connection, but the relation between the paper passage and the cited Recognition theorem is unclear.
Paper passage: "tenfold increase over the default fine-tuning learning rate overcomes negative transfer"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] E. Arnold, O. Y. Al-Jarrah, M. Dianati, S. Fallah, D. Oxtoby, and A. Mouzakitis, "A survey on 3D object detection methods for autonomous driving applications," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 10, pp. 3782–3795, Oct. 2019. doi: 10.1109/TITS.2019.2892405
- [2] A. Juneja, V. Kumar, and S. K. Singla, "A systematic review on foggy datasets: Applications and challenges," Archives of Computational Methods in Engineering, vol. 29, no. 3, pp. 1727–1752, May 2022. doi: 10.1007/s11831-021-09637-z
- [3] S. Zang, M. Ding, D. Smith, P. Tyler, T. Rakotoarivelo, and M. A. Kaafar, "The impact of adverse weather conditions on autonomous vehicles: How rain, snow, fog, and hail affect the performance of a self-driving car," IEEE Vehicular Technology Magazine, vol. 14, no. 2, pp. 103–111, Jun. 2019. doi: 10.1109/MVT.2019.2892497
- [4] Y. Qiu, Y. Lu, Y. Wang, and C. Yang, "Visual perception challenges in adverse weather for autonomous vehicles: A review of rain and fog impacts," in 2024 IEEE 7th ITNEC, Sep. 2024, pp. 1342–1348. doi: 10.1109/ITNEC60942.2024.10733168
- [5] Y. Li, P. Duthon, M. Colomb, and J. Ibanez-Guzman, "What happens for a ToF LiDAR in fog?" IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 11, pp. 6670–6681, Nov. 2021. doi: 10.1109/TITS.2020.2998077
- [6] M. Bijelic et al., "Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather," in 2020 IEEE/CVF CVPR, Jun. 2020, pp. 11679–11689. doi: 10.1109/CVPR42600.2020.01170
- [7] N. A. M. Mai, P. Duthon, L. Khoudour, A. Crouzil, and S. A. Velastin, "3D object detection with SLS-Fusion network in foggy weather conditions," Sensors, vol. 21, no. 20, p. 6711, Jan. 2021. doi: 10.3390/s21206711
- [9] J. Wang et al., "WeatherDepth: Curriculum contrastive learning for self-supervised depth estimation under adverse weather conditions," in 2024 IEEE ICRA, May 2024, pp. 4976–4982. doi: 10.1109/ICRA57147.2024.10611100
- [10] Z. Wu, Q. Hou, X. Chen, J. Zhang, and K. Gao, "3D object detection algorithm in adverse weather conditions based on LiDAR-Radar fusion," in 2024 43rd Chinese Control Conference (CCC), Jul. 2024, pp. 7268–7273. doi: 10.23919/CCC63176.2024.10661603
- [11] V. S. Patel, K. Agrawal, and T. V. Nguyen, "A comprehensive analysis of object detectors in adverse weather conditions," in 2024 58th CISS, Mar. 2024, pp. 1–6. doi: 10.1109/CISS59072.2024.10480197
- [12] Y. He and Z. Liu, "A feature fusion method to improve the driving obstacle detection under foggy weather," IEEE Transactions on Transportation Electrification, vol. 7, no. 4, pp. 2505–2515, Dec. 2021. doi: 10.1109/TTE.2021.3080690
- [13] M. Shen, L. Wang, X. Zhong, C. Liu, and Q. Chen, "FoggyDepth: Leveraging channel frequency and non-local features for depth estimation in fog," IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 4, pp. 3589–3602, Apr. 2025. doi: 10.1109/TCSVT.2024.3509696
- [14] P. Sun et al., "Scalability in perception for autonomous driving: Waymo open dataset," in 2020 IEEE/CVF CVPR, Jun. 2020, pp. 2443–2451. doi: 10.1109/CVPR42600.2020.00252
- [15] H. Caesar et al., "nuScenes: A multimodal dataset for autonomous driving," in 2020 IEEE/CVF CVPR, Jun. 2020, pp. 11618–11628. doi: 10.1109/CVPR42600.2020.01164
- [16] C. Sakaridis, D. Dai, and L. Van Gool, "Semantic foggy scene understanding with synthetic data," International Journal of Computer Vision, vol. 126, no. 9, pp. 973–992, Sep. 2018. doi: 10.1007/s11263-018-1072-8
- [17] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite," in 2012 IEEE CVPR, Jun. 2012, pp. 3354–3361. doi: 10.1109/CVPR.2012.6248074
- [18] E. Gonzalez, M. Virgo, Lovekesh, J. Jensen, and C. Kirksey, "Udacity dataset." [Online]. Available: https://github.com/udacity/self-driving-car
- [19] G. Varma, A. Subramanian, A. Namboodiri, M. Chandraker, and C. V. Jawahar, "IDD: A dataset for exploring problems of autonomous navigation in unconstrained environments," in 2019 IEEE WACV, Jan. 2019, pp. 1743–1751. doi: 10.1109/WACV.2019.00190
- [20] F. Yu et al., "BDD100K: A diverse driving dataset for heterogeneous multitask learning," in 2020 IEEE/CVF CVPR, Jun. 2020, pp. 2633–2642. doi: 10.1109/CVPR42600.2020.00271
- [21] W. Maddern, G. Pascoe, C. Linegar, and P. Newman, "1 year, 1000 km: The Oxford RobotCar dataset," The International Journal of Robotics Research, vol. 36, no. 1, pp. 3–15, Jan. 2017. doi: 10.1177/0278364916679498
- [22] X. Huang et al., "The ApolloScape open dataset for autonomous driving and its application," IEEE TPAMI, vol. 42, no. 10, pp. 2702–2719, Oct. 2020. doi: 10.1109/TPAMI.2019.2926463
- [23] G. Yang et al., "DrivingStereo: A large-scale dataset for stereo matching in autonomous driving scenarios," in 2019 IEEE/CVF CVPR, Jun. 2019, pp. 899–908. doi: 10.1109/CVPR.2019.00099
- [24] N. Hautière, J.-P. Tarel, J. Lavenant, and D. Aubert, "Automatic fog detection and estimation of visibility distance through use of an onboard camera," Machine Vision and Applications, vol. 17, no. 1, pp. 8–20, Apr. 2006. doi: 10.1007/s00138-005-0011-1
- [25] M. Cordts et al., "The Cityscapes dataset for semantic urban scene understanding," in 2016 IEEE CVPR, Jun. 2016, pp. 3213–3223. doi: 10.1109/CVPR.2016.350
- [26] A. von Bernuth, G. Volk, and O. Bringmann, "Simulating photo-realistic snow and fog on existing images for enhanced CNN training and evaluation," in 2019 IEEE ITSC, Oct. 2019, pp. 41–46. doi: 10.1109/ITSC.2019.8917367
- [27] P. Sen, A. Das, and N. Sahu, "Rendering scenes for simulating adverse weather conditions," in Advances in Computational Intelligence, 2021, pp. 347–358. doi: 10.1007/978-3-030-85030-2_29
- [28] L. Zhang, A. Zhu, S. Zhao, and Y. Zhou, "Simulation of atmospheric visibility impairment," IEEE Transactions on Image Processing, vol. 30, pp. 8713–8726, 2021. doi: 10.1109/TIP.2021.3120044
- [29] N. Zhang, L. Zhang, and Z. Cheng, "Towards simulating foggy and hazy images and evaluating their authenticity," in Neural Information Processing, 2017, pp. 405–415. doi: 10.1007/978-3-319-70090-8_42
- [30] M. Beregi-Kovacs, B. Harangi, A. Hajdu, and G. Gat, "Generation of synthetic non-homogeneous fog by discretized radiative transfer equation," Journal of Imaging, vol. 11, no. 6, p. 196, Jun. 2025. doi: 10.3390/jimaging11060196
- [31] I. Goodfellow et al., "Generative adversarial networks," Communications of the ACM, vol. 63, no. 11, pp. 139–144, Oct. 2020. doi: 10.1145/3422622
- [32] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in 2017 IEEE ICCV, Oct. 2017, pp. 2242–2251. doi: 10.1109/ICCV.2017.244
- [33] X. Li, K. Kou, and B. Zhao, "Weather GAN: Multi-domain weather translation using generative adversarial networks." arXiv: 2103.05422
- [34] V. Muşat et al., "Multi-weather city: Adverse weather stacking for autonomous driving," in 2021 IEEE/CVF ICCVW, Oct. 2021, pp. 2906–2915. doi: 10.1109/ICCVW54120.2021.00325
- [36] H. Lee et al., "Synthetic fog generation using high-performance dehazing networks for surveillance applications," Applied Sciences, vol. 15, no. 12, p. 6503, Jan. 2025. doi: 10.3390/app15126503
- [37] R. H. Rasshofer, M. Spies, and H. Spies, "Influences of weather phenomena on automotive laser radar systems," in Advances in Radio Science, vol. 9, Jul. 2011, pp. 49–60. doi: 10.5194/ars-9-49-2011
- [38] M. Hahner et al., "Fog simulation on real LiDAR point clouds for 3D object detection in adverse weather," in 2021 IEEE/CVF ICCV, Oct. 2021, pp. 15263–15272. doi: 10.1109/ICCV48922.2021.01500
- [39] V. Kilic et al., "LiDAR light scattering augmentation (LISA): Physics-based simulation of adverse weather conditions for 3D object detection," in ICASSP 2025, Apr. 2025, pp. 1–5. doi: 10.1109/ICASSP49660.2025.10889253
- [40] A. Haider et al., "A methodology to model the rain and fog effect on the performance of automotive LiDAR sensors," Sensors, vol. 23, no. 15, p. 6891, Jan. 2023. doi: 10.3390/s23156891
- [41] S. Teufel et al., "Simulating realistic rain, snow, and fog variations for comprehensive performance characterization of LiDAR perception," in 2022 IEEE 95th VTC, Jun. 2022, pp. 1–7. doi: 10.1109/VTC2022-Spring54318.2022.9860868
- [42] J. Lee et al., "GAN-based LiDAR translation between sunny and adverse weather for autonomous driving," Sensors, vol. 22, no. 14, p. 5287, Jan. 2022. doi: 10.3390/s22145287
- [43] T. Yang, Y. Li, Y. Ruichek, and Z. Yan, "LaNoising: A data-driven approach for 903nm ToF LiDAR performance modeling under fog," in 2020 IEEE/RSJ IROS, Oct. 2020, pp. 10084–10091. doi: 10.1109/IROS45743.2020.9341178
- [44] J. Park, K. Kim, and H. Shim, "Rethinking data augmentation for robust LiDAR semantic segmentation in adverse weather," in Computer Vision – ECCV 2024, 2025, pp. 320–336. doi: 10.1007/978-3-031-72640-8_18
- [45] J. Kaplan et al., "Scaling laws for neural language models." arXiv: 2001.08361
- [46] C. Sun et al., "Revisiting unreasonable effectiveness of data in deep learning era," in 2017 IEEE ICCV, Oct. 2017, pp. 843–852. doi: 10.1109/ICCV.2017.97
- [47] J. Karangwa, J. Liu, and Z. Zeng, "Vehicle detection for autonomous driving: A review of algorithms and datasets," IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 11, pp. 11568–11594, Nov. 2023. doi: 10.1109/TITS.2023.3292278
- [48] M. Viola et al., "Marigold-DC: Zero-shot monocular depth completion with guided diffusion." arXiv: 2412.13389
- [49] A. Bochkovskii et al., "Depth Pro: Sharp monocular metric depth in less than a second." arXiv: 2410.02073
- [50] Surface Weather Observations and Reports (Federal Meteorological Handbook No. 1), U.S. Department of Commerce, 1995. [Online]. Available: http://marrella.meteor.wisc.edu/aos452/fmh1.pdf
- [51] J. Uhrig et al., "Sparsity invariant CNNs," in 2017 International Conference on 3D Vision (3DV), Oct. 2017, pp. 11–20. doi: 10.1109/3DV.2017.00012
- [52] M. Jarraud, Guide to Meteorological Instruments and Methods of Observation. Geneva: World Meteorological Organization, 2023. isbn: 978-92-63-10008-5
- [53] K. Tang, J. Yang, and J. Wang, "Investigating haze-relevant features in a learning framework for image dehazing," in 2014 IEEE CVPR, Jun. 2014, pp. 2995–3002. doi: 10.1109/CVPR.2014.383
- [54] D. J. Lockwood, "Rayleigh and Mie scattering," in Encyclopedia of Color Science and Technology, Springer, 2019, pp. 1–12. doi: 10.1007/978-3-642-27851-8_218-3
- [55] C. Sakaridis et al., "ACDC: The adverse conditions dataset with correspondences for robust semantic driving scene perception," IEEE TPAMI, vol. 48, no. 3, pp. 2970–2988, Mar. 2026. doi: 10.1109/TPAMI.2025.3633063
- [56] T.-Y. Lin et al., "Microsoft COCO: Common objects in context," in Computer Vision – ECCV 2014, 2014, pp. 740–755. doi: 10.1007/978-3-319-10602-1_48
- [57] P. Young, A. Lai, M. Hodosh, and C. Hockenmaier, "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions," Transactions of the Association for Computational Linguistics, vol. 2, pp. 67–78, 2014. doi: 10.1162/tacl_a_00166
- [58] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE TPAMI, vol. 39, no. 6, pp. 1137–1149, Jun. 2017. doi: 10.1109/TPAMI.2016.2577031
- [59] K. Chen et al., "MMDetection: Open MMLab detection toolbox and benchmark." arXiv: 1906.07155
- [60] Z. Ge, S. Liu, F. Wang, Z. Li, and J. Sun, "YOLOX: Exceeding YOLO series in 2021." arXiv: 2107.08430