AusSmoke meets MultiNatSmoke: a fully-labelled diverse smoke segmentation dataset
Pith reviewed 2026-05-08 06:47 UTC · model grok-4.3
The pith
New AusSmoke dataset from Australia joins international data to create a ten-times-larger smoke segmentation benchmark.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We present AusSmoke, a new smoke segmentation dataset collected from Australia to address the data scarcity in this region. Furthermore, we introduce a MultiNational geographically diverse and substantially larger fully-labelled benchmark, called MultiNatSmoke, that consolidates publicly available international datasets with the newly collected Australian imagery, expanding the scale by an order of magnitude over previous collections. Finally, we benchmark smoke segmentation models, demonstrating improved performance and enhanced generalization across diverse geographical contexts.
What carries the argument
The integration of the new AusSmoke Australian smoke images with existing international datasets to form the larger MultiNatSmoke benchmark for training and evaluating smoke segmentation models.
If this is right
- Smoke segmentation models achieve improved accuracy when trained on the expanded dataset.
- Models exhibit better generalization to smoke appearances in varied geographical locations.
- The use of real imagery reduces reliance on synthetic data for training.
- The order-of-magnitude scale increase supports development of more robust detection systems.
Where Pith is reading between the lines
- Dataset merging strategies like this may help in other areas of environmental AI where data is scarce.
- Models could be tested in operational camera systems for real-time wildfire monitoring.
- Additional validation datasets from new continents would strengthen the generalization evidence.
Load-bearing premise
The labels provided for the Australian images and the consolidated datasets are consistent and free of systematic biases that could affect model training or evaluation.
What would settle it
An evaluation on smoke images from a new, unseen region. If models trained on MultiNatSmoke fail to outperform those trained on previous datasets in that setting, the paper's generalization claim does not hold.
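The settling test above can be sketched as a simple decision rule over held-out-region scores; the function name and the score dictionaries below are illustrative, not from the paper.

```python
# Hypothetical sketch of the settling test: hold out one region entirely,
# then compare a MultiNatSmoke-trained model against a baseline trained on
# prior single-region data. All names and numbers are placeholders.

def generalization_claim_holds(multinat_scores, baseline_scores, margin=0.0):
    """Return True if the MultiNatSmoke-trained model beats the baseline
    on every held-out region by more than `margin` (e.g. mIoU points)."""
    regions = set(multinat_scores) & set(baseline_scores)
    return all(
        multinat_scores[r] - baseline_scores[r] > margin for r in regions
    )

# Illustrative held-out-region mIoU scores (placeholder numbers).
multinat = {"south_america": 0.62, "africa": 0.58}
baseline = {"south_america": 0.55, "africa": 0.59}

# Here the claim would NOT be settled in the paper's favour: the baseline
# wins on one of the held-out regions.
print(generalization_claim_holds(multinat, baseline))  # False
```

The point of the per-region `all(...)` is that a single unseen region where the baseline wins is enough to falsify the generalization claim, even if the multinational model is better on average.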
Original abstract
Wildfires are an escalating global concern due to the devastating impacts on the environment, economy, and human health, with notable incidents such as the 2019-2020 Australian bushfires and the 2025 California wildfires underscoring the severity of these events. AI-enabled camera-based smoke detection has emerged as a promising approach for the rapid detection of wildfires. However, existing wildfire smoke segmentation datasets that are used for training detection and segmentation models are limited in scale, geographically constrained, and often rely on synthetic imagery, which hinders effective training and generalization. To overcome these limitations, we present AusSmoke, a new smoke segmentation dataset collected from Australia to address the data scarcity in this region. Furthermore, we introduce a MultiNational geographically diverse and substantially larger fully-labelled benchmark, called MultiNatSmoke, that consolidates publicly available international datasets with the newly collected Australian imagery, expanding the scale by an order of magnitude over previous collections. Finally, we benchmark smoke segmentation models, demonstrating improved performance and enhanced generalization across diverse geographical contexts. The project is available on GitHub: https://github.com/henryzhao0615/MultiNatSmoke.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces AusSmoke, a new fully-labelled smoke segmentation dataset collected from Australia to address regional data scarcity. It further presents MultiNatSmoke, a consolidated multinational benchmark that merges AusSmoke with existing international datasets, expanding the overall scale by an order of magnitude. The authors benchmark smoke segmentation models on these resources and claim improved performance together with enhanced generalization across diverse geographical contexts.
Significance. If the dataset labels prove consistent and the benchmarking protocols are shown to be uniform, the work would supply a substantially larger and geographically broader resource for training smoke segmentation models. This could meaningfully improve the robustness of AI-based early wildfire detection systems, particularly by mitigating the current limitations of small-scale or synthetic datasets and by filling gaps in underrepresented regions such as Australia.
Major comments (2)
- Abstract: the headline claim of 'improved performance and enhanced generalization across diverse geographical contexts' is stated without any quantitative results, model architectures, baselines, training details, or performance tables, preventing verification of the central empirical contribution.
- MultiNatSmoke construction and benchmarking sections: the generalization claim requires demonstrated label consistency (no systematic annotation-style differences between AusSmoke and the consolidated international sets) and identical training protocols. No inter-annotator agreement scores, label-harmonization procedure, or ablation confirming fixed hyperparameters (augmentations, epochs, loss weighting) across sources are reported; apparent gains could therefore arise from data volume or protocol artifacts rather than geographical diversity.
Minor comments (1)
- Abstract: the statement that MultiNatSmoke 'expands the scale by an order of magnitude over previous collections' would be strengthened by explicit numerical comparison of prior dataset sizes versus the new total.
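As a minimal illustration of what that explicit comparison would need to show (the image counts below are placeholders, not figures from the paper):

```python
# A minimal check of what "an order of magnitude" requires: the consolidated
# total should be at least ~10x the largest prior collection. The counts
# below are hypothetical placeholders, not numbers from the paper.

def is_order_of_magnitude_larger(new_total, prior_totals, factor=10):
    """Return True if new_total is at least `factor` times the largest
    prior dataset size."""
    return new_total >= factor * max(prior_totals)

prior_dataset_sizes = [1_500, 2_400, 3_000]   # hypothetical prior sets
multinat_total = 30_000                        # hypothetical new total

print(is_order_of_magnitude_larger(multinat_total, prior_dataset_sizes))
```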
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed review. We address each major comment point by point below, indicating where revisions will be made to improve clarity and rigor.
Point-by-point responses
Referee: Abstract: the headline claim of 'improved performance and enhanced generalization across diverse geographical contexts' is stated without any quantitative results, model architectures, baselines, training details, or performance tables, preventing verification of the central empirical contribution.
Authors: We agree that the abstract would benefit from including quantitative context to support the headline claims. In the revised manuscript, we will update the abstract to briefly reference the segmentation models benchmarked, the consistent training protocols applied, and key performance metrics (such as mIoU gains on MultiNatSmoke relative to prior single-region datasets). This will allow immediate verification of the empirical contributions without altering the abstract's length or focus.
Revision: yes
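For context, the mIoU metric this response refers to can be sketched for binary smoke masks as follows; this is the generic two-class definition, not the paper's evaluation code.

```python
# Generic mIoU for binary (smoke vs. background) segmentation masks.
# Not the paper's exact evaluation code; a standard definition for context.
import numpy as np

def binary_miou(pred, gt):
    """Mean IoU over the two classes (smoke, background) of boolean masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    ious = []
    for cls in (True, False):                       # smoke, then background
        p, g = pred == cls, gt == cls
        union = np.logical_or(p, g).sum()
        if union == 0:                              # class absent in both
            continue
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious))

pred = np.array([[1, 1], [0, 0]], bool)
gt   = np.array([[1, 0], [0, 0]], bool)
print(binary_miou(pred, gt))  # smoke IoU 1/2, background IoU 2/3 -> 7/12
```

Skipping a class when its union is empty follows the common convention of averaging only over classes present in prediction or ground truth.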
Referee: MultiNatSmoke construction and benchmarking sections: the generalization claim requires demonstrated label consistency (no systematic annotation-style differences between AusSmoke and the consolidated international sets) and identical training protocols. No inter-annotator agreement scores, label-harmonization procedure, or ablation confirming fixed hyperparameters (augmentations, epochs, loss weighting) across sources are reported; apparent gains could therefore arise from data volume or protocol artifacts rather than geographical diversity.
Authors: We acknowledge the validity of this concern and the need for explicit evidence of label consistency and uniform protocols. The manuscript describes aligning AusSmoke annotations with the guidelines of the incorporated international datasets during MultiNatSmoke construction. However, we did not report inter-annotator agreement scores or provide hyperparameter ablations. In the revision, we will add a dedicated subsection on the label harmonization procedure, explicitly confirm that all benchmarking runs used identical hyperparameters, augmentations, epochs, and loss settings across sources (with a summary table), and include an ablation isolating the effect of geographical diversity from data volume. For inter-annotator agreement, we will report any available metrics from the new AusSmoke annotations and discuss consistency with prior datasets based on protocol alignment; where original annotations from external sources are unavailable for recomputation, we will note this limitation.
Revision: partial
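One agreement metric such a subsection could report is Cohen's kappa between two annotators' binary masks. The sketch below uses the generic formula with hypothetical masks, not the authors' protocol.

```python
# Cohen's kappa between two annotators' binary smoke masks: observed
# pixel agreement corrected for chance agreement. Generic formula with
# hypothetical masks, not the authors' annotation pipeline.
import numpy as np

def cohens_kappa(mask_a, mask_b):
    a = np.asarray(mask_a, bool).ravel()
    b = np.asarray(mask_b, bool).ravel()
    po = np.mean(a == b)                        # observed agreement
    pe = (a.mean() * b.mean()                   # chance: both label smoke
          + (1 - a.mean()) * (1 - b.mean()))    # ...or both background
    if pe == 1.0:                               # degenerate: constant masks
        return 1.0
    return float((po - pe) / (1 - pe))

ann1 = np.array([[1, 1, 0], [0, 0, 0]], bool)  # hypothetical annotator 1
ann2 = np.array([[1, 0, 0], [0, 0, 0]], bool)  # hypothetical annotator 2
print(round(cohens_kappa(ann1, ann2), 3))
```

Because smoke pixels are usually a small fraction of each image, chance-corrected metrics like kappa are more informative than raw pixel agreement, which is inflated by the background class.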
Circularity Check
No circularity: dataset paper with empirical benchmarking only
Full rationale
The paper presents a new dataset (AusSmoke) and a consolidated benchmark (MultiNatSmoke) followed by standard model benchmarking. No derivations, equations, predictions, fitted parameters, or first-principles claims appear in the abstract or described content. The work consists of data collection, consolidation of public datasets, and empirical evaluation of segmentation models. No load-bearing steps reduce to self-definition, fitted inputs renamed as predictions, or self-citation chains. The central claims rest on the existence and scale of the collected data plus observed benchmark numbers, which are externally verifiable through the released dataset and code rather than internally forced by construction.