Survey on Disaster Management Datasets for Remote Sensing Based Emergency Applications
Pith reviewed 2026-05-12 01:21 UTC · model grok-4.3
The pith
A survey assembles a reference list of image datasets from satellites and drones to train AI systems for managing disasters at every stage.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
This survey provides a comprehensive overview of publicly available image-based datasets relevant to ML/DL-based disaster management pipelines. Emphasis is placed on datasets that support computer vision and remote sensing tasks across the pre-disaster, during-disaster, and post-disaster phases. The goal is to serve as a centralized reference for researchers and practitioners seeking high-quality datasets for rapid development and deployment of remote sensing-driven disaster response solutions.
What carries the argument
The categorized compilation of datasets, organized by disaster phase and computer vision task, which acts as a centralized reference for data selection across mitigation, preparedness, detection, response, and recovery.
If this is right
- Researchers can identify suitable datasets for specific disaster management phases without extensive separate searches.
- Development of models for rapid detection and situational assessment can proceed more quickly using existing annotated imagery.
- Practitioners gain a single source to support deployment of remote sensing solutions in mitigation through recovery.
- Effort duplication across groups working on similar computer vision tasks for emergencies is reduced.
- Better matching of datasets to UAV or satellite sources improves coverage for pre-event and post-event analysis.
Where Pith is reading between the lines
- Gaps in dataset coverage for certain disaster types or phases could guide targeted collection of new public imagery and annotations.
- This list might encourage community updates over time as new datasets from recent events become available.
- Linking the surveyed datasets to specific model performance benchmarks could highlight which phases most need additional data volume.
- Operational use in real emergencies might expose practical issues like data licensing or format compatibility not covered in the survey.
Load-bearing premise
The compilation is complete and up-to-date, and the listed datasets have sufficient quality and annotation levels to support practical ML/DL pipelines across all disaster phases.
What would settle it
Finding multiple high-quality, relevant remote sensing datasets that were omitted from the survey or discovering that the majority of listed datasets lack annotations detailed enough to train models that generalize to real disaster imagery.
Original abstract
Recent natural disasters have highlighted the urgent need for efficient data-driven approaches to disaster management. Machine learning (ML) and deep learning (DL) techniques have shown considerable promise in enhancing the key phases of disaster management including mitigation, preparedness, detection, response, and recovery. A critical enabler of successful ML or DL based applications in remote sensing, however, is the accessibility and quality of annotated datasets. With the growing availability of high-resolution imagery from unmanned aerial vehicles (UAVs) and satellites, computer vision and remote sensing algorithms have become essential tools for rapid detection, situational assessment, and decision-making in disaster scenarios. This survey provides a comprehensive overview of publicly available image-based datasets relevant to ML/DL-based disaster management pipelines. Emphasis is placed on datasets that support computer vision and remote sensing tasks across all phases of disaster events including pre-disaster, during, and post-disaster. The goal of this work is to serve as a centralized reference for researchers and practitioners seeking high-quality datasets for rapid development and deployment of remote sensing-driven disaster response solutions.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript is a survey paper that claims to provide a comprehensive overview of publicly available image-based datasets for machine learning and deep learning applications in remote sensing-based disaster management. It emphasizes datasets supporting computer vision tasks across the phases of mitigation, preparedness, detection, response, and recovery, with the goal of serving as a centralized reference for researchers developing remote sensing-driven solutions.
Significance. A well-curated and methodologically transparent survey of this type could serve as a useful reference resource for the remote sensing and computer vision communities working on disaster applications, particularly if it systematically organizes datasets by disaster phase and task type while confirming public accessibility and annotation quality. The paper does not ship machine-checked proofs, reproducible code, or parameter-free derivations, but its potential value lies in the compiled list itself if completeness can be substantiated.
major comments (1)
- [Abstract and Introduction] The central claim that the survey provides a 'comprehensive overview' of publicly available datasets is not supported by any description of the curation process. No search protocol, queried databases, keywords, inclusion/exclusion criteria, cutoff date, or coverage statistics (e.g., number of datasets per disaster type or phase) are provided. This is load-bearing because the entire contribution rests on the representativeness of the listed datasets; without this information, selection bias cannot be assessed and the claim cannot be evaluated.
minor comments (2)
- [Abstract] The abstract would be strengthened by including quantitative scope information, such as the total number of datasets reviewed or the breakdown by disaster phase, to allow readers to gauge coverage immediately.
- Ensure that every listed dataset includes explicit statements on current public accessibility, license, and annotation quality (e.g., pixel-level vs. image-level labels) to support the claim that they are suitable for practical ML/DL pipelines.
Simulated Author's Rebuttal
We thank the referee for the detailed and constructive review. We agree that the claim of a 'comprehensive overview' requires explicit methodological transparency to allow assessment of representativeness and potential bias. We address the single major comment below.
Point-by-point responses
Referee: [Abstract and Introduction] The central claim that the survey provides a 'comprehensive overview' of publicly available datasets is not supported by any description of the curation process. No search protocol, queried databases, keywords, inclusion/exclusion criteria, cutoff date, or coverage statistics (e.g., number of datasets per disaster type or phase) are provided. This is load-bearing because the entire contribution rests on the representativeness of the listed datasets; without this information, selection bias cannot be assessed and the claim cannot be evaluated.
Authors: We agree that the absence of a documented curation process weakens the central claim. In the revised manuscript we will insert a new subsection (placed after the Introduction) titled 'Survey Methodology' that explicitly describes: (1) the search protocol, including databases and repositories queried (Google Scholar, IEEE Xplore, arXiv, Kaggle, Hugging Face Datasets, GitHub, and major remote-sensing data portals); (2) the keyword combinations used (e.g., 'disaster dataset remote sensing', 'flood UAV imagery', 'earthquake satellite dataset', 'post-disaster damage assessment dataset'); (3) inclusion criteria (publicly accessible, image-based, annotated for computer-vision tasks, relevance to at least one disaster-management phase) and exclusion criteria (proprietary data, non-image modalities, purely synthetic datasets without real-world validation); (4) the cutoff date for dataset inclusion; and (5) quantitative coverage statistics (total datasets retained, breakdown by disaster type and by phase). These additions will enable readers to evaluate selection bias and will be cross-referenced in the Abstract and Introduction.
Revision: yes
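The inclusion/exclusion criteria the authors promise could be made auditable by expressing them as an executable filter over candidate dataset records. The sketch below is purely illustrative and not from the manuscript; the record fields, the cutoff year, and the example entries are hypothetical assumptions for demonstration.

```python
# Illustrative sketch of the proposed curation criteria as a filter.
# All field names (e.g., "publicly_accessible", "phases") and the
# cutoff year are hypothetical, not taken from the manuscript.

CUTOFF_YEAR = 2025  # assumed inclusion cutoff date

def is_included(record: dict) -> bool:
    """Return True if a candidate dataset passes the survey's criteria."""
    inclusion = (
        record.get("publicly_accessible", False)      # publicly accessible
        and record.get("modality") == "image"          # image-based
        and record.get("annotated", False)             # CV-task annotations
        and len(record.get("phases", [])) >= 1         # >=1 disaster phase
        and record.get("year", 0) <= CUTOFF_YEAR       # within cutoff
    )
    exclusion = (
        record.get("proprietary", False)               # proprietary data
        or (record.get("synthetic", False)             # purely synthetic,
            and not record.get("real_world_validated", False))  # unvalidated
    )
    return inclusion and not exclusion

# Hypothetical candidate records for demonstration only.
candidates = [
    {"name": "FloodUAV", "publicly_accessible": True, "modality": "image",
     "annotated": True, "phases": ["response"], "year": 2022},
    {"name": "ClosedSAR", "publicly_accessible": False, "modality": "image",
     "annotated": True, "phases": ["detection"], "year": 2021,
     "proprietary": True},
]
retained = [c["name"] for c in candidates if is_included(c)]
```

A filter of this shape would also make the promised coverage statistics (item 5 of the response) reproducible, since the retained list can be grouped by disaster type and phase directly.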
Circularity Check
No circularity: purely descriptive survey with no derivations or fitted claims
Full rationale
This paper is a survey that compiles and describes publicly available image-based datasets for ML/DL-based disaster management. It contains no equations, predictions, first-principles derivations, fitted parameters, or quantitative claims that could reduce to inputs by construction. The central contribution is the curated list itself, presented without any self-referential logic, self-citation load-bearing premises, or renaming of results. No steps match the enumerated circularity patterns; the paper is self-contained as a descriptive reference.