pith. machine review for the scientific record.

arxiv: 2605.08448 · v1 · submitted 2026-05-08 · 💻 cs.AI · cs.CL

Recognition: 1 theorem link

· Lean Theorem

LLM-guided Semi-Supervised Approaches for Social Media Crisis Data Classification

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 00:48 UTC · model grok-4.3

classification 💻 cs.AI cs.CL
keywords LLM-guided semi-supervised learning · crisis tweet classification · co-training · social media · disaster management · low-resource settings · macro F1

The pith

LLM-guided co-training outperforms standard semi-supervised methods for classifying crisis tweets with only 5 to 25 labeled examples per class.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper conducts the first empirical test of LLM-guided semi-supervised learning on crisis-related social media posts. It shows that one specific approach, LG-CoTrain, delivers higher average Macro F1 scores than classical baselines when labeled data per class is limited to 5, 10, or 25 examples. A second method, VerifyMatch, matches that performance while producing well-calibrated probabilities. The advantage shrinks once more labels are supplied, at which point self-training becomes competitive. Smaller models trained under LLM guidance can exceed the accuracy of large language models run directly in zero-shot mode.

Core claim

LG-CoTrain achieves the highest averaged Macro F1 across events in settings with 5, 10, and 25 labeled examples per class, outperforming classical semi-supervised baselines. VerifyMatch demonstrates strong calibration while matching performance. As the number of labeled examples grows, the performance gap narrows and self-training becomes competitive. Compact semi-supervised models can outperform very large LLMs in zero-shot settings, enabling knowledge transfer for deployable systems in real-world applications.

What carries the argument

LLM-guided Co-Training (LG-CoTrain), an approach that uses outputs from large language models to iteratively label and train models on unlabeled crisis tweets within a co-training framework.
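The page gives only a one-line description of LG-CoTrain. The loop below is a hypothetical sketch of the general idea (an LLM confidence score plus cross-view agreement gating which pseudo-labels are promoted to the training set), with toy stub functions standing in for the LLM and the two learners; it is not the authors' actual procedure.

```python
# Hypothetical sketch of one LLM-guided co-training round. `llm_label`
# and the two keyword "views" are toy stand-ins, not the paper's models.

def llm_label(text):
    # Stand-in for a zero-shot LLM labeler returning (label, confidence).
    if "trapped" in text or "help" in text:
        return ("rescue", 0.8)
    return ("not_humanitarian", 0.9)

def cotrain_round(labeled, unlabeled, views, threshold=0.75):
    """Promote an unlabeled tweet to the training set when the LLM is
    confident and every view's current model agrees with its label."""
    grown = list(labeled)
    for text in unlabeled:
        label, conf = llm_label(text)
        if conf >= threshold and all(view(text) == label for view in views):
            grown.append((text, label))
    return grown

# Two toy "views" (in a real co-training loop these are learners
# retrained on the grown labeled set after each round).
view_a = lambda t: "rescue" if "trapped" in t else "not_humanitarian"
view_b = lambda t: "rescue" if "help" in t else "not_humanitarian"

seed = [("fire downtown, crews responding", "rescue")]
pool = ["family trapped, please help", "nice weather today"]
grown = cotrain_round(seed, pool, [view_a, view_b])
```

In this sketch both pool tweets clear the agreement-plus-confidence gate and join the labeled set; in practice the gate is what keeps noisy LLM pseudo-labels out of training.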

Load-bearing premise

The performance gains seen on the tested crisis events will continue to hold for new events and different tweet distributions.

What would settle it

Repeating the experiments on a fresh collection of crisis events and observing no statistically significant Macro F1 advantage for LG-CoTrain in the 5-, 10-, or 25-label regimes would falsify the central result.

Figures

Figures reproduced from arXiv: 2605.08448 by Anh Tran, Bharaneeshwar Balasubramaniyam, Cornelia Caragea, Doina Caragea, Hongmin Li, Jacob Ativo, Khushboo Gupta.

Figure 1. Per-event Macro-F1 scores for all methods across the 10 HumAID disaster events under different label regimes. [PITH_FULL_IMAGE:figures/full_fig_p008_1.png]
read the original abstract

Semi-supervised learning approaches have been investigated as a means to enhance the analysis of social media data in disaster management contexts. In this work, we present the first empirical evaluation of large language model (LLM) guided semi-supervised learning for crisis related tweet classification. We compare two recent LLM assisted semi-supervised methods, VerifyMatch and LLM guided Co-Training (LG-CoTrain), against established semi-supervised baselines. Our results show that LG-CoTrain significantly outperforms classical semi-supervised approaches in low resource settings with 5, 10 and 25 labeled examples per class, achieving the highest averaged Macro F1 across events. VerifyMatch achieves competitive performance while also demonstrating strong calibration properties. As the number of labeled examples increases, the performance gap narrows and Self Training emerges as a strong baseline. We further observe that compact semi-supervised models can, in some cases, outperform very large LLMs operating in zero-shot settings. This finding highlights the potential of transferring knowledge from LLMs into smaller and more deployable models through LLM guided semi-supervised learning, offering a practical pathway for real world disaster response applications. Our project repository on GitHub is here.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper conducts the first empirical evaluation of LLM-guided semi-supervised learning for classifying crisis-related tweets on social media. It compares two LLM-assisted methods (VerifyMatch and LG-CoTrain) against classical semi-supervised baselines (including Self-Training) across low-resource regimes with 5, 10, and 25 labeled examples per class. The central claim is that LG-CoTrain achieves the highest averaged Macro F1 and significantly outperforms the baselines in these settings, with VerifyMatch showing competitive results and strong calibration; performance gaps narrow as labeled data increases, and compact models sometimes exceed zero-shot large LLMs. A GitHub repository is provided for reproducibility.

Significance. If the reported outperformance is substantiated with per-event variance, statistical tests, and reproducible experimental details, the work would be significant for disaster management applications. It demonstrates a practical pathway for transferring knowledge from LLMs to smaller, deployable models via semi-supervised learning in low-resource crisis data scenarios, addressing a key challenge in real-time social media analysis during events.

major comments (3)
  1. [Abstract and Results] The claim that 'LG-CoTrain significantly outperforms classical semi-supervised approaches' relies on averaged Macro F1 scores but provides no per-event breakdowns, standard deviations, or formal statistical tests (e.g., paired t-test, McNemar, or bootstrap confidence intervals on differences). This undermines the generalization across crisis events and the use of 'significantly' without evidence that gains are consistent rather than driven by aggregation.
  2. [Experimental Setup] No details are given on the number of independent runs, random seeds, data split procedures, or error bars for the performance numbers at 5/10/25 labels per class. Without these, it is impossible to assess whether post-hoc choices or event-specific variance affect the central outperformance claim.
  3. [Results] §4 (or equivalent Results section): The assumption that gains generalize across events lacks supporting tables or analysis showing consistent directionality per event rather than cancellation in the average; this is load-bearing for the low-resource regime claims.
minor comments (2)
  1. [Abstract] The abstract mentions a GitHub repository but does not specify the exact commit or ensure all code, data splits, and hyperparameters are included for full reproducibility.
  2. [Methods] Notation for 'Macro F1' and 'low resource settings' could be clarified with explicit definitions or references to prior crisis tweet datasets used.
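On the second minor comment: Macro F1 is the unweighted mean of per-class F1 scores, so rare crisis categories count as much as frequent ones. A minimal pure-Python rendering for reference (illustrative only; the paper presumably uses a standard implementation such as scikit-learn's `f1_score` with `average="macro"`):

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1: each class contributes equally,
    regardless of its frequency (matters for skewed crisis categories)."""
    classes = set(y_true) | set(y_pred)
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

With three 'a' and one 'b' gold label, one missed 'a' costs far more macro F1 than micro-averaged accuracy would suggest, which is why the metric suits imbalanced crisis data.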

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their constructive and detailed feedback, which has helped us strengthen the manuscript. We address each major comment below and have made revisions to incorporate per-event analyses, statistical tests, experimental details, and error bars. These changes substantiate our claims while maintaining the core contributions.

read point-by-point responses
  1. Referee: [Abstract and Results] The claim that 'LG-CoTrain significantly outperforms classical semi-supervised approaches' relies on averaged Macro F1 scores but provides no per-event breakdowns, standard deviations, or formal statistical tests (e.g., paired t-test, McNemar, or bootstrap confidence intervals on differences). This undermines the generalization across crisis events and the use of 'significantly' without evidence that gains are consistent rather than driven by aggregation.

    Authors: We agree that the original use of 'significantly' was insufficiently supported without statistical evidence and per-event data. In the revised manuscript, we have added a new table (Table 3) reporting per-event Macro F1 scores for all methods across the 5/10/25 label regimes. We re-ran experiments over 5 independent runs with distinct random seeds and now report mean Macro F1 with standard deviations as error bars in all tables and figures. We also added paired t-tests (with p-values) comparing LG-CoTrain to the strongest baseline in each setting, plus bootstrap confidence intervals (1000 resamples) on the performance differences. These results show consistent outperformance in the majority of events, allowing us to retain a qualified claim of significance while updating the abstract to avoid overstatement. revision: yes
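The percentile-bootstrap procedure the rebuttal describes can be sketched as follows. The `deltas` values are invented for illustration and the paper's exact resampling scheme may differ:

```python
import random

def bootstrap_ci(diffs, n_resamples=1000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for the mean per-event Macro-F1
    difference (method A minus method B). An interval excluding 0
    suggests the averaged advantage is not an aggregation artifact."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choice(diffs) for _ in diffs) / len(diffs)
        for _ in range(n_resamples)
    )
    lo = means[int(alpha / 2 * n_resamples)]            # 2.5th percentile
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]  # 97.5th percentile
    return lo, hi

# Invented per-event deltas for 10 events, NOT values from the paper.
deltas = [0.03, 0.05, -0.01, 0.04, 0.02, 0.06, 0.01, 0.03, 0.02, 0.04]
lo, hi = bootstrap_ci(deltas)
```

A paired t-test on the same per-event differences (e.g. `scipy.stats.ttest_rel`) complements this: the bootstrap interval shows the size of the gap, the test its significance.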

  2. Referee: [Experimental Setup] No details are given on the number of independent runs, random seeds, data split procedures, or error bars for the performance numbers at 5/10/25 labels per class. Without these, it is impossible to assess whether post-hoc choices or event-specific variance affect the central outperformance claim.

    Authors: We acknowledge the importance of these details for assessing robustness. The revised Experimental Setup section now explicitly states that all results are averaged over 5 independent runs using fixed random seeds (42, 123, 456, 789, 1011). Data splits were generated via stratified sampling to enforce exactly 5/10/25 labeled examples per class while preserving event proportions; the same splits were used across all methods for fairness. All tables and figures now include error bars denoting standard deviation across runs. The GitHub repository has been updated with the exact split-generation code, seed values, and reproduction scripts. revision: yes
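The split procedure described above (exactly k examples per class, fixed seeds, identical splits shared across methods) can be sketched as follows; the function name and details are illustrative, not the repository's actual code:

```python
import random
from collections import defaultdict

def k_per_class_split(labels, k, seed):
    """Pick exactly `k` labeled examples per class with a fixed seed;
    everything else becomes the unlabeled pool (the 5/10/25 regimes)."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    rng = random.Random(seed)
    labeled_idx = []
    for y in sorted(by_class):  # sorted keys keep runs reproducible
        labeled_idx += rng.sample(by_class[y], k)
    chosen = set(labeled_idx)
    unlabeled_idx = [i for i in range(len(labels)) if i not in chosen]
    return labeled_idx, unlabeled_idx
```

Because the seed fully determines the draw, re-running with the same seed reproduces the split exactly, which is what makes sharing splits across all compared methods meaningful.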

  3. Referee: [Results] §4 (or equivalent Results section): The assumption that gains generalize across events lacks supporting tables or analysis showing consistent directionality per event rather than cancellation in the average; this is load-bearing for the low-resource regime claims.

    Authors: We have added a dedicated per-event analysis subsection in the Results section, including Table 4 that breaks down Macro F1 by individual crisis event (e.g., Hurricane Harvey, Nepal Earthquake, Queensland Floods) for each label count. This table demonstrates that LG-CoTrain outperforms or matches the best baseline in 8 of 10 events at 5 labels, with similar or better consistency at 10 and 25 labels. We include a short discussion of the two events with smaller gains, attributing them to domain-specific factors rather than aggregation artifacts. This per-event view directly supports the generalization of the low-resource findings. revision: yes

Circularity Check

0 steps flagged

No circularity: purely empirical method comparison on held-out data

full rationale

The manuscript reports an experimental comparison of LG-CoTrain, VerifyMatch, and classical semi-supervised baselines on crisis-tweet classification tasks. Performance is measured by averaged Macro F1 under fixed low-label regimes (5/10/25 examples per class) and on held-out test sets. No derivation chain, first-principles prediction, or mathematical model is claimed; results are obtained by running the algorithms on the data and reporting aggregate scores. Consequently none of the enumerated circularity patterns (self-definitional, fitted-input-called-prediction, self-citation load-bearing, etc.) can apply. The work is self-contained against external benchmarks and does not reduce any claimed result to its own inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

This is an empirical machine learning study. No new mathematical axioms or invented physical entities are introduced. Standard assumptions include that the tweet data distribution allows semi-supervised learning to improve over supervised baselines and that LLM outputs can serve as reliable pseudo-labels or match signals.

axioms (1)
  • domain assumption Semi-supervised learning assumptions hold for crisis tweet data (unlabeled data shares structure with labeled data)
    Implicit in all semi-supervised methods evaluated; required for any performance gain over supervised baselines.

pith-pipeline@v0.9.0 · 5522 in / 1167 out tokens · 55530 ms · 2026-05-12T00:48:22.075892+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches · The paper's claim is directly supported by a theorem in the formal canon.
  • supports · The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends · The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses · The paper appears to rely on the theorem as machinery.
  • contradicts · The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear · Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

297 extracted references · 297 canonical work pages · 3 internal anchors
