ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic
1 Pith paper cites this work. Polarity classification is still indexing.
Fields: cs.CL · Year: 2026 · Verdict: UNVERDICTED
Sentiment Classification of Gaza War Headlines: A Comparative Analysis of Large Language Models and Arabic Fine-Tuned BERT Models
LLMs classify Gaza War headlines as strongly negative while fine-tuned Arabic BERT models favor neutral labels, producing measurable non-random divergences in sentiment distributions.