Mid-training LLMs on subdomain clinical text after general pre-training, before task fine-tuning, improves radiology report summarization over direct fine-tuning, as measured by ROUGE-L and RadGraph-F1.
Keywords: pre-training, fine-tuning
1 Pith paper cites this work (polarity classification is still indexing).
Fields: cs.CL · Year: 2026 · Verdict: UNVERDICTED (1 representative citing paper)
Improving Automatic Summarization of Radiology Reports through Mid-Training of Large Language Models
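The summary's ROUGE-L metric is the F-measure over the longest common subsequence (LCS) between a candidate summary and the reference. A minimal sketch of the sentence-level computation, assuming whitespace tokenization (function names are illustrative, not from the paper):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence, via dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            # Extend the LCS on a match; otherwise carry the best so far.
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def rouge_l(reference, candidate):
    """Sentence-level ROUGE-L F1 between two whitespace-tokenized strings."""
    ref, cand = reference.split(), candidate.split()
    if not ref or not cand:
        return 0.0
    lcs = lcs_len(ref, cand)
    if lcs == 0:
        return 0.0
    precision = lcs / len(cand)
    recall = lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

# Toy radiology-style example (hypothetical strings):
# reference "no acute cardiopulmonary process" vs candidate "no acute process"
# gives LCS = 3, precision = 1.0, recall = 0.75, F1 = 6/7.
```

RadGraph-F1, by contrast, scores overlap of extracted clinical entities and relations rather than token subsequences, so it is not reproducible in a few lines.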