Fields: cs.CL
Year: 2026
Verdict: unverdicted
Cited by: 1 Pith paper (Hint-Before-Solving); polarity classification is still indexing.
MoRI: Learning Motivation-Grounded Reasoning for Scientific Ideation in Large Language Models
MoRI improves scientific ideation in LLMs by first training models via SFT to generate explicit motivations, then applying RL with a composite reward that combines entropy-aware information gain and contrastive semantic alignment; the resulting ideas score higher on novelty, rigor, and feasibility than baselines.
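The summary mentions a composite RL reward combining entropy-aware information gain with contrastive semantic alignment, but the paper's exact formulation is not reproduced on this page. A minimal Python sketch of one plausible reading, where the function names, the weighting scheme, and the toy distributions/embeddings are all illustrative assumptions rather than the paper's definitions:

```python
import math

def entropy(probs):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def composite_reward(prior_probs, posterior_probs,
                     idea_emb, motivation_emb, negative_emb,
                     alpha=0.5):
    """Hypothetical composite reward (not the paper's formula):
    - information gain: entropy reduction from a prior belief
      distribution to the posterior after seeing the idea;
    - contrastive alignment: the idea embedding should sit closer
      to its own motivation than to an unrelated (negative) one.
    alpha is an assumed mixing weight."""
    info_gain = entropy(prior_probs) - entropy(posterior_probs)
    alignment = (cosine(idea_emb, motivation_emb)
                 - cosine(idea_emb, negative_emb))
    return alpha * info_gain + (1 - alpha) * alignment
```

On a toy example, a posterior sharper than a uniform prior plus an idea aligned with its motivation yields a positive reward, while a flat posterior or a misaligned idea drives it toward zero or below.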