1 Pith paper cites this work (field: cs.AI; verdict: unverdicted):
Beltran-Hernandez CC, Petit D, Ramirez-Alpizar IG, Harada K (2022) Accelerating Robot Learning of Contact-Rich Manipulations: A Curriculum Learning Study
Align Generative Artificial Intelligence with Human Preferences: A Novel Large Language Model Fine-Tuning Method for Online Review Management
A preference fine-tuning method for LLMs that combines context augmentation, theory-driven preference pair construction, curriculum learning, and a density estimation support constraint to produce domain-aligned review responses with reduced hallucinations and over-conservatism.
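The entry only summarizes the method at a high level. As a loose illustrative sketch (not the paper's actual algorithm), preference-pair construction with a density-based support constraint and a curriculum ordering might look like the following; every function name, the quality scores, the 1-D "embedding" coordinates, and all thresholds are invented for this example:

```python
import math

def kde_log_density(x, support_points, bandwidth=0.5):
    # Gaussian kernel density estimate over 1-D "embedding" coordinates;
    # responses far from the training support receive very low density.
    n = len(support_points)
    total = sum(
        math.exp(-((x - p) ** 2) / (2.0 * bandwidth ** 2))
        for p in support_points
    )
    return math.log(total / (n * bandwidth * math.sqrt(2.0 * math.pi)) + 1e-300)

def build_pairs(prompt_candidates, support_points, min_log_density):
    # prompt_candidates: {prompt: [(response, quality_score, embedding), ...]}
    # Support constraint: drop responses whose estimated density is too low
    # (a stand-in for rejecting hallucinated or off-domain text).
    pairs = []
    for prompt, cands in prompt_candidates.items():
        kept = [
            c for c in cands
            if kde_log_density(c[2], support_points) >= min_log_density
        ]
        if len(kept) < 2:
            continue
        # Theory-driven scoring is abstracted into quality_score: pair the
        # best surviving response (chosen) with the worst (rejected).
        kept.sort(key=lambda c: c[1])
        rejected, chosen = kept[0], kept[-1]
        margin = chosen[1] - rejected[1]
        pairs.append((prompt, chosen[0], rejected[0], margin))
    # Curriculum: present easy pairs (large score margin) before hard ones.
    pairs.sort(key=lambda p: p[3], reverse=True)
    return pairs
```

For example, a candidate whose embedding sits far from every support point is filtered out before pairing, and the remaining pairs are emitted easiest-first:

```python
data = {
    "p1": [("a", 0.9, 0.0), ("b", 0.1, 0.1), ("c", 0.95, 5.0)],  # "c" is off-support
    "p2": [("d", 0.6, 0.2), ("e", 0.4, 0.0)],
}
pairs = build_pairs(data, support_points=[0.0, 0.1, 0.2], min_log_density=-5.0)
```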