Title resolution pending
Two Pith papers cite this work; polarity classification is still indexing.
Citing Pith papers: 2 (years: 2026; verdicts: 2, unverdicted). Representative citing papers: 2.
Citing papers:
- Selective Off-Policy Reference Tuning with Plan Guidance
  SORT turns all-wrong prompts into selective learning signals by weighting tokens that are more predictable under plan guidance from reference solutions, improving over GRPO on reasoning benchmarks, especially for weaker models.
- ExpThink: Experience-Guided Reinforcement Learning for Adaptive Chain-of-Thought Compression
  ExpThink applies experience-tracked rewards and correct-count-normalized advantages in RL to compress CoT reasoning, cutting length by up to 77% while raising accuracy and the efficiency ratio on math benchmarks.
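The SORT summary mentions weighting reference-solution tokens by how much more predictable they become under plan guidance. As a rough illustration only (the function name, the log-prob-gain weighting, and the mean-one normalization are all assumptions, not the paper's actual algorithm), such a weighting could look like:

```python
import math

def sort_token_weights(logp_plain, logp_plan, temperature=1.0):
    """Hypothetical sketch: weight each token of a reference solution by
    how much more predictable it is with plan guidance than without.
    logp_plain / logp_plan: per-token log-probs of the reference solution
    under the model without / with the plan in context."""
    # gain_i > 0 when plan guidance makes token i more predictable
    gains = [(b - a) / temperature for a, b in zip(logp_plain, logp_plan)]
    exps = [math.exp(g) for g in gains]
    z = sum(exps)
    n = len(gains)
    # softmax over tokens, rescaled so the weights average to 1.0;
    # plan-predictable tokens then contribute more to the loss
    return [n * e / z for e in exps]
```

On an all-wrong prompt, these weights could then scale a per-token supervised loss on the reference solution, so learning focuses on the tokens the plan actually explains.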
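The ExpThink summary mentions correct-count-normalized advantages and length compression. A minimal sketch of what such normalization might mean (the scaling by correct count, the linear length penalty, and the parameter `alpha` are assumptions, not the paper's exact formulation):

```python
def correct_count_normalized_advantages(rewards, correct):
    """Hypothetical sketch: within a group of rollouts for one prompt,
    center rewards at the group mean and scale by the number of correct
    rollouts, instead of the reward std used by GRPO-style baselines."""
    mean_r = sum(rewards) / len(rewards)
    scale = max(sum(correct), 1)  # avoid division by zero for all-wrong groups
    return [(r - mean_r) / scale for r in rewards]

def length_shaped_reward(is_correct, length, max_len=1000, alpha=0.5):
    """Hypothetical shaped reward: correctness minus a linear length
    penalty, so shorter correct chains of thought score higher."""
    return float(is_correct) - alpha * length / max_len
```

Under this sketch, a group with many correct rollouts gets smaller per-rollout advantages, damping updates on easy prompts, while the length penalty pushes the policy toward shorter reasoning.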