Prune-OPD: Efficient and Reliable On-Policy Distillation for Long-Horizon Reasoning
Prune-OPD dynamically prunes unreliable teacher rewards in on-policy distillation by monitoring prefix drift via top-k overlap, reducing training time by 37.6–68% on AMC/AIME/HMMT while preserving or improving performance.
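The summary's core mechanism can be illustrated with a minimal sketch: measure, at each prefix position, how much the top-k token sets of teacher and student overlap, treat low mean overlap as prefix drift, and prune (skip) the teacher reward when drift exceeds a threshold. All names, the choice of k, and the threshold below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def topk_overlap(teacher_logits, student_logits, k=5):
    # Fraction of top-k token ids shared by teacher and student at one position.
    t = set(np.argsort(teacher_logits)[-k:])
    s = set(np.argsort(student_logits)[-k:])
    return len(t & s) / k

def prefix_drift(teacher_seq_logits, student_seq_logits, k=5):
    # Drift over a prefix, defined here (an assumption) as one minus the
    # mean per-position top-k overlap: 0.0 = perfect agreement, 1.0 = disjoint.
    overlaps = [topk_overlap(t, s, k)
                for t, s in zip(teacher_seq_logits, student_seq_logits)]
    return 1.0 - float(np.mean(overlaps))

def keep_teacher_reward(teacher_seq_logits, student_seq_logits,
                        k=5, drift_threshold=0.5):
    # Prune the teacher reward (return False) when drift is too high;
    # drift_threshold is a hypothetical hyperparameter.
    return prefix_drift(teacher_seq_logits, student_seq_logits, k) <= drift_threshold
```

With identical teacher and student logits the drift is 0.0 and the reward is kept; with fully disagreeing top-k sets the drift is 1.0 and the reward is pruned.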