findings-emnlp.913/
2 Pith papers cite this work. Polarity classification is still indexing.
Citing papers:
-
Hiding in Plain Sight: Detectability-Aware Antidistillation of Reasoning Models
TraceGuard formulates antidistillation as a detectability-constrained Stackelberg game, poisoning sparsely located thought anchors identified via branching-token detection to degrade student models while preserving trace quality.
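The summary above only names "branching-token detection" without defining it; the paper's actual detector is not reproduced here. As a hedged illustration, a minimal entropy-based stand-in (the function name, input format, and threshold are all hypothetical, not from the paper) could flag positions where the next-token distribution is flat, i.e. where the reasoning could branch:

```python
import math

def branching_positions(token_dists, entropy_threshold=1.5):
    """Return indices of 'branching' tokens: positions where the
    next-token distribution has high entropy (many plausible
    continuations), a common proxy for reasoning branch points.

    token_dists: one dict per trace position, mapping candidate
    token -> probability.
    """
    anchors = []
    for i, dist in enumerate(token_dists):
        # Shannon entropy in bits of the next-token distribution.
        entropy = -sum(p * math.log2(p) for p in dist.values() if p > 0)
        if entropy >= entropy_threshold:
            anchors.append(i)
    return anchors

# A peaked distribution (one dominant continuation) vs. a flat,
# branching one (four equally likely continuations).
peaked = {"the": 0.97, "a": 0.02, "an": 0.01}
flat = {"so": 0.25, "but": 0.25, "then": 0.25, "alternatively": 0.25}
print(branching_positions([peaked, flat, peaked]))  # -> [1]
```

Only the flat position is flagged (its entropy is 2.0 bits, versus roughly 0.2 bits for the peaked one), which matches the summary's claim that poisoned anchors are sparsely located.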
-
DialectLLM: A Dialect-Aware Dialogue Generation Framework Beyond Standard American English
DialectLLM generates parallel multi-dialect dialogue data and a 50k-dialog benchmark on which frontier LLMs achieve under 70% accuracy on dialect tasks; the generated data can also improve post-training.