pith. machine review for the scientific record.

arxiv: 1606.07947 · v4 · submitted 2016-06-25 · 💻 cs.CL · cs.LG · cs.NE

Recognition: unknown

Sequence-Level Knowledge Distillation

Authors on Pith: no claims yet
classification 💻 cs.CL · cs.LG · cs.NE
keywords distillation, knowledge, model, performance, teacher, applied, applying, approaches
read the original abstract

Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However, to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance, and somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13 times fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.
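
The abstract distinguishes two regimes: word-level distillation matches the teacher's per-token output distributions, while sequence-level distillation trains the student directly on the teacher's decoded output. A minimal PyTorch sketch of the two losses, under toy assumptions (random logits, and argmax standing in for the teacher's beam search; the variable names are illustrative, not from the paper):

```python
import torch
import torch.nn.functional as F

vocab, seq_len, batch = 1000, 20, 8

# Toy stand-ins: both models expose per-position logits over the target vocabulary.
teacher_logits = torch.randn(batch, seq_len, vocab)                      # frozen teacher
student_logits = torch.randn(batch, seq_len, vocab, requires_grad=True)  # student

# Word-level knowledge distillation: match the teacher's distribution at
# every target position with a KL (equivalently, cross-entropy) term.
word_kd = F.kl_div(
    F.log_softmax(student_logits, dim=-1),
    F.softmax(teacher_logits, dim=-1),
    reduction="batchmean",
)

# Sequence-level knowledge distillation: decode once with the teacher
# (beam search in the paper; greedy argmax here for brevity), then train the
# student with ordinary cross-entropy as if that output were the reference.
teacher_output = teacher_logits.argmax(dim=-1)
seq_kd = F.cross_entropy(
    student_logits.view(-1, vocab),
    teacher_output.view(-1),
)
```

One reading of the abstract's beam-search result: under the sequence-level loss the student learns to place its mode on the teacher's full output sequence rather than merely matching per-token marginals, so greedy decoding from the student already recovers much of what beam search would find.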

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. PACED: Distillation and On-Policy Self-Distillation at the Frontier of Student Competence

    cs.AI 2026-03 conditional novelty 7.0

    PACED applies student pass-rate weighting w(p) = p(1 - p) to distillation, concentrating training on the zone of proximal development and delivering gains of up to +8.2 on AIME tasks with reduced forgetting (the weighting is sketched after this list).

  2. Accelerating Large Language Model Decoding with Speculative Sampling

    cs.CL 2023-02 accept novelty 7.0

    Speculative sampling accelerates LLM decoding by 2-2.5x by letting a draft model propose short sequences that the target model scores in parallel, then applying modified rejection sampling to preserve the exact target distribution (the rejection step is sketched after this list).

  3. TIP: Token Importance in On-Policy Distillation

    cs.LG 2026-04 conditional novelty 6.0

    In on-policy distillation, tokens with high student entropy, or with low entropy but high divergence from the teacher, provide a dense corrective signal, allowing effective training on under 20% of tokens across math and planning tasks (an illustrative selection rule follows this list).

  4. Near-Policy: Accelerating On-Policy Distillation via Asynchronous Generation and Selective Packing

    cs.LG 2026-05 unverdicted novelty 5.0

    NPD runs on-policy distillation 8.1 times faster than baselines by combining asynchronous SFT with Δ-IFD filtering, outperforming standard SFT and enabling a 1B model to reach a 68.73% SOTA score.
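
The pass-rate weighting quoted in the PACED summary (item 1) is simple enough to sketch directly; the function below is a hypothetical illustration, and the surrounding training loop is assumed:

```python
import numpy as np

def competence_weight(pass_rate: np.ndarray) -> np.ndarray:
    """Weight w(p) = p * (1 - p) from the summary above.

    The weight vanishes for problems the student always fails (p = 0) or
    always solves (p = 1), and peaks at p = 0.5, concentrating the
    distillation loss on the zone of proximal development.
    """
    return pass_rate * (1.0 - pass_rate)

p = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(competence_weight(p))  # [0.     0.1875 0.25   0.1875 0.    ]
```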
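
The rejection step referenced in the speculative sampling summary (item 2) can be sketched for a single proposed token; `p_target` and `p_draft` are assumed to be the two models' next-token distributions over the same vocabulary:

```python
import numpy as np

rng = np.random.default_rng(0)

def accept_or_resample(token: int, p_target: np.ndarray, p_draft: np.ndarray) -> int:
    # Accept the draft token x with probability min(1, p_target[x] / p_draft[x]).
    # (The draft model proposed x, so p_draft[x] > 0.)
    if rng.random() < min(1.0, p_target[token] / p_draft[token]):
        return token
    # On rejection, resample from the normalized residual max(0, p_target - p_draft);
    # this correction makes the procedure match the target distribution exactly.
    residual = np.maximum(p_target - p_draft, 0.0)
    return rng.choice(len(p_target), p=residual / residual.sum())
```

Acceptance is decided per token, so a run of accepted draft tokens costs one parallel target-model pass instead of one pass per token, which is where the 2-2.5x speedup comes from.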
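
The TIP summary (item 3) gives only a one-line criterion, so the selection rule below is an interpretation with made-up thresholds, not the paper's method:

```python
import torch
import torch.nn.functional as F

def select_tokens(student_logits, teacher_logits, h_hi=2.0, h_lo=0.5, kl_hi=1.0):
    """Boolean mask over tokens worth training on, per the summary's criterion."""
    log_q = F.log_softmax(student_logits, dim=-1)  # student
    log_p = F.log_softmax(teacher_logits, dim=-1)  # teacher
    entropy = -(log_q.exp() * log_q).sum(-1)       # student uncertainty per token
    kl = (log_p.exp() * (log_p - log_q)).sum(-1)   # KL(teacher || student)
    # Keep tokens where the student is uncertain, or confidently wrong
    # (low entropy but large divergence from the teacher).
    return (entropy > h_hi) | ((entropy < h_lo) & (kl > kl_hi))
```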