pith. machine review for the scientific record.

arxiv: 1804.10959 · v1 · submitted 2018-04-29 · 💻 cs.CL

Recognition: unknown

Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates

Authors on Pith: no claims yet
classification: 💻 cs.CL
keywords: subword, multiple, regularization, segmentation, model, neural, possible, segmentations
original abstract

Subword units are an effective way to alleviate the open vocabulary problems in neural machine translation (NMT). While sentences are usually converted into unique subword sequences, subword segmentation is potentially ambiguous and multiple segmentations are possible even with the same vocabulary. The question addressed in this paper is whether it is possible to harness the segmentation ambiguity as noise to improve the robustness of NMT. We present a simple regularization method, subword regularization, which trains the model with multiple subword segmentations probabilistically sampled during training. In addition, for better subword sampling, we propose a new subword segmentation algorithm based on a unigram language model. We experiment with multiple corpora and report consistent improvements, especially on low-resource and out-of-domain settings.
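The sampling the abstract describes is implemented in the SentencePiece library, written by the same author. Below is a minimal sketch of that behavior, not the paper's training pipeline itself: the model file path "spm.model" and the example sentence are placeholders, and a unigram segmentation model is assumed to have been trained beforehand.

```
import sentencepiece as spm

# Load a trained unigram-LM segmentation model ("spm.model" is a placeholder path).
sp = spm.SentencePieceProcessor(model_file="spm.model")

# Default behavior: the single best (Viterbi) segmentation, identical on every call.
print(sp.encode("New York is large.", out_type=str))

# Subword regularization: sample a segmentation from the unigram LM on each call.
# nbest_size=-1 samples from the full lattice of candidate segmentations; alpha is
# a smoothing temperature (smaller values yield more diverse samples).
for _ in range(3):
    print(sp.encode("New York is large.", out_type=str,
                    enable_sampling=True, nbest_size=-1, alpha=0.1))
```

In NMT training, a fresh segmentation would be drawn for each sentence at every pass over the data, which is the on-the-fly regularization the abstract refers to.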

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 5 Pith papers

Reviewed papers in the Pith corpus that reference this work, sorted by Pith novelty score.

  1. How Tokenization Limits Phonological Knowledge Representation in Language Models and How to Improve Them

    cs.CL · 2026-04 · unverdicted · novelty 7.0

    Subword tokenization impairs phonological knowledge encoding in LMs, but an IPA-based fine-tuning method restores it with minimal impact on other capabilities.

  2. HeceTokenizer: A Syllable-Based Tokenization Approach for Turkish Retrieval

    cs.CL · 2026-04 · unverdicted · novelty 7.0

    A syllable-based tokenizer for Turkish enables a tiny 1.5M-parameter model to reach 50.3% Recall@5 on TQuAD retrieval, beating a much larger morphology baseline.

  3. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

    cs.LG · 2019-10 · unverdicted · novelty 7.0

    T5 casts all NLP tasks as text-to-text generation, systematically explores pre-training choices, and reaches strong performance on summarization, QA, classification, and other tasks via large-scale training on the Colossal Clean Crawled Corpus (C4).

  4. CoCa: Contrastive Captioners are Image-Text Foundation Models

    cs.CV · 2022-05 · accept · novelty 6.0

    CoCa unifies contrastive and generative pretraining in one image-text model to reach 86.3% zero-shot ImageNet accuracy and new state-of-the-art results on multiple downstream benchmarks.

  5. Lost in Translation? Exploring the Shift in Grammatical Gender from Latin to Occitan

    cs.CL · 2026-05 · unverdicted · novelty 5.0

    An interpretable deep learning framework with a new tokenizer is used to quantify how grammatical gender information is distributed between lemmas and sentential context during the Latin-to-Occitan transition.