pith. machine review for the scientific record.

arxiv: 1805.08241 · v1 · submitted 2018-05-21 · 💻 cs.CL

Recognition: unknown

Sparse and Constrained Attention for Neural Machine Translation

Authors on Pith: no claims yet
classification: 💻 cs.CL
keywords: attention, constrained, sparse, source, translation, words, address, allocates
Original abstract

In NMT, words are sometimes dropped from the source or generated repeatedly in the translation. We explore novel strategies to address the coverage problem that change only the attention transformation. Our approach allocates fertilities to source words, used to bound the attention each word can receive. We experiment with various sparse and constrained attention transformations and propose a new one, constrained sparsemax, shown to be differentiable and sparse. Empirical evaluation is provided on three language pairs.
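The constrained sparsemax described in the abstract can be read as a Euclidean projection of the attention scores z onto the probability simplex intersected with the box [0, u], where u holds the per-word fertility bounds. Below is a minimal numpy sketch of that projection, assuming each output weight has the form clip(z_i - tau, 0, u_i) for a shared threshold tau located by bisection; the paper itself derives a differentiable closed form, so the bisection, the function name, the n_iter parameter, and the example values here are all illustrative.

    import numpy as np

    def constrained_sparsemax(z, u, n_iter=60):
        """Project scores z onto {p : 0 <= p <= u, sum(p) = 1}.

        Solves argmin_p ||p - z||^2 over that set, assuming sum(u) >= 1
        so it is non-empty. Every solution has the form
        p_i = clip(z_i - tau, 0, u_i) for one threshold tau, which we
        find by bisection on the non-increasing map tau -> sum_i p_i(tau).
        """
        z = np.asarray(z, dtype=float)
        u = np.asarray(u, dtype=float)
        lo = (z - u).min()  # every p_i saturates at u_i here, so mass = sum(u) >= 1
        hi = z.max()        # every p_i clips to 0 here, so mass = 0
        for _ in range(n_iter):
            tau = 0.5 * (lo + hi)
            mass = np.clip(z - tau, 0.0, u).sum()
            if mass > 1.0:
                lo = tau  # too much mass: threshold must rise
            else:
                hi = tau  # too little mass: threshold must fall
        return np.clip(z - 0.5 * (lo + hi), 0.0, u)

    # Illustrative scores and fertility bounds: the 0.6 bound caps how much
    # attention the third word can receive, and the first word gets an exact
    # zero weight. Prints approximately [0.0, 0.4, 0.6].
    print(constrained_sparsemax([0.1, 1.2, 2.0], [1.0, 1.0, 0.6]))

With all fertility bounds u_i = 1 the box constraint is inactive and the projection reduces to plain sparsemax; the fertility-dependent bounds are what cap the total attention a source word can receive, which is how the paper addresses dropped and repeated words.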

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work, sorted by Pith novelty score.

  1. Empty SPACE: Cross-Attention Sparsity for Concept Erasure in Diffusion Models

    cs.LG · 2026-05 · unverdicted · novelty 5.0

    SPACE induces sparsity in cross-attention parameters via closed-form iterative updates to erase target concepts more effectively than dense baselines in large diffusion models.