pith. machine review for the scientific record.

arxiv: 1601.04811 · v6 · submitted 2016-01-19 · 💻 cs.CL


Modeling Coverage for Neural Machine Translation

classification 💻 cs.CL
keywords: attention, coverage, translation, alignment, machine, neural, quality, vector
Original abstract:

The attention mechanism has advanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. However, it tends to ignore past alignment information, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT. We maintain a coverage vector to keep track of the attention history; the coverage vector is fed to the attention model to help adjust future attention, encouraging the NMT system to pay more attention to untranslated source words. Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT.
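The core idea in the abstract can be sketched in a few lines: accumulate past attention weights over source positions into a coverage vector, and feed that vector back into the attention score so already-covered words receive less future attention. The sketch below is a toy illustration with made-up weights, not the paper's exact parameterization; the score function, weight names, and the scalar coverage penalty `w_c` are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
src_len, hid = 5, 8

H = rng.normal(size=(src_len, hid))   # encoder annotations h_j (toy values)
W_h = rng.normal(size=(hid,))         # toy score weights for annotations
W_s = rng.normal(size=(hid,))         # toy score weights for decoder state
w_c = -1.0                            # assumed coverage penalty weight

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

coverage = np.zeros(src_len)          # C_0 = 0: nothing attended yet
for t in range(3):                    # a few decoding steps
    s = rng.normal(size=(hid,))       # toy decoder state s_{t-1}
    # Additive toy score: high coverage lowers a word's future score.
    scores = H @ W_h + s @ W_s + w_c * coverage
    alpha = softmax(scores)           # attention weights alpha_{t,j}
    coverage += alpha                 # accumulate attention history

# Each source word's coverage entry is the attention mass it has
# received so far; the total mass equals the number of steps taken.
print(coverage)
```

Because each step's attention weights sum to one, the coverage vector's entries sum to the number of decoding steps, which is what makes it a useful signal for spotting under-attended (untranslated) source words.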

This paper has not been read by Pith yet.

discussion (0)
