pith. machine review for the scientific record.

arxiv: 1607.05108 · v1 · submitted 2016-07-18 · 💻 cs.NE · cs.CL

Recognition: unknown

Neural Machine Translation with Recurrent Attention Modeling

Authors on Pith: no claims yet
classification 💻 cs.NE · cs.CL
keywords attention · translation · attended · modeling · previous · recurrent · word · words
original abstract

Knowing which words have been attended to in previous time steps while generating a translation is a rich source of information for predicting what words will be attended to in the future. We improve upon the attention model of Bahdanau et al. (2014) by explicitly modeling the relationship between previous and subsequent attention levels for each word using one recurrent network per input word. This architecture easily captures informative features, such as fertility and regularities in relative distortion. In experiments, we show our parameterization of attention improves translation quality.
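The idea in the abstract lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch rendering: each source word keeps its own recurrent hidden state (here a GRUCell whose parameters are shared across words, an assumption consistent with but not stated by the abstract) that summarizes the attention weights the word has received so far, and that summary enters the attention score alongside the usual encoder- and decoder-state terms of the Bahdanau et al. (2014) model. All names, dimensions, and the exact score function are illustrative assumptions, not the paper's reported configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RecurrentAttention(nn.Module):
        """Additive attention whose score for each source word also depends
        on a per-word recurrent summary of that word's past attention
        weights. A sketch under stated assumptions, not the paper's code."""

        def __init__(self, enc_dim, dec_dim, attn_dim, hist_dim):
            super().__init__()
            self.W_enc = nn.Linear(enc_dim, attn_dim, bias=False)
            self.W_dec = nn.Linear(dec_dim, attn_dim, bias=False)
            self.W_hist = nn.Linear(hist_dim, attn_dim, bias=False)
            self.v = nn.Linear(attn_dim, 1, bias=False)
            # One hidden state per source word; the GRU parameters are
            # shared across words (an assumption).
            self.hist_rnn = nn.GRUCell(1, hist_dim)
            self.hist_dim = hist_dim

        def init_history(self, batch, src_len, device):
            # Zero attention history before the first decoder step.
            return torch.zeros(batch * src_len, self.hist_dim, device=device)

        def forward(self, dec_state, enc_states, hist):
            # dec_state: (batch, dec_dim); enc_states: (batch, src_len, enc_dim)
            # hist: (batch * src_len, hist_dim), one row per source word.
            batch, src_len, _ = enc_states.shape
            hist_feat = hist.view(batch, src_len, -1)
            scores = self.v(torch.tanh(
                self.W_enc(enc_states)
                + self.W_dec(dec_state).unsqueeze(1)
                + self.W_hist(hist_feat)          # history-aware term
            )).squeeze(-1)                        # (batch, src_len)
            alpha = F.softmax(scores, dim=-1)     # attention weights
            context = torch.bmm(alpha.unsqueeze(1), enc_states).squeeze(1)
            # Feed each word's new attention weight into its history RNN,
            # so accumulated coverage (fertility) can shape future scores.
            new_hist = self.hist_rnn(alpha.reshape(-1, 1), hist)
            return context, alpha, new_hist

    # Toy usage inside a decoding loop (shapes only):
    attn = RecurrentAttention(enc_dim=4, dec_dim=4, attn_dim=8, hist_dim=8)
    enc = torch.randn(2, 5, 4)                    # batch=2, src_len=5
    hist = attn.init_history(2, 5, enc.device)
    dec_state = torch.randn(2, 4)
    for _ in range(3):                            # three decoder steps
        context, alpha, hist = attn(dec_state, enc, hist)

Because each word's history state accumulates the attention it has received, features like fertility (total attention mass so far) and relative distortion can in principle be learned by the history RNN rather than hand-engineered, which is the motivation the abstract gives.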

This paper has not been read by Pith yet.

discussion (0)
