Pith: machine review for the scientific record

arXiv: 1808.07561 · v2 · submitted 2018-08-22 · cs.CL · cs.AI · cs.LG


Training Deeper Neural Machine Translation Models with Transparent Attention

keywords: models, deeper, attention, machine translation, applications, architectures, attempt
Original abstract

While current state-of-the-art NMT models, such as RNN seq2seq and Transformers, possess a large number of parameters, they are still shallow in comparison to convolutional models used for both text and vision applications. In this work we attempt to train significantly (2-3x) deeper Transformer and Bi-RNN encoders for machine translation. We propose a simple modification to the attention mechanism that eases the optimization of deeper models, and results in consistent gains of 0.7-1.1 BLEU on the benchmark WMT'14 English-German and WMT'15 Czech-English tasks for both architectures.
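The "simple modification" the abstract refers to is, per the paper's title, transparent attention: broadly, a layer-weighting scheme in which the decoder attends not only to the top encoder layer but to a learned, softmax-normalized combination of all encoder layer outputs, which shortens the gradient path to the lower encoder layers of a deep stack. Below is a minimal NumPy sketch of that idea; the function name, tensor shapes, and the choice to treat the embeddings as layer 0 are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def transparent_attention_context(encoder_layers, layer_logits):
    """Combine all encoder layer outputs into one context per decoder layer
    using softmax-normalized scalar weights (illustrative sketch).

    encoder_layers: (L, T, d) -- outputs of L encoder layers
                    (layer 0 may be the embedding layer).
    layer_logits:   (M, L)    -- one learned scalar per
                    (decoder layer, encoder layer) pair.
    Returns:        (M, T, d) -- the encoder representation that
                    decoder layer j attends to.
    """
    # Softmax over the encoder-layer axis, independently per decoder layer.
    logits = layer_logits - layer_logits.max(axis=1, keepdims=True)
    weights = np.exp(logits)
    weights /= weights.sum(axis=1, keepdims=True)            # (M, L)

    # Weighted sum of encoder layer outputs: (M, L) x (L, T, d) -> (M, T, d)
    return np.einsum("ml,ltd->mtd", weights, encoder_layers)

if __name__ == "__main__":
    L, M, T, d = 6, 6, 10, 512                # illustrative sizes only
    enc = np.random.randn(L, T, d)
    s = np.zeros((M, L))                       # uniform weights at initialization
    ctx = transparent_attention_context(enc, s)
    print(ctx.shape)                           # (6, 10, 512)
```

With zero-initialized logits the scheme starts as a uniform average over encoder layers, so every layer receives gradient signal from the start of training; the weights can later specialize per decoder layer.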

This paper has not been read by Pith yet.

discussion (0)
