pith. machine review for the scientific record.

arxiv: 1712.01807 · v1 · submitted 2017-12-05 · 💻 cs.CL · eess.AS · stat.ML


Improving the Performance of Online Neural Transducer Models

keywords: model, models, online, performance, search, explore, improvements, neural
Abstract

Having a sequence-to-sequence model which can operate in an online fashion is important for streaming applications such as Voice Search. The neural transducer (NT) is a streaming sequence-to-sequence model, but it has shown a significant degradation in performance compared to non-streaming models such as Listen, Attend and Spell (LAS). In this paper, we present various improvements to NT. Specifically, we look at increasing the window over which NT computes attention, mainly by looking backwards in time so that the model still remains online. In addition, we explore initializing an NT model from a LAS-trained model so that it is guided with a better alignment. Finally, we explore including stronger language models, such as using wordpiece models, and applying an external LM during the beam search. On a Voice Search task, we find that with these improvements we can get NT to match the performance of LAS.
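The abstract mentions applying an external LM during beam search. A common way to do this (not necessarily the exact scheme in the paper) is shallow fusion, where each hypothesis is rescored as the model's log-probability plus a weighted external-LM log-probability. The sketch below uses hypothetical candidate hypotheses and a hypothetical LM weight `lam` to illustrate the idea:

```python
def shallow_fusion_score(model_log_prob, lm_log_prob, lam=0.3):
    """Combine the end-to-end model score with an external LM score.

    lam is the LM interpolation weight; in practice it would be
    tuned on a development set (0.3 here is an illustrative value).
    """
    return model_log_prob + lam * lm_log_prob

# Two hypothetical beam candidates: (model log-prob, external LM log-prob).
candidates = {
    "play music": (-1.2, -0.5),
    "play muse":  (-1.1, -2.0),
}

scored = {hyp: shallow_fusion_score(am, lm) for hyp, (am, lm) in candidates.items()}
best = max(scored, key=scored.get)
# The LM term penalizes the unlikely word sequence "play muse",
# flipping the ranking relative to the model score alone.
```

Without the LM term, "play muse" would win (-1.1 vs. -1.2); with it, the linguistically plausible "play music" is selected (-1.35 vs. -1.7).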

This paper has not been read by Pith yet.

discussion (0)
