pith. machine review for the scientific record.

arxiv: 1807.04978 · v2 · submitted 2018-07-13 · 📡 eess.AS · cs.CL · cs.SD


Hybrid CTC-Attention based End-to-End Speech Recognition using Subword Units

keywords: subword units · ctc-attention · hybrid system · characters · end-to-end · modeling
abstract

In this paper, we present an end-to-end automatic speech recognition system which successfully employs subword units in a hybrid CTC-Attention based system. The subword units are obtained by the byte-pair encoding (BPE) compression algorithm. Compared to using words as modeling units, using characters or subword units does not suffer from the out-of-vocabulary (OOV) problem. Furthermore, subword units offer the capability to model longer context than characters. We evaluate different systems over the LibriSpeech 1000h dataset. The subword-based hybrid CTC-Attention system obtains 6.8% word error rate (WER) on the test_clean subset without any dictionary or external language model. This represents a significant improvement (a 12.8% relative WER reduction) over the character-based hybrid CTC-Attention system.
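The abstract's modeling units come from byte-pair encoding: starting from characters, the most frequent adjacent symbol pair is repeatedly merged into a new symbol, so frequent character sequences become single subword units. A minimal sketch of that learning loop, on a toy word-frequency corpus of my own invention (the paper gives no implementation details):

```python
# Minimal sketch of byte-pair encoding (BPE) subword learning.
# The corpus and all names here are illustrative, not from the paper.
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn BPE merges from a word -> frequency dict; return the merge list."""
    # Start with each word as a tuple of character symbols.
    vocab = {tuple(w): f for w, f in words.items()}
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        merges.append(best)
        # Rewrite every word, fusing each occurrence of the best pair.
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

corpus = {"lower": 5, "low": 7, "newest": 3, "widest": 2}
merges = learn_bpe(corpus, 4)
print(merges)  # first merges fuse the frequent 'l'+'o', then 'lo'+'w'
```

Because every subword decomposes back into characters, any unseen word can still be segmented, which is why the abstract notes that subword units, like characters, avoid the OOV problem while covering longer context.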

This paper has not been read by Pith yet.

discussion (0)
