pith. machine review for the scientific record.

arxiv: 1712.07108 · v1 · submitted 2017-12-19 · cs.CL · cs.SD · eess.AS · stat.ML


Improved Regularization Techniques for End-to-End Speech Recognition

keywords: end-to-end, models, speech, data, dropout, augmentation, important, investigate
abstract

Regularization is important for end-to-end speech models, since the models are highly flexible and easy to overfit. Data augmentation and dropout have been important for improving end-to-end models in other domains, but they remain relatively underexplored for end-to-end speech models. We therefore investigate the effectiveness of both methods for end-to-end trainable, deep speech recognition models. We augment audio data through random perturbations of tempo, pitch, volume, and temporal alignment, and by adding random noise. We further investigate the effect of dropout when applied to the inputs of all layers of the network. We show that the combination of data augmentation and dropout gives a relative performance improvement of over 20% on both the Wall Street Journal (WSJ) and LibriSpeech datasets. Our model's performance is also competitive with other end-to-end speech models on both datasets.
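The two techniques the abstract describes can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the paper's implementation: the perturbation ranges, noise level, and function names here are hypothetical, tempo is approximated by simple linear-interpolation resampling (a real pipeline would typically use a resampling or pitch-shifting library for tempo and pitch), and the dropout shown is the standard inverted-dropout formulation applied to a layer's input.

```python
import numpy as np

def augment_waveform(wave, rng, tempo_range=(0.9, 1.1),
                     volume_range=(0.8, 1.2), noise_std=0.005):
    """Randomly perturb tempo and volume and add noise to a 1-D waveform.

    All parameter ranges are illustrative assumptions, not the paper's
    actual settings.
    """
    # Tempo: resample by a random factor using linear interpolation.
    rate = rng.uniform(*tempo_range)
    n_out = int(len(wave) / rate)
    wave = np.interp(np.linspace(0.0, len(wave) - 1, n_out),
                     np.arange(len(wave)), wave)
    # Volume: scale the whole utterance by a random gain.
    wave = wave * rng.uniform(*volume_range)
    # Noise: add small Gaussian noise sample-by-sample.
    wave = wave + rng.normal(0.0, noise_std, size=wave.shape)
    return wave.astype(np.float32)

def input_dropout(x, p, rng):
    """Inverted dropout on a layer's input: zero each element with
    probability p and rescale the survivors by 1 / (1 - p)."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

# Example: augment one second of a 16 kHz sine tone.
rng = np.random.default_rng(0)
tone = np.sin(np.linspace(0.0, 10.0, 16000)).astype(np.float32)
augmented = augment_waveform(tone, rng)
dropped = input_dropout(np.ones((4, 8)), p=0.3, rng=rng)
```

A temporal-alignment perturbation, also mentioned in the abstract, would amount to randomly shifting or padding the waveform before feature extraction; it is omitted here for brevity.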

This paper has not been read by Pith yet.

discussion (0)
