pith. machine review for the scientific record.

arxiv: 1803.08240 · v1 · submitted 2018-03-22 · 💻 cs.CL · cs.AI · cs.NE

Recognition: unknown

An Analysis of Neural Language Modeling at Multiple Scales

Authors on Pith: no claims yet
classification 💻 cs.CL · cs.AI · cs.NE
keywords language · character-level · enwik8 · lstms · modeling · qrnns · results · state-of-the-art
read the original abstract

Many of the leading approaches in language modeling introduce novel, complex and specialized architectures. We take existing state-of-the-art word level language models based on LSTMs and QRNNs and extend them to both larger vocabularies as well as character-level granularity. When properly tuned, LSTMs and QRNNs achieve state-of-the-art results on character-level (Penn Treebank, enwik8) and word-level (WikiText-103) datasets, respectively. Results are obtained in only 12 hours (WikiText-103) to 2 days (enwik8) using a single modern GPU.
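To make the abstract's setup concrete, here is a minimal sketch of a character-level LSTM language model in PyTorch. It is an illustration only, not the paper's AWD-LSTM/QRNN implementation; the vocabulary size, layer sizes, and all names below are assumptions, and the careful regularization and tuning the authors credit for their results is omitted. Character-level results on Penn Treebank and enwik8 are conventionally reported in bits per character, as the last line converts.

```python
# Minimal character-level LSTM language model (illustrative sketch, not the
# paper's code). All hyperparameters here are placeholder assumptions.
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=512, num_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers, batch_first=True)
        self.decoder = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, hidden=None):
        # x: (batch, seq_len) tensor of character indices
        emb = self.embed(x)
        out, hidden = self.lstm(emb, hidden)
        # Project every timestep back onto the character vocabulary
        return self.decoder(out), hidden

# Toy usage: score next-character predictions on random data.
vocab_size = 50  # e.g. the distinct characters in a small corpus (assumption)
model = CharLSTM(vocab_size)
x = torch.randint(0, vocab_size, (8, 100))        # batch of character sequences
targets = torch.randint(0, vocab_size, (8, 100))  # next-character targets
logits, _ = model(x)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
# Cross-entropy in nats -> bits per character, the standard metric for
# character-level benchmarks like Penn Treebank and enwik8.
bpc = loss / torch.log(torch.tensor(2.0))
```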

This paper has not been read by Pith yet.

discussion (0)
