pith. machine review for the scientific record.

arxiv: 1511.02301 · v4 · submitted 2015-11-07 · 💻 cs.CL

Recognition: unknown

The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations

Authors on Pith: no claims yet
classification: 💻 cs.CL
keywords: words, models, language, predicting, state-of-the-art, books, children, content
0 comments
Original abstract

We introduce a new test of how well language models capture meaning in children's books. Unlike standard language modelling benchmarks, it distinguishes the task of predicting syntactic function words from that of predicting lower-frequency words, which carry greater semantic content. We compare a range of state-of-the-art models, each with a different way of encoding what has been previously read. We show that models which store explicit representations of long-term contexts outperform state-of-the-art neural language models at predicting semantic content words, although this advantage is not observed for syntactic function words. Interestingly, we find that the amount of text encoded in a single memory representation is highly influential to the performance: there is a sweet-spot, not too big and not too small, between single words and full sentences that allows the most meaningful information in a text to be effectively retained and recalled. Further, the attention over such window-based memories can be trained effectively through self-supervision. We then assess the generality of this principle by applying it to the CNN QA benchmark, which involves identifying named entities in paraphrased summaries of news articles, and achieve state-of-the-art performance.
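The window-based memory idea in the abstract lends itself to a small illustration. The sketch below builds one memory per fixed-width window centred on each occurrence of a candidate answer word, attends over those windows with a softmax against the cloze query, and pools the attention mass per candidate. This is a minimal toy, assuming random bag-of-words embeddings and hypothetical helper names (embed, window_memory_scores); the paper's model instead learns its embeddings and trains the attention through self-supervision.

    # Minimal sketch of attention over window-based memories for a cloze query.
    # Embeddings are random toys standing in for learned ones; in the paper the
    # embeddings and the attention are trained with self-supervision.
    import numpy as np

    rng = np.random.default_rng(0)

    def embed(words, table, dim=32):
        """Bag-of-words embedding: sum of per-word random vectors (toy)."""
        vec = np.zeros(dim)
        for w in words:
            if w not in table:
                table[w] = rng.normal(size=dim) / np.sqrt(dim)
            vec += table[w]
        return vec

    def window_memory_scores(context, query, candidates, width=2):
        """Score each candidate by the attention mass on windows centred on it.

        context    -- tokens of the preceding text
        query      -- tokens around the blank in the cloze question
        candidates -- words that might fill the blank
        width      -- tokens kept on each side of a candidate occurrence
                      (the "Goldilocks" window size, between a single word
                      and a full sentence)
        """
        table = {}
        q = embed(query, table)

        memories, owners = [], []
        for i, tok in enumerate(context):
            if tok in candidates:
                window = context[max(0, i - width): i + width + 1]
                memories.append(embed(window, table))
                owners.append(tok)

        if not memories:
            return {c: 0.0 for c in candidates}

        M = np.stack(memories)                  # (num_windows, dim)
        logits = M @ q
        att = np.exp(logits - logits.max())
        att /= att.sum()                        # softmax attention over windows

        scores = {c: 0.0 for c in candidates}
        for weight, owner in zip(att, owners):
            scores[owner] += float(weight)      # pool attention per candidate
        return scores

    context = "the fox ran to the river and the rabbit hid near the burrow".split()
    query = "the XXXXX ran very fast".split()
    print(window_memory_scores(context, query, candidates={"fox", "rabbit"}))

With learned embeddings the same structure lets the model trade off window size: too small and the memory carries little meaning, too large and it blurs together, which is the sweet-spot effect the abstract describes.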

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Reformer: The Efficient Transformer

    cs.LG 2020-01 accept novelty 8.0

    Reformer matches standard Transformer accuracy on long sequences while using far less memory and running faster via LSH attention and reversible residual layers.

  2. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension

    cs.CL 2017-05 accept novelty 8.0

    TriviaQA is a new large-scale dataset for reading comprehension that features complex compositional questions, high lexical variability, and cross-sentence reasoning requirements, where current baselines reach only 40...

  3. Language Models as Knowledge Bases?

    cs.CL 2019-09 accept novelty 7.0

    BERT stores relational knowledge extractable via cloze queries without fine-tuning and matches supervised baselines on open-domain QA tasks.

  4. LLM-PRISM: Characterizing Silent Data Corruption from Permanent GPU Faults in LLM Training

    cs.AR 2026-04 unverdicted novelty 6.0

    LLMs resist low-frequency permanent GPU faults but certain datapaths and precision formats trigger catastrophic training divergence even at moderate fault rates.