pith. machine review for the scientific record.

arxiv: 1610.08462 · v1 · submitted 2016-10-26 · 💻 cs.CL


Distraction-Based Neural Networks for Document Summarization

classification 💻 cs.CL
keywords: models, documents, neural, modeling, summarization, aims, attention, content
original abstract

Distributed representations learned with neural networks have recently been shown to be effective for modeling natural language at fine granularities such as words, phrases, and even sentences. Whether and how such an approach can be extended to model larger spans of text, e.g., documents, is an intriguing question that merits further investigation. This paper aims to enhance neural network models for that purpose. A typical document-level modeling problem is automatic summarization, which models documents in order to generate summaries. In this paper, we propose neural models that train computers not only to attend to specific regions and content of input documents with attention models, but also to distract them so that they traverse different parts of a document and better grasp its overall meaning for summarization. Without engineering any features, we train the models on two large datasets. The models achieve state-of-the-art performance and benefit significantly from the distraction modeling, particularly when input documents are long.
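The distraction idea described in the abstract can be sketched as an attention step that penalizes positions the decoder has already attended to, pushing it toward unvisited content. The sketch below is a minimal illustration, not the paper's exact formulation; the function name, the additive-penalty form, and the `penalty` coefficient are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def distraction_attention(scores, history, penalty=5.0):
    """Attention with a distraction penalty (illustrative sketch).

    scores  : raw alignment scores for one decoding step.
    history : running sum of attention weights from past steps.
    penalty : strength of the down-weighting of already-attended positions
              (hypothetical coefficient, chosen here for illustration).
    """
    return softmax(scores - penalty * history)

# Toy decode loop over a 5-position "document" with fixed scores.
scores = np.array([2.0, 1.0, 0.5, 0.2, 0.1])
history = np.zeros_like(scores)
for step in range(3):
    attn = distraction_attention(scores, history)
    history += attn  # accumulate where the model has already looked
```

Without the penalty, attention would concentrate on the highest-scoring position at every step; with it, the accumulated history suppresses previously visited positions, so attention traverses the document over successive steps.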

