pith. machine review for the scientific record.

arxiv: 1805.11004 · v1 · submitted 2018-05-28 · 💻 cs.CL · cs.AI · cs.LG

Recognition: unknown

Soft Layer-Specific Multi-Task Summarization with Entailment and Question Generation

Authors on Pith: no claims yet
classification: 💻 cs.CL · cs.AI · cs.LG
keywords: document · entailment · generation · model · multi-task · summarization · abstractive · analysis
original abstract

An accurate abstractive summary of a document should contain all its salient information and should be logically entailed by the input document. We improve these important aspects of abstractive summarization via multi-task learning with the auxiliary tasks of question generation and entailment generation, where the former teaches the summarization model how to look for salient questioning-worthy details, and the latter teaches the model how to rewrite a summary which is a directed-logical subset of the input document. We also propose novel multi-task architectures with high-level (semantic) layer-specific sharing across multiple encoder and decoder layers of the three tasks, as well as soft-sharing mechanisms (and show performance ablations and analysis examples of each contribution). Overall, we achieve statistically significant improvements over the state-of-the-art on both the CNN/DailyMail and Gigaword datasets, as well as on the DUC-2002 transfer setup. We also present several quantitative and qualitative analysis studies of our model's learned saliency and entailment skills.
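The soft-sharing mechanism the abstract mentions can be illustrated with a small sketch. This is not the authors' code; it only shows the general idea, under the common formulation of soft parameter sharing: instead of hard-tying a layer's weights across tasks, each task keeps its own copy of the shared layers, and a penalty λ·||θ_main − θ_aux||² is added to the loss to pull corresponding parameters toward each other. All names (`soft_sharing_penalty`, the toy parameter vectors) are illustrative.

```python
# Hedged sketch of soft parameter sharing between a main task (summarization)
# and an auxiliary task (e.g. question generation). Each task has its own copy
# of the "shared" layer parameters; a squared-L2 penalty keeps them close
# rather than identical, so layers can specialize while still transferring.

def soft_sharing_penalty(theta_main, theta_aux, lam=1e-4):
    """Return lam * ||theta_main - theta_aux||^2 over corresponding parameters."""
    assert len(theta_main) == len(theta_aux), "parameter lists must align"
    return lam * sum((a - b) ** 2 for a, b in zip(theta_main, theta_aux))

# Toy example: one shared encoder layer's weights for the two tasks.
theta_summ = [0.5, -1.0, 2.0]   # summarization model's copy (illustrative values)
theta_qg   = [0.4, -1.2, 2.1]   # question-generation model's copy
penalty = soft_sharing_penalty(theta_summ, theta_qg, lam=0.1)
# This penalty term would be added to the combined multi-task training loss.
```

In a full model this penalty would be computed only over the layer pairs designated as shared (the paper's layer-specific sharing restricts which encoder/decoder layers participate), and added to the weighted sum of the per-task losses.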

This paper has not been read by Pith yet.

discussion (0)
