pith. machine review for the scientific record.

arxiv: 1807.01270 · v5 · submitted 2018-07-03 · cs.CL · cs.AI


Reaching Human-level Performance in Automatic Grammatical Error Correction: An Empirical Study

keywords: fluency, inference, correction, error, learning, performance, sentence, seq2seq
abstract

Neural sequence-to-sequence (seq2seq) approaches have proven successful for grammatical error correction (GEC). Building on the seq2seq framework, we propose a novel fluency boost learning and inference mechanism. Fluency boost learning generates diverse error-corrected sentence pairs during training, enabling the error correction model to learn how to improve a sentence's fluency from more instances, while fluency boost inference allows the model to correct a sentence incrementally over multiple inference steps. Combining fluency boost learning and inference with convolutional seq2seq models, our approach achieves state-of-the-art performance: 75.72 F_{0.5} on the CoNLL-2014 10-annotation dataset and 62.42 GLEU on the JFLEG test set, making it the first GEC system to reach human-level performance (72.58 for CoNLL and 62.37 for JFLEG) on both benchmarks.
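The incremental, multi-step correction described above can be sketched as a simple loop: keep feeding the model's output back in as input while the sentence's fluency score strictly improves. The sketch below uses toy stand-ins for the correction model and the fluency scorer (both are assumptions for illustration; the paper uses a convolutional seq2seq model and a language-model-based fluency score).

```python
# Minimal sketch of fluency boost inference: re-correct the output
# while its fluency score keeps improving. toy_correct and toy_fluency
# are hypothetical stand-ins, not the paper's actual model or scorer.

def toy_correct(sentence):
    """Stand-in for a seq2seq GEC model: fixes at most one error per pass."""
    fixes = [("He go ", "He goes "), ("everyday", "every day")]
    for wrong, right in fixes:
        if wrong in sentence:
            return sentence.replace(wrong, right, 1)
    return sentence  # nothing left to fix

def toy_fluency(sentence):
    """Stand-in for a fluency score f(x); higher means more fluent."""
    return -sum(sentence.count(err) for err in ("He go ", "everyday"))

def fluency_boost_inference(sentence, correct, fluency, max_steps=5):
    """Correct a sentence incrementally over multiple inference steps."""
    for _ in range(max_steps):
        candidate = correct(sentence)
        # stop once another pass no longer improves fluency
        if fluency(candidate) <= fluency(sentence):
            break
        sentence = candidate
    return sentence

print(fluency_boost_inference("He go to school everyday .",
                              toy_correct, toy_fluency))
# → He goes to school every day .
```

A single decoding pass would fix only the first error here; the loop applies the model repeatedly until the fluency score stops rising, which is the core idea of the inference mechanism.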


