pith. machine review for the scientific record.

arxiv: 1704.04451 · v3 · submitted 2017-04-14 · 💻 cs.CL · cs.AI · cs.LG

Recognition: unknown

Optimizing Differentiable Relaxations of Coreference Evaluation Metrics

Authors on Pith: no claims yet
classification: 💻 cs.CL · cs.AI · cs.LG
keywords: coreference · learning · differentiable · evaluation · metrics · optimize · performance · reinforcement
original abstract

Coreference evaluation metrics are hard to optimize directly as they are non-differentiable functions, not easily decomposable into elementary decisions. Consequently, most approaches optimize objectives only indirectly related to the end goal, resulting in suboptimal performance. Instead, we propose a differentiable relaxation that lends itself to gradient-based optimisation, thus bypassing the need for reinforcement learning or heuristic modification of cross-entropy. We show that by modifying the training objective of a competitive neural coreference system, we obtain a substantial gain in performance. This suggests that our approach can be regarded as a viable alternative to using reinforcement learning or more computationally expensive imitation learning.
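The core idea in the abstract is easier to see in code: hard antecedent decisions (an argmax over candidate antecedents) make a coreference metric piecewise constant and non-differentiable, but replacing them with a softmax yields an expected score that gradients can flow through. The following is a minimal PyTorch sketch of that idea on a toy mention-ranking setup; the variable names and the "expected link accuracy" objective are illustrative assumptions, not the paper's actual relaxation of metrics such as B³ or LEA.

```python
import torch

# Toy setting: 4 mentions; mention i scores every candidate antecedent j < i,
# plus a self-link (j == i) standing in for "i starts a new entity".
# All names here are hypothetical, for illustration only.
torch.manual_seed(0)
n = 4
scores = torch.randn(n, n, requires_grad=True)  # scores[i, j] for j <= i

# Mask out invalid antecedents (j > i) so the softmax assigns them zero mass.
mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
probs = torch.softmax(scores + mask, dim=1)  # soft antecedent distribution

# Gold links (self-link = new entity); purely illustrative data.
gold = torch.tensor([0, 0, 1, 3])  # mention 3 starts a new entity

# Hard metric (non-differentiable): fraction of correct argmax links.
hard_acc = (probs.argmax(dim=1) == gold).float().mean()

# Differentiable relaxation: expected link accuracy under the soft
# distribution -- each correct link contributes its probability mass.
soft_acc = probs[torch.arange(n), gold].mean()

loss = -soft_acc   # maximize the relaxed metric
loss.backward()    # gradients flow through the softmax into the scores
print(hard_acc.item(), soft_acc.item(), scores.grad.abs().sum().item())
```

At the optimum the soft objective coincides with the hard one (the softmax concentrates on the correct links), which is what makes this kind of relaxation a plausible drop-in training objective in place of reinforcement learning or heuristically reweighted cross-entropy.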

This paper has not been read by Pith yet.

discussion (0)
