pith. machine review for the scientific record.

arXiv: 1708.02383 · v1 · submitted 2017-08-08 · cs.CL · cs.AI · cs.LG

Recognition: unknown

Learning how to Active Learn: A Deep Reinforcement Learning Approach

Authors on Pith: no claims yet
classification: cs.CL · cs.AI · cs.LG
keywords: learning, active, data, policy, selection, heuristic, learned, method
0 comments
read the original abstract

Active learning aims to select a small subset of data for annotation such that a classifier learned on that data is highly accurate. This is usually done with heuristic selection methods; however, the effectiveness of such methods is limited, and their performance varies between datasets. To address these shortcomings, we introduce a novel formulation that reframes active learning as a reinforcement learning problem and explicitly learns a data selection policy, where the policy takes the role of the active learning heuristic. Importantly, our method allows a selection policy learned by simulation on one language to be transferred to other languages. We demonstrate our method on cross-lingual named entity recognition, observing uniform improvements over traditional active learning.
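The abstract's core idea — replace a hand-crafted selection heuristic with a policy trained by reinforcement learning, rewarding the accuracy gain each queried label produces — can be sketched on a toy problem. Everything below (the 1-D task, the midpoint-threshold classifier, the single-feature logistic scorer, and the simplified REINFORCE-style update) is an illustrative assumption for exposition, not the paper's deep RL architecture or its NER setup.

```python
import math
import random

random.seed(0)

def make_task(n=200):
    """Toy 1-D binary task: label is 1 iff x > 0."""
    xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
    ys = [1 if x > 0 else 0 for x in xs]
    return xs, ys

def fit_threshold(labeled):
    """Crude classifier: decision threshold = midpoint of the class means."""
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    if not pos or not neg:
        return 0.0
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2.0

def accuracy(th, xs, ys):
    return sum(int((x > th) == (y == 1)) for x, y in zip(xs, ys)) / len(xs)

# Selection policy: one logistic weight over a single per-candidate feature
# (negative distance to the current boundary, i.e. an "uncertainty" signal).
w, b = 0.0, 0.0
lr = 0.1

for episode in range(300):
    xs, ys = make_task()
    labeled = [(xs[0], ys[0]), (xs[1], ys[1])]   # tiny seed set
    pool = list(range(2, len(xs)))
    th = fit_threshold(labeled)
    base = accuracy(th, xs, ys)
    for _ in range(5):                            # budget of 5 queries
        feats = [-abs(xs[i] - th) for i in pool]
        probs = [1 / (1 + math.exp(-(w * f + b))) for f in feats]
        # Sample one candidate in proportion to its policy score.
        r, cum, pick = random.uniform(0, sum(probs)), 0.0, 0
        for k, p in enumerate(probs):
            cum += p
            if r <= cum:
                pick = k
                break
        i = pool.pop(pick)
        labeled.append((xs[i], ys[i]))
        th = fit_threshold(labeled)
        new = accuracy(th, xs, ys)
        reward = new - base                       # accuracy gain = reward
        base = new
        # Simplified REINFORCE-style step on the chosen item's log-score.
        f, p = feats[pick], probs[pick]
        w += lr * reward * (1 - p) * f
        b += lr * reward * (1 - p)

print(f"learned policy weight on the uncertainty feature: w = {w:.3f}")
```

Training across many simulated episodes is what lets the learned weights, rather than a fixed heuristic, decide which point to query; in the paper's setting the same transfer happens across languages rather than across toy episodes.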

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Labeled TrustSet Guided: Batch Active Learning with Reinforcement Learning

cs.LG · 2026-04 · unverdicted · novelty 5.0

    BRAL-T uses TrustSet-guided reinforcement learning for batch active learning and reports state-of-the-art results on 10 image classification benchmarks plus 2 fine-tuning tasks.