Pith: machine review for the scientific record.

arXiv: 1610.08123 · v4 · submitted 2016-10-25 · 💻 cs.LG · stat.ML

Recognition: unknown

Socratic Learning: Augmenting Generative Models to Incorporate Latent Subsets in Training Data

Authors on Pith: no claims yet
classification: 💻 cs.LG, stat.ML
keywords: training, generative, model, data, models, sources, learning, subsets
abstract

A challenge in training discriminative models like neural networks is obtaining enough labeled training data. Recent approaches use generative models to combine weak supervision sources, like user-defined heuristics or knowledge bases, to label training data. Prior work has explored learning accuracies for these sources even without ground truth labels, but they assume that a single accuracy parameter is sufficient to model the behavior of these sources over the entire training set. In particular, they fail to model latent subsets in the training data in which the supervision sources perform differently than on average. We present Socratic learning, a paradigm that uses feedback from a corresponding discriminative model to automatically identify these subsets and augments the structure of the generative model accordingly. Experimentally, we show that without any ground truth labels, the augmented generative model reduces error by up to 56.06% for a relation extraction task compared to a state-of-the-art weak supervision technique that utilizes generative models.
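The failure mode the abstract describes can be made concrete with a toy experiment. Below is a minimal sketch, not the paper's actual factor-graph model: it uses a simplified Dawid-Skene-style EM procedure with one accuracy parameter per weak source and conditionally independent sources, on synthetic data where one source silently degrades on a latent 30% subset. All names, sizes, and accuracy values are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 weak sources voting {+1, -1} on n items.
n = 2000
y = rng.choice([-1, 1], size=n)      # true labels, hidden from the learner
subset = rng.random(n) < 0.3         # latent subset covering ~30% of items

# Source 2 is accurate off the subset (0.85) but near-random on it (0.55).
acc = np.array([0.9, 0.8, 0.85])
votes = np.empty((n, 3), dtype=int)
for j in range(3):
    p = np.full(n, acc[j])
    if j == 2:
        p[subset] = 0.55             # degraded regime on the latent subset
    correct = rng.random(n) < p
    votes[:, j] = np.where(correct, y, -y)

def em_accuracies(votes, iters=50):
    """Estimate one accuracy parameter per source without ground truth,
    alternating posterior label estimates (E) and accuracy updates (M)."""
    n_items, m = votes.shape
    a = np.full(m, 0.7)              # init above chance to fix polarity
    for _ in range(iters):
        # E-step: P(y = +1 | votes) under current accuracies, uniform prior
        log_pos = np.where(votes == 1, np.log(a), np.log(1 - a)).sum(1)
        log_neg = np.where(votes == -1, np.log(a), np.log(1 - a)).sum(1)
        p_pos = 1.0 / (1.0 + np.exp(log_neg - log_pos))
        # M-step: expected fraction of items each source labels correctly
        a = np.where(votes == 1, p_pos[:, None], 1 - p_pos[:, None]).mean(0)
    return a

a_hat = em_accuracies(votes)

# The single parameter for source 2 blends its two regimes
# (roughly 0.3 * 0.55 + 0.7 * 0.85 = 0.76), hiding the subset entirely.
# Because the data is synthetic we can verify the regimes it mixes:
on_sub = (votes[subset, 2] == y[subset]).mean()      # near 0.55
off_sub = (votes[~subset, 2] == y[~subset]).mean()   # near 0.85
```

Socratic learning's contribution is to recover this hidden split without access to `y`, by using disagreement signals from a trained discriminative model to identify the subset and then adding the corresponding structure to the generative model.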

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Transformer Feed-Forward Layers Are Key-Value Memories

    cs.CL 2020-12 conditional novelty 8.0

    Transformer feed-forward layers act as key-value memories storing textual patterns and their associated output distributions.

  2. Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization

    cs.LG 2019-11 conditional novelty 6.0

    Increased regularization is required for group DRO to achieve good worst-group generalization in overparameterized neural networks.