pith. machine review for the scientific record.

arxiv: 1904.11238 · v2 · submitted 2019-04-25 · 💻 cs.CV

Recognition: unknown

Unsupervised Label Noise Modeling and Loss Correction

Authors on Pith: no claims yet
classification 💻 cs.CV
keywords: loss, label, mixture, noise, correct, further, mislabelled, model
0 comments
read the original abstract

Despite being robust to small amounts of label noise, convolutional neural networks trained with stochastic gradient methods have been shown to easily fit random labels. When there is a mixture of correct and mislabelled targets, networks tend to fit the former before the latter. This suggests using a suitable two-component mixture model as an unsupervised generative model of sample loss values during training to allow online estimation of the probability that a sample is mislabelled. Specifically, we propose a beta mixture to estimate this probability and correct the loss by relying on the network prediction (the so-called bootstrapping loss). We further adapt mixup augmentation to drive our approach a step further. Experiments on CIFAR-10/100 and TinyImageNet demonstrate a robustness to label noise that substantially outperforms recent state-of-the-art. Source code is available at https://git.io/fjsvE
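To make the abstract's approach concrete, here is a minimal sketch (not the authors' released code; the function names, initial beta parameters, and the weighted method-of-moments update are illustrative assumptions) of fitting a two-component beta mixture to per-sample losses with EM and using the noisy-component posterior to weight a bootstrapping-style corrected loss:

```python
# Illustrative sketch only: a two-component beta mixture over normalized
# per-sample losses, fit by EM, whose "noisy" posterior weights a
# bootstrapping-style corrected cross-entropy.
import numpy as np
from scipy.stats import beta as beta_dist

def fit_beta_mixture(losses, n_iter=10, eps=1e-4):
    """Fit a two-component beta mixture to losses rescaled into (0, 1)."""
    x = (losses - losses.min()) / (losses.max() - losses.min() + eps)
    x = np.clip(x, eps, 1 - eps)
    # Component 0 starts as "clean" (low loss), component 1 as "noisy" (high loss).
    a = np.array([2.0, 4.0])
    b = np.array([4.0, 2.0])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: per-sample responsibility of each component.
        lik = np.stack([pi[k] * beta_dist.pdf(x, a[k], b[k]) for k in range(2)], axis=1)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: weighted method-of-moments refit of each beta, plus mixing weights.
        for k in range(2):
            w = resp[:, k]
            m = np.average(x, weights=w)
            v = np.average((x - m) ** 2, weights=w) + eps
            common = m * (1.0 - m) / v - 1.0
            a[k] = max(m * common, eps)
            b[k] = max((1.0 - m) * common, eps)
        pi = resp.mean(axis=0)
    return x, a, b, pi

def noise_probability(x, a, b, pi):
    """P(mislabelled | loss): posterior of the component with the larger mean loss."""
    noisy = int(np.argmax(a / (a + b)))
    lik = np.stack([pi[k] * beta_dist.pdf(x, a[k], b[k]) for k in range(2)], axis=1)
    return lik[:, noisy] / lik.sum(axis=1)

def bootstrapped_loss(probs, labels, w, eps=1e-12):
    """Per-sample corrected cross-entropy: mix the given label with the network's
    own prediction, weighted by the estimated probability w of being mislabelled."""
    idx = np.arange(len(labels))
    pred = probs.argmax(axis=1)
    return -((1.0 - w) * np.log(probs[idx, labels] + eps)
             + w * np.log(probs[idx, pred] + eps))
```

In this reading, samples whose loss falls under the high-loss beta component get a large weight w, so their target is pulled toward the network's own prediction rather than the (likely wrong) given label.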

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Sharpness-Aware Minimization for Efficiently Improving Generalization

    cs.LG · 2020-10 · conditional novelty 6.0

    SAM solves a min-max problem to locate flat low-loss regions, improving generalization on CIFAR, ImageNet and label-noise tasks.
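For context on the citing paper's method, a minimal sketch of a SAM-style update (ascend within an L2 ball of radius rho to an approximate worst-case neighbour, then descend from the original weights using the gradient taken there); the names `loss_grad`, `lr`, and `rho` are illustrative assumptions, not the paper's API:

```python
# Illustrative sketch of one sharpness-aware (SAM-style) update step.
import numpy as np

def sam_step(w, loss_grad, lr=0.1, rho=0.05, eps=1e-12):
    """Inner maximisation: perturb the weights toward higher loss within an
    L2 ball of radius rho; outer minimisation: apply the gradient computed
    at that perturbed point to the original weights."""
    g = loss_grad(w)                          # gradient at the current weights
    e = rho * g / (np.linalg.norm(g) + eps)   # ascent step (approximate worst case)
    g_sharp = loss_grad(w + e)                # gradient at the perturbed weights
    return w - lr * g_sharp                   # sharpness-aware descent step
```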