pith. machine review for the scientific record.

arxiv: 1805.07836 · v4 · submitted 2018-05-20 · cs.LG · cs.CV · stat.ML

Recognition: unknown

Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels

Authors on Pith: no claims yet
classification: cs.LG · cs.CV · stat.ML
keywords: loss · datasets · dnns · labels · noisy · performance · cross · deep
Original abstract

Deep neural networks (DNNs) have achieved tremendous success in a variety of applications across many disciplines. Yet, their superior performance comes with the expensive cost of requiring correctly annotated large-scale datasets. Moreover, due to DNNs' rich capacity, errors in training labels can hamper performance. To combat this problem, mean absolute error (MAE) has recently been proposed as a noise-robust alternative to the commonly-used categorical cross entropy (CCE) loss. However, as we show in this paper, MAE can perform poorly with DNNs and challenging datasets. Here, we present a theoretically grounded set of noise-robust loss functions that can be seen as a generalization of MAE and CCE. Proposed loss functions can be readily applied with any existing DNN architecture and algorithm, while yielding good performance in a wide range of noisy label scenarios. We report results from experiments conducted with CIFAR-10, CIFAR-100 and FASHION-MNIST datasets and synthetically generated noisy labels.
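
To make the MAE/CCE generalization concrete, below is a minimal PyTorch sketch of the paper's L_q loss, L_q(f(x), e_j) = (1 − f_j(x)^q) / q, which approaches CCE as q → 0 and equals MAE up to a constant factor at q = 1. The function name and the q = 0.7 default are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def generalized_cross_entropy(logits, targets, q=0.7):
    """L_q loss: (1 - p_y^q) / q, where p_y is the softmax probability
    of the true class. Approaches CCE as q -> 0; matches MAE (up to a
    constant) at q = 1. q = 0.7 is an assumed default, not prescribed here.
    """
    probs = F.softmax(logits, dim=1)
    # Gather the predicted probability of each example's labeled class.
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.pow(q)) / q).mean()
```

Swapping this in for the usual cross-entropy call in an existing training loop is consistent with the abstract's claim that the loss applies to any DNN architecture and training algorithm without other changes.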

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Sharpness-Aware Minimization for Efficiently Improving Generalization

    cs.LG · 2020-10 · conditional novelty: 6.0

    SAM solves a min-max problem to locate flat low-loss regions, improving generalization on CIFAR, ImageNet, and label-noise tasks; a rough sketch of the two-pass update follows below.
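
As a reading aid, here is a rough sketch of one SAM update under the usual first-order approximation of the inner maximization: ascend along the normalized gradient to a nearby high-loss point, then descend using the gradient taken there. The value rho = 0.05 and all names are illustrative assumptions, not the SAM authors' reference code.

```python
import torch

def sam_step(model, loss_fn, inputs, labels, base_optimizer, rho=0.05):
    """One SAM update: perturb weights toward higher loss, compute the
    gradient at the perturbed point, restore weights, then step."""
    # First pass: gradient at the current weights.
    loss = loss_fn(model(inputs), labels)
    loss.backward()

    # Perturb each parameter by rho * g / ||g|| (first-order solution
    # of the inner maximization), remembering the perturbations.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    model.zero_grad()

    # Second pass: gradient at the perturbed (high-loss) point.
    loss_fn(model(inputs), labels).backward()

    # Undo the perturbation, then update with the base optimizer.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()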