Pith: machine review for the scientific record

arXiv:1605.00055 · v1 · submitted 2016-04-30 · cs.CV


DisturbLabel: Regularizing CNN on the Loss Layer

keywords: disturblabel, training, averaging, incorrect, labels, layer, loss, model

For a long time, we have been combating over-fitting in CNN training with model regularization techniques, including weight decay, model averaging, and data augmentation. In this paper, we present DisturbLabel, an extremely simple algorithm that randomly replaces a fraction of the labels with incorrect values in each iteration. Although it may seem counterintuitive to intentionally generate incorrect training labels, we show that DisturbLabel prevents network training from over-fitting by implicitly averaging over exponentially many networks trained with different label sets. To the best of our knowledge, DisturbLabel is the first work to add noise to the loss layer. Moreover, DisturbLabel cooperates well with Dropout, providing complementary regularization. Experiments demonstrate competitive recognition results on several popular image recognition datasets.
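The core idea in the abstract, randomly replacing a fraction of the labels with incorrect values on each training iteration, can be sketched in a few lines. This is only an illustration of the mechanism, not the authors' implementation; the function name `disturb_labels` and the noise-rate parameter `alpha` are our own choices, and here the replacement label is drawn uniformly over all classes.

```python
import numpy as np

def disturb_labels(labels, num_classes, alpha, rng=None):
    """Replace roughly a fraction `alpha` of the labels with
    uniformly sampled (possibly incorrect) class indices.

    Sketch of the DisturbLabel idea described in the abstract;
    names and sampling details are illustrative assumptions.
    """
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels).copy()
    # Decide independently for each label whether to disturb it.
    mask = rng.random(labels.shape) < alpha
    # Draw replacement labels uniformly from all classes.
    labels[mask] = rng.integers(0, num_classes, mask.sum())
    return labels

# Example: disturb 10% of the labels in a mini-batch each iteration.
batch = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
noisy = disturb_labels(batch, num_classes=10, alpha=0.1, rng=0)
```

Calling this once per iteration on each mini-batch's labels, before computing the loss, gives each iteration a slightly different (noisy) label set, which is what the implicit averaging argument in the abstract relies on.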

