pith. machine review for the scientific record

arXiv: 1509.01240 · v2 · submitted 2015-09-03 · 💻 cs.LG · math.OC · stat.ML

Recognition: unknown

Train faster, generalize better: Stability of stochastic gradient descent

Authors on Pith: no claims yet
classification: 💻 cs.LG · math.OC · stat.ML
keywords: convex · gradient · stochastic · case · generalize · models · non-convex · optimization
original abstract

We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new insights for why multiple epochs of stochastic gradient methods generalize well in practice. In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting. Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit.
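
For orientation, a minimal sketch of the standard notions the abstract invokes, in generic notation that is not quoted from the paper: an algorithm $A$ is $\epsilon$-uniformly stable in the sense of Bousquet and Elisseeff if, for any two training sets $S$ and $S'$ of size $n$ that differ in a single example,

\[
  \sup_{z}\; \mathbb{E}_{A}\big[\, f(A(S); z) - f(A(S'); z) \,\big] \;\le\; \epsilon,
\]

and $\epsilon$-uniform stability bounds the expected generalization gap by $\epsilon$. The stochastic gradient method being analyzed takes steps of the form

\[
  w_{t+1} \;=\; w_t - \alpha_t\, \nabla f(w_t;\, z_{i_t}),
\]

with step sizes $\alpha_t$ and an example index $i_t$ drawn at random at each iteration; the stability bounds hold under Lipschitz and smoothness assumptions on the loss $f$.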

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

    cs.LG · 2015-11 · accept · novelty 8.0

    DCGANs with architectural constraints learn a hierarchy of representations from object parts to scenes in both generator and discriminator across image datasets.

  2. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima

    cs.LG · 2016-09 · unverdicted · novelty 6.0

    Large-batch methods converge to sharp minima causing a generalization gap, while small-batch methods reach flat minima due to inherent gradient noise.