pith. machine review for the scientific record.

arXiv: 1611.01838 · v5 · submitted 2016-11-06 · 💻 cs.LG · stat.ML

Recognition: unknown

Entropy-SGD: Biasing Gradient Descent Into Wide Valleys

Authors on Pith: no claims yet
classification: 💻 cs.LG · stat.ML
keywords: energy · entropy-sgd · generalization · landscape · local · algorithm · eigenvalues · error
0 comments
read the original abstract

This paper proposes a new optimization algorithm called Entropy-SGD for training deep neural networks that is motivated by the local geometry of the energy landscape. Local extrema with low generalization error have a large proportion of almost-zero eigenvalues in the Hessian with very few positive or large negative eigenvalues. We leverage upon this observation to construct a local-entropy-based objective function that favors well-generalizable solutions lying in large flat regions of the energy landscape, while avoiding poorly-generalizable solutions located in the sharp valleys. Conceptually, our algorithm resembles two nested loops of SGD where we use Langevin dynamics in the inner loop to compute the gradient of the local entropy before each update of the weights. We show that the new objective has a smoother energy landscape and show improved generalization over SGD using uniform stability, under certain assumptions. Our experiments on convolutional and recurrent networks demonstrate that Entropy-SGD compares favorably to state-of-the-art techniques in terms of generalization error and training time.
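The nested-loop structure the abstract describes can be sketched on a toy quadratic loss: an inner Langevin (SGLD) loop samples around the current weights and maintains a running mean, and the outer update moves the weights along the local-entropy gradient, which is proportional to the gap between the weights and that mean. This is a minimal illustration, not the paper's exact recipe; the hyperparameter names (`gamma`, `sgld_lr`, `alpha`, `eps`) and the toy objective are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(x):
    # Toy objective standing in for the training loss:
    # f(x) = 0.5 * ||x||^2, so the (stochastic) gradient is x itself.
    return x

def entropy_sgd_step(x, eta=0.1, gamma=1.0, L=20,
                     sgld_lr=0.01, eps=1e-4, alpha=0.75):
    # Inner loop: SGLD draws samples x' roughly from the Gibbs measure
    # exp(-f(x') - gamma/2 * ||x' - x||^2); mu keeps an exponential
    # running average of those samples.
    xp = x.copy()
    mu = x.copy()
    for _ in range(L):
        g = loss_grad(xp) + gamma * (xp - x)          # gradient of the tilted loss
        noise = np.sqrt(sgld_lr) * eps * rng.standard_normal(x.shape)
        xp = xp - sgld_lr * g + noise
        mu = (1 - alpha) * mu + alpha * xp
    # Outer loop: the local-entropy gradient is gamma * (x - mu).
    return x - eta * gamma * (x - mu)

x = rng.standard_normal(5)
for _ in range(200):
    x = entropy_sgd_step(x)
# x drifts toward the minimum at 0; the inner averaging biases each
# step toward the center of mass of the nearby low-loss region.
```

On a single quadratic bowl the behavior matches plain SGD with a rescaled step; the interesting case in the paper is a landscape with both sharp and wide minima, where the averaged inner samples pull the outer iterate toward the wide ones.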

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 4 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Estimating Implicit Regularization in Deep Learning

    stat.ML 2026-05 unverdicted novelty 7.0

Gradient matching empirically recovers implicit regularization effects such as L2 penalties from early stopping and dropout in neural networks.

  2. Sharpness-Aware Pretraining Mitigates Catastrophic Forgetting

    cs.LG 2026-05 unverdicted novelty 6.0

    Sharpness-aware pretraining and related flat-minima interventions reduce catastrophic forgetting by up to 80% after post-training across 20M-150M models and by 31-40% at 1B scale.

  3. Sharpness-Aware Minimization for Efficiently Improving Generalization

    cs.LG 2020-10 conditional novelty 6.0

    SAM solves a min-max problem to locate flat low-loss regions, improving generalization on CIFAR, ImageNet and label-noise tasks.

  4. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima

    cs.LG 2016-09 unverdicted novelty 6.0

    Large-batch methods converge to sharp minima causing a generalization gap, while small-batch methods reach flat minima due to inherent gradient noise.