pith. machine review for the scientific record.

arxiv: 1711.04623 · v3 · submitted 2017-11-13 · 💻 cs.LG · cs.AI · cs.CV · stat.ML

Recognition: unknown

Three Factors Influencing Minima in SGD

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.AI · cs.CV · stat.ML
keywords batch · learning · minima · rate · size · ratio · factors · final
original abstract

We investigate the dynamical and convergent properties of stochastic gradient descent (SGD) applied to Deep Neural Networks (DNNs). Characterizing the relation between learning rate, batch size and the properties of the final minima, such as width or generalization, remains an open question. In order to tackle this problem we investigate the previously proposed approximation of SGD by a stochastic differential equation (SDE). We theoretically argue that three factors - learning rate, batch size and gradient covariance - influence the minima found by SGD. In particular we find that the ratio of learning rate to batch size is a key determinant of SGD dynamics and of the width of the final minima, and that higher values of the ratio lead to wider minima and often better generalization. We confirm these findings experimentally. Further, we include experiments which show that learning rate schedules can be replaced with batch size schedules and that the ratio of learning rate to batch size is an important factor influencing the memorization process.
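The abstract's central claim — that SGD dynamics depend on learning rate and batch size chiefly through their ratio — can be illustrated on a toy quadratic loss, where the SDE approximation gives a closed-form stationary variance of the iterates. A minimal sketch (the quadratic model, curvature `h`, and per-sample gradient noise `sigma2` are illustrative assumptions, not quantities from the paper):

```python
def stationary_variance(eta, batch_size, h=1.0, sigma2=1.0):
    """Approximate stationary variance of SGD iterates on the quadratic
    loss L(theta) = 0.5 * h * theta**2, where each minibatch gradient
    carries additive noise of variance sigma2 / batch_size.

    Under the SDE approximation, the small-eta stationary variance is
    (eta / batch_size) * sigma2 / (2 * h): it depends on eta and
    batch_size only through their ratio.
    """
    return (eta / batch_size) * sigma2 / (2.0 * h)

# Two configurations with the same ratio eta / batch_size land in
# minima of the same width under this approximation.
small = stationary_variance(eta=0.1, batch_size=32)
large = stationary_variance(eta=0.4, batch_size=128)
assert abs(small - large) < 1e-12

# Doubling the ratio doubles the stationary variance — wider minima,
# consistent with the abstract's "higher ratio, wider minima" claim.
wider = stationary_variance(eta=0.2, batch_size=32)
assert abs(wider - 2 * small) < 1e-12
```

The same identity motivates the abstract's schedule-swapping experiment: decaying `eta` by a factor k and growing `batch_size` by the same factor move the ratio identically.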

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 6 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Too Sharp, Too Sure: When Calibration Follows Curvature

    cs.LG 2026-04 unverdicted novelty 7.0

    Calibration error tracks curvature via shared margin-dependent exponential tails; a margin-aware objective improves out-of-sample calibration across optimizers.

  2. The Origin of Edge of Stability

    cs.LG 2026-04 unverdicted novelty 7.0

    Full-batch gradient descent forces the largest Hessian eigenvalue to exactly 2/η via the edge coupling functional, its criticality condition, and the mean value theorem with no gap.

  3. Large Spikes in Stochastic Gradient Descent: A Large-Deviations View

    cs.LG 2026-03 unverdicted novelty 7.0

    Large loss spikes in SGD are polynomially likely and serve as the dominant mechanism for escaping sharp minima toward flatter solutions in the NTK regime.

  4. On What We Can Learn from Low-Resolution Data

    cs.LG 2026-05 unverdicted novelty 6.0

    Low-resolution data improves high-resolution model performance when high-resolution samples are limited, via KL-divergence bounds and experiments on vision transformers and CNNs.

  5. SGD at the Edge of Stability: The Stochastic Sharpness Gap

    cs.LG 2026-04 unverdicted novelty 6.0

    SGD stabilizes sharpness below 2/η with equilibrium gap ΔS = η β σ_u²/(4α) due to noise-enhanced stochastic self-stabilization.

  6. There Will Be a Scientific Theory of Deep Learning

    stat.ML 2026-04 unverdicted novelty 2.0

    A mechanics of the learning process is emerging in deep learning theory, characterized by dynamics, coarse statistics, and falsifiable predictions across idealized settings, limits, laws, hyperparameters, and universa...