pith. machine review for the scientific record.

arxiv: 1811.08888 · v3 · submitted 2018-11-21 · 💻 cs.LG · cs.AI · math.OC · stat.ML

Recognition: unknown

Stochastic Gradient Descent Optimizes Over-parameterized Deep ReLU Networks

Authors on Pith: no claims yet
classification: 💻 cs.LG · cs.AI · math.OC · stat.ML
keywords: descent, gradient, deep, stochastic, networks, relu, training, loss
Original abstract

We study the problem of training deep neural networks with the Rectified Linear Unit (ReLU) activation function using gradient descent and stochastic gradient descent. In particular, we study the binary classification problem and show that for a broad family of loss functions, with proper random weight initialization, both gradient descent and stochastic gradient descent can find the global minima of the training loss for an over-parameterized deep ReLU network, under a mild assumption on the training data. The key idea of our proof is that Gaussian random initialization followed by (stochastic) gradient descent produces a sequence of iterates that stay inside a small perturbation region centered around the initial weights, in which the empirical loss function of deep ReLU networks enjoys nice local curvature properties that ensure the global convergence of (stochastic) gradient descent. Our theoretical results shed light on understanding the optimization for deep learning, and pave the way for studying the optimization dynamics of training modern deep neural networks.
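The abstract's key idea, that (stochastic) gradient descent from Gaussian random initialization keeps the weights inside a small neighborhood of their starting point, can be observed numerically. The sketch below is not the paper's construction: it trains a toy over-parameterized one-hidden-layer ReLU network with mini-batch SGD on a logistic loss (the width, learning rate, and synthetic data are arbitrary assumptions for illustration) and prints how far the weights have drifted from their initialization.

```python
# Illustrative sketch (not the paper's code): train an over-parameterized
# one-hidden-layer ReLU network with SGD from Gaussian initialization and
# track how far the weights move from their initial values. The paper's
# analysis covers deep networks; this toy example only visualizes the
# "iterates stay near initialization" phenomenon described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (n samples, d features). The width m >> n
# makes the hidden layer over-parameterized; all values are assumptions.
n, d, m = 200, 10, 4096
X = rng.standard_normal((n, d))
y = np.sign(rng.standard_normal(n))

# Gaussian random initialization; the output layer is fixed for simplicity.
W = rng.standard_normal((m, d)) * np.sqrt(2.0 / d)
v = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)
W0 = W.copy()

def hidden(X, W):
    return np.maximum(X @ W.T, 0.0)   # ReLU features, shape (batch, m)

lr, epochs, batch = 0.05, 50, 32
for epoch in range(epochs):
    idx = rng.permutation(n)
    for start in range(0, n, batch):
        b = idx[start:start + batch]
        H = hidden(X[b], W)
        out = H @ v                               # network outputs
        # Logistic loss on +/-1 labels: log(1 + exp(-y * out)).
        g = -y[b] / (1.0 + np.exp(y[b] * out))    # dloss/dout
        dH = np.outer(g, v) * (H > 0)             # back through ReLU
        W -= lr * dH.T @ X[b] / len(b)            # SGD step on hidden layer
    loss = np.mean(np.log1p(np.exp(-y * (hidden(X, W) @ v))))
    drift = np.linalg.norm(W - W0) / np.linalg.norm(W0)
    if epoch % 10 == 0:
        print(f"epoch {epoch:3d}  loss {loss:.4f}  relative drift {drift:.4f}")
```

Under heavy over-parameterization one would expect the relative drift to stay small while the training loss keeps decreasing, which is the behavior the paper's local-curvature argument formalizes for deep networks.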

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Mild Over-Parameterization Benefits Asymmetric Tensor PCA

    cs.LG · 2026-04 · unverdicted · novelty 7.0

    A three-phase alternating-update method for asymmetric tensor PCA achieves d^(k-2) sample complexity with d^2 memory and improves when signal vectors align.

  2. Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models

    cs.LG · 2024-01 · unverdicted · novelty 6.0

    SPIN lets weak LLMs become strong by self-generating training data from previous model versions and training to prefer human-annotated responses over its own outputs, outperforming DPO even with extra GPT-4 data on be...