pith. machine review for the scientific record.

arxiv: 1603.06160 · v2 · submitted 2016-03-19 · 🧮 math.OC · cs.LG · cs.NE · stat.ML

Recognition: unknown

Stochastic Variance Reduction for Nonconvex Optimization

Authors on Pith: no claims yet
classification: 🧮 math.OC · cs.LG · cs.NE · stat.ML
keywords: svrg · nonconvex · gradient · optimization · stochastic · analysis · convergence
original abstract

We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we prove non-asymptotic rates of convergence (to stationary points) of SVRG for nonconvex optimization, and show that it is provably faster than SGD and gradient descent. We also analyze a subclass of nonconvex problems on which SVRG attains linear convergence to the global optimum. We extend our analysis to mini-batch variants of SVRG, showing (theoretical) linear speedup due to mini-batching in parallel settings.
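The variance-reduced update the abstract describes is compact enough to sketch. Below is a minimal, illustrative Python/NumPy version on a toy nonconvex finite sum; the robust loss f_i, the step size eta, the epoch length m, and the problem sizes are assumptions made for illustration, not the paper's setup. In nonconvex analyses of this kind the bound is on the gradient norm at a uniformly random iterate, and the sketch follows that convention.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))  # toy data: rows a_i
b = rng.normal(size=n)       # toy targets b_i

def grad_i(x, i):
    # Gradient of one component f_i(x) = log(1 + (a_i^T x - b_i)^2),
    # a smooth robust loss chosen only so the finite sum is nonconvex.
    r = A[i] @ x - b[i]
    return (2.0 * r / (1.0 + r * r)) * A[i]

def full_grad(x):
    # Full gradient (1/n) * sum_i grad f_i(x), computed vectorized.
    r = A @ x - b
    return ((2.0 * r / (1.0 + r * r)) @ A) / n

def svrg(x0, eta=0.05, epochs=20, m=None):
    # SVRG sketch: the outer loop refreshes the full gradient at a
    # snapshot point; the inner loop takes variance-reduced stochastic
    # steps with a fixed step size.
    m = m if m is not None else n  # inner-loop length; n is a common choice
    x = x0.copy()
    iterates = []
    for _ in range(epochs):
        x_snap = x.copy()
        g_snap = full_grad(x_snap)  # anchor gradient at the snapshot
        for _ in range(m):
            i = rng.integers(n)
            # Variance-reduced estimate: unbiased, and its variance
            # shrinks as x approaches the snapshot.
            v = grad_i(x, i) - grad_i(x_snap, i) + g_snap
            x = x - eta * v
            iterates.append(x.copy())
    # Return a uniformly random iterate, matching the convention used
    # in nonconvex convergence analyses.
    return iterates[rng.integers(len(iterates))]

x_out = svrg(np.zeros(d))
print("||grad f(x_out)|| =", np.linalg.norm(full_grad(x_out)))
```

The choice m = n mirrors the common convention of one epoch of stochastic steps per full-gradient computation; the mini-batch variant mentioned in the abstract would replace the single sampled index i with a batch and average the variance-reduced corrections over it.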

This paper has not been read by Pith yet.

discussion (0)
