pith. machine review for the scientific record.

arxiv: 1612.05356 · v3 · submitted 2016-12-16 · 💻 cs.LG · math.OC · stat.ML

Recognition: unknown

Projected Semi-Stochastic Gradient Descent Method with Mini-Batch Scheme under Weak Strong Convexity Assumption

Authors on Pith: no claims yet
classification: 💻 cs.LG · math.OC · stat.ML
keywords: gradient, assumption, convexity, descent, method, strong, mini-batch, projected
Original abstract

We propose a projected semi-stochastic gradient descent method with mini-batches (PS2GD) that improves both the theoretical complexity and the practical performance of the general stochastic gradient descent method (SGD). We prove linear convergence under a weak strong convexity assumption; no strong convexity is required for minimizing a sum of smooth convex functions subject to a compact polyhedral set, a setting that remains popular across the machine learning community. PS2GD preserves the low per-iteration cost and high optimization accuracy of SGD via a stochastic variance-reduced gradient technique, and admits a simple parallel implementation with mini-batches. Moreover, PS2GD is also applicable to the dual problem of SVM with hinge loss.
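
The method described is an SVRG-style variance-reduced scheme with a Euclidean projection after each mini-batch step. Below is a minimal sketch of that template; the names (`ps2gd_sketch`, `component_grad`, `project`), the inner-loop length `m`, and the fixed step size are illustrative assumptions, not the paper's notation, and the paper's exact snapshot and step-size rules may differ.

```python
import numpy as np

def ps2gd_sketch(component_grad, project, x0, n, epochs=30, m=None,
                 batch_size=10, step=0.1, rng=None):
    """Sketch of projected semi-stochastic gradient descent (hypothetical API).

    component_grad(x, idx) -> average gradient of the components f_i, i in idx, at x.
    project(x)             -> Euclidean projection onto the feasible polyhedral set.
    n                      -> number of component functions in the finite sum.
    """
    rng = np.random.default_rng() if rng is None else rng
    m = n if m is None else m                      # inner-loop length (assumed choice)
    x = np.asarray(x0, dtype=float)
    for _ in range(epochs):
        y = x.copy()                               # snapshot point for this epoch
        full_grad = component_grad(y, np.arange(n))  # full gradient at the snapshot
        for _ in range(m):
            idx = rng.choice(n, size=batch_size, replace=False)  # mini-batch S
            # Variance-reduced direction: grad_S(x) - grad_S(y) + full_grad,
            # an unbiased estimate of the full gradient at x.
            g = component_grad(x, idx) - component_grad(y, idx) + full_grad
            x = project(x - step * g)              # projected mini-batch step
    return x

# Toy usage: min (1/n) sum_i (a_i^T x - b_i)^2 over the box [0, 1]^d,
# a compact polyhedral set with a cheap projection.
A = np.random.default_rng(0).normal(size=(200, 5))
b = A @ np.full(5, 0.3)
grad = lambda x, idx: 2 * A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)
proj = lambda x: np.clip(x, 0.0, 1.0)
x_hat = ps2gd_sketch(grad, proj, x0=np.zeros(5), n=200)
```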

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Convergence of Riemannian Stochastic Gradient Descents: Varying Batch Sizes And Nonstandard Batch Forming

    math.OC · 2026-04 · unverdicted · novelty 6.0

    Convergence theorems are established for Riemannian SGD with iteration-varying probability spaces, applying to varying batch sizes and unbiased batch forming schemes.