pith. machine review for the scientific record.

arxiv: 1803.08917 · v1 · submitted 2018-03-23 · 💻 cs.LG · cs.DC · cs.DS · math.OC · stat.ML

Recognition: unknown

Byzantine Stochastic Gradient Descent

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.DC · cs.DS · math.OC · stat.ML
keywords stochastic · varepsilon · byzantine · frac · alpha · complexity · descent · gradient
original abstract

This paper studies the problem of distributed stochastic optimization in an adversarial setting where, out of the $m$ machines which allegedly compute stochastic gradients every iteration, an $\alpha$-fraction are Byzantine, and can behave arbitrarily and adversarially. Our main result is a variant of stochastic gradient descent (SGD) which finds $\varepsilon$-approximate minimizers of convex functions in $T = \tilde{O}\big( \frac{1}{\varepsilon^2 m} + \frac{\alpha^2}{\varepsilon^2} \big)$ iterations. In contrast, traditional mini-batch SGD needs $T = O\big( \frac{1}{\varepsilon^2 m} \big)$ iterations, but cannot tolerate Byzantine failures. Further, we provide a lower bound showing that, up to logarithmic factors, our algorithm is information-theoretically optimal both in terms of sampling complexity and time complexity.
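As a rough, self-contained sketch of the setting only (the paper's actual Byzantine-resilient algorithm and its guarantees are more involved), the snippet below simulates distributed SGD on a simple quadratic in which an $\alpha$-fraction of the $m$ workers report arbitrary gradients, and contrasts plain averaging with a coordinate-wise median aggregator used purely as a hypothetical stand-in for a robust rule; all names and constants are illustrative.

# Hypothetical illustration of the Byzantine distributed SGD setting.
# NOT the paper's algorithm: the aggregator here is a simple
# coordinate-wise median, used only as a stand-in robust rule.
import numpy as np

rng = np.random.default_rng(0)

d = 10                      # dimension
m = 20                      # number of workers
alpha = 0.2                 # fraction of Byzantine workers
n_byz = int(alpha * m)
x_star = np.ones(d)         # minimizer of f(x) = 0.5 * ||x - x_star||^2
noise_std = 1.0

def stochastic_grad(x):
    """Honest worker: unbiased noisy gradient of the quadratic objective."""
    return (x - x_star) + noise_std * rng.standard_normal(d)

def byzantine_grad(x):
    """Adversarial worker: reports an arbitrary, large constant vector."""
    return 100.0 * np.ones(d)

def run(aggregator, T=2000, lr=0.05):
    """Run T rounds of distributed SGD with the given gradient aggregator."""
    x = np.zeros(d)
    for _ in range(T):
        grads = [stochastic_grad(x) for _ in range(m - n_byz)]
        grads += [byzantine_grad(x) for _ in range(n_byz)]
        g = aggregator(np.stack(grads))
        x = x - lr * g
    return np.linalg.norm(x - x_star)

mean_err = run(lambda G: G.mean(axis=0))
median_err = run(lambda G: np.median(G, axis=0))
print(f"plain averaging  : ||x_T - x*|| = {mean_err:.3f}")
print(f"coord-wise median: ||x_T - x*|| = {median_err:.3f}")

With plain averaging, a single adversarial worker can shift the update arbitrarily, which is why the mini-batch SGD bound quoted above carries no Byzantine tolerance; a robust aggregator gives up some of the $\frac{1}{m}$ variance reduction in exchange for resilience, loosely mirroring the extra $\frac{\alpha^2}{\varepsilon^2}$ term in the paper's iteration bound.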

This paper has not been read by Pith yet.

discussion (0)
