pith. machine review for the scientific record.

arxiv: 1901.09847 · v2 · submitted 2019-01-28 · 💻 cs.LG · math.OC · stat.ML

Recognition: unknown

Error Feedback Fixes SignSGD and other Gradient Compression Schemes

Authors on Pith: no claims yet
classification 💻 cs.LG · math.OC · stat.ML
keywords compression · signsgd · gradient · operator · achieves · biased · converge · convergence
0 comments
read the original abstract

Sign-based algorithms (e.g. signSGD) have been proposed as a biased gradient compression technique to alleviate the communication bottleneck in training large neural networks across multiple workers. We show simple convex counter-examples where signSGD does not converge to the optimum. Further, even when it does converge, signSGD may generalize poorly when compared with SGD. These issues arise because of the biased nature of the sign compression operator. We then show that using error-feedback, i.e. incorporating the error made by the compression operator into the next step, overcomes these issues. We prove that our algorithm EF-SGD with arbitrary compression operator achieves the same rate of convergence as SGD without any additional assumptions. Thus EF-SGD achieves gradient compression for free. Our experiments thoroughly substantiate the theory and show that error-feedback improves both convergence and generalization. Code can be found at https://github.com/epfml/error-feedback-SGD.
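The error-feedback idea in the abstract can be sketched in a few lines: keep a residual "memory" of whatever the compressor threw away, and add it back into the update before compressing again. The sketch below is a minimal illustration, not the authors' implementation; the `scaled_sign` compressor (sign rescaled by the mean absolute value) and the toy quadratic objective are assumptions chosen for simplicity, and function names like `ef_sgd_step` are hypothetical.

```python
import numpy as np

def scaled_sign(x):
    # Biased sign compression: transmit only the sign of each coordinate,
    # rescaled by the mean absolute value so the update has a sensible size.
    return (np.linalg.norm(x, ord=1) / x.size) * np.sign(x)

def ef_sgd_step(w, grad, memory, lr=0.1, compress=scaled_sign):
    # Error feedback: fold the residual left over from the previous
    # compression into the current update before compressing again.
    p = lr * grad + memory
    delta = compress(p)      # what would actually be communicated/applied
    new_memory = p - delta   # compression error, carried to the next step
    return w - delta, new_memory

# Toy usage on f(w) = 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([1.0, -2.0, 0.5])
memory = np.zeros_like(w)
for _ in range(200):
    w, memory = ef_sgd_step(w, grad=w, memory=memory, lr=0.1)
```

Without the `memory` term this reduces to plain signSGD with a scaled sign, which is exactly the biased scheme the paper's counter-examples target; the residual accumulator is what restores SGD-like convergence.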

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Enhancing SignSGD: Small-Batch Convergence Analysis and a Hybrid Switching Strategy

    cs.LG · 2026-04 · unverdicted · novelty 5.0

    SignSGD with pre-sign dithering and a calibrated hybrid switch to SGD achieves 92.18% accuracy on CIFAR-10 with ResNet-18, outperforming pure SGD and SignSGD, plus better results than Adam on CIFAR-100.