Pith · machine review for the scientific record

arXiv: 1610.02132 · v4 · submitted 2016-10-07 · 💻 cs.LG · cs.DS

Recognition: unknown

QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding

Authors on Pith: no claims yet
classification: 💻 cs.LG · cs.DS
keywords: gradient compression · cost · QSGD · communication · quantization · training · accuracy
Original abstract

Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to excellent scalability properties of this algorithm, and to its efficiency in the context of training deep neural networks. A fundamental barrier for parallelizing large-scale SGD is the fact that the cost of communicating the gradient updates between nodes can be very large. Consequently, lossy compression heuristics have been proposed, by which nodes only communicate quantized gradients. Although effective in practice, these heuristics do not always provably converge, and it is not clear whether they are optimal. In this paper, we propose Quantized SGD (QSGD), a family of compression schemes which allow the compression of gradient updates at each node, while guaranteeing convergence under standard assumptions. QSGD allows the user to trade off compression and convergence time: it can communicate a sublinear number of bits per iteration in the model dimension, and can achieve asymptotically optimal communication cost. We complement our theoretical results with empirical data, showing that QSGD can significantly reduce communication cost, while being competitive with standard uncompressed techniques on a variety of real tasks. In particular, experiments show that gradient quantization applied to training of deep neural networks for image classification and automated speech recognition can lead to significant reductions in communication cost, and end-to-end training time. For instance, on 16 GPUs, we are able to train a ResNet-152 network on ImageNet 1.8x faster to full accuracy. Of note, we show that there exist generic parameter settings under which all known network architectures preserve or slightly improve their full accuracy when using quantization.
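The abstract describes a stochastic quantization scheme whose level count trades compression against gradient variance. The following is a minimal illustrative sketch of that idea, reconstructed from the abstract rather than taken from the authors' code: `s` plays the role of the compression/variance knob, and the lossless encoding step the paper applies to the resulting integer levels is omitted.

```python
import numpy as np

def qsgd_quantize(v, s=4, rng=None):
    """Sketch of QSGD-style stochastic quantization with s levels.

    Each coordinate is mapped to one of the magnitudes {0, 1/s, ..., 1},
    scaled by ||v||_2, rounding up or down at random so that the result
    is an unbiased estimate of v. Larger s means less compression but
    lower quantization variance.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return np.zeros_like(v)
    ratio = s * np.abs(v) / norm        # in [0, s]
    lower = np.floor(ratio)             # lower level index
    prob_up = ratio - lower             # probability of rounding up
    level = lower + (rng.random(v.shape) < prob_up)
    return norm * np.sign(v) * level / s  # E[Q(v)] = v

# Toy check of unbiasedness: averaging many quantized copies approaches v.
g = np.random.randn(1000).astype(np.float32)
q_avg = np.mean([qsgd_quantize(g, s=4) for _ in range(200)], axis=0)
print(float(np.abs(q_avg - g).mean()))  # shrinks as more samples are averaged
```

In a data-parallel setting, each node would quantize its local gradient this way before communication; unbiasedness is what allows the convergence guarantees the abstract refers to.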

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 6 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Federated Learning: Strategies for Improving Communication Efficiency

    cs.LG · 2016-10 · conditional · novelty 8.0

    Structured updates (low-rank or masked) and sketched updates (quantized, rotated, subsampled) reduce uplink communication in federated learning by up to two orders of magnitude on convolutional and recurrent networks.

  2. NCCLZ: Compression-Enabled GPU Collectives with Decoupled Quantization and Entropy Coding

    cs.DC · 2026-05 · unverdicted · novelty 7.0

    NCCLZ decouples quantization and entropy coding across NCCL stack layers to enable overlapped compression, delivering up to 9.65x speedup over plain NCCL on scientific and training workloads.

  3. Decoupled DiLoCo for Resilient Distributed Pre-training

    cs.CL · 2026-04 · unverdicted · novelty 6.0

    Decoupled DiLoCo enables asynchronous distributed pre-training with zero global downtime under simulated failures while preserving competitive performance on text and vision tasks.

  4. DBLP: Phase-Aware Bounded-Loss Transport for Burst-Resilient Distributed ML Training

    cs.LG · 2026-05 · unverdicted · novelty 5.0

    DBLP is a training-phase-aware bounded-loss transport protocol that reduces end-to-end distributed ML training time by 24.4% on average (up to 33.9%) and achieves up to 5.88x communication speedup during microbursts w...

  5. Enhancing SignSGD: Small-Batch Convergence Analysis and a Hybrid Switching Strategy

    cs.LG · 2026-04 · unverdicted · novelty 5.0

    SignSGD with pre-sign dithering and a calibrated hybrid switch to SGD achieves 92.18% accuracy on CIFAR-10 with ResNet-18, outperforming pure SGD and SignSGD, and better results than Adam on CIFAR-100 (a dithered sign update is sketched after this list).

  6. Subspace Optimization for Efficient Federated Learning under Heterogeneous Data

    cs.LG · 2026-04 · unverdicted · novelty 5.0

    SSF enables efficient federated learning under heterogeneous data by optimizing in a low-dimensional subspace with projected corrections and backfill updates, achieving a non-asymptotic convergence rate of order O~(1/...
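The pre-sign dithering mentioned in item 5 can be illustrated with a short sketch. This is a hypothetical reconstruction from the one-line summary above, not that paper's implementation: the function name, the uniform dither, and the `dither` scale are assumptions, and the calibrated switch to plain SGD later in training is not modelled.

```python
import numpy as np

def dithered_signsgd_step(params, grad, lr=0.01, dither=1e-3, rng=None):
    """Hypothetical sign-based update with pre-sign dithering.

    Uniform noise is added to each gradient coordinate before taking the
    sign, so coordinates near zero are mapped to +1/-1 stochastically
    rather than deterministically, which reduces the bias of the sign step.
    """
    rng = rng or np.random.default_rng()
    noise = rng.uniform(-dither, dither, size=grad.shape)
    return params - lr * np.sign(grad + noise)
```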