pith. machine review for the scientific record.

arxiv: 1812.07210 · v2 · submitted 2018-12-18 · 💻 cs.LG · cs.DC · stat.ML

Recognition: unknown

Expanding the Reach of Federated Learning by Reducing Client Resource Requirements

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.DC · stat.ML
keywords: communication, model, reduction, times, capacity, client, client-to-server
0 comments
Original abstract

Communication on heterogeneous edge networks is a fundamental bottleneck in Federated Learning (FL), restricting both model capacity and user participation. To address this issue, we introduce two novel strategies to reduce communication costs: (1) the use of lossy compression on the global model sent server-to-client; and (2) Federated Dropout, which allows users to efficiently train locally on smaller subsets of the global model and also provides a reduction in both client-to-server communication and local computation. We empirically show that these strategies, combined with existing compression approaches for client-to-server communication, collectively provide up to a $14\times$ reduction in server-to-client communication, a $1.7\times$ reduction in local computation, and a $28\times$ reduction in upload communication, all without degrading the quality of the final model. We thus comprehensively reduce FL's impact on client device resources, allowing higher capacity models to be trained, and a more diverse set of users to be reached.
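A minimal sketch of the Federated Dropout idea described in the abstract, for a single dense layer represented as a NumPy weight matrix: the server keeps only a subset of units per layer, ships the corresponding sub-matrix to the client, and writes the client's update back into the global layer. The function names, the fixed keep fractions, and the layer shape are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def extract_submodel(W, b, keep_in, keep_out):
    """Slice a dense layer (W: [in, out], b: [out]) down to the kept units.

    Federated Dropout, as described above, sends only this sub-matrix
    server-to-client, so the client trains a smaller model locally.
    """
    return W[np.ix_(keep_in, keep_out)], b[keep_out]

def merge_submodel(W, b, W_sub, b_sub, keep_in, keep_out):
    """Write the client's updated sub-matrix back into the global layer."""
    W, b = W.copy(), b.copy()
    W[np.ix_(keep_in, keep_out)] = W_sub
    b[keep_out] = b_sub
    return W, b

# Toy example: keep 6 of 8 input units and 2 of 4 output units.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
b = np.zeros(4)
keep_in = rng.choice(8, size=6, replace=False)
keep_out = rng.choice(4, size=2, replace=False)

W_sub, b_sub = extract_submodel(W, b, keep_in, keep_out)   # shape (6, 2)
# ... client trains on (W_sub, b_sub) locally ...
W_new, b_new = merge_submodel(W, b, W_sub, b_sub, keep_in, keep_out)
```

Because the sub-matrix size scales with the product of the kept fractions, dropping units in a fully-connected layer shrinks both the downloaded parameter count and the local multiply-adds, which is the source of the reductions in server-to-client communication and local computation claimed above.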

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Quantizing With Randomized Hadamard Transforms: Efficient Heuristic Now Proven

    cs.LG 2026-05 unverdicted novelty 7.0

    Two randomized Hadamard transforms suffice to make coordinate marginals O(d^{-1/2})-close to Gaussian for most quantization methods, with three needed for vector quantization to match uniform random rotations asymptotically. (A sketch of the randomized Hadamard transform follows this list.)

  2. Enhanced Privacy and Communication Efficiency in Non-IID Federated Learning with Adaptive Quantization and Differential Privacy

    cs.CV 2026-04 unverdicted novelty 5.0

    Adaptive bit-length schedulers plus Laplacian DP in non-IID FL reduce communicated data by up to 52.64% on MNIST and 45% on CIFAR-10 while keeping competitive accuracy and privacy.

  3. Representation-Aligned Multi-Scale Personalization for Federated Learning

    cs.LG 2026-04 unverdicted novelty 5.0

    FRAMP generates client-specific models from compact descriptors in federated learning, trains tailored submodels, and aligns representations to balance personalization with global consistency.
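The first cited paper analyzes the randomized Hadamard transform, a rotation commonly applied before coordinate-wise quantization in federated compression schemes of the kind the abstract's lossy server-to-client compression refers to. Below is a minimal NumPy/SciPy sketch of one such transform followed by simple uniform quantization; the helper names and the 4-bit quantizer are illustrative assumptions, not taken from either paper.

```python
import numpy as np
from scipy.linalg import hadamard

def randomized_hadamard(x, rng):
    """One randomized Hadamard transform: x -> (1/sqrt(d)) * H @ (s * x),
    where s is a random +/-1 sign vector and H is the d x d Hadamard matrix.
    Requires d to be a power of two (pad with zeros otherwise)."""
    d = x.shape[0]
    s = rng.choice([-1.0, 1.0], size=d)
    H = hadamard(d).astype(float)
    return (H @ (s * x)) / np.sqrt(d), s

def inverse_randomized_hadamard(y, s):
    """Invert the transform; the normalized Hadamard matrix is its own inverse."""
    d = y.shape[0]
    H = hadamard(d).astype(float)
    return s * ((H @ y) / np.sqrt(d))

rng = np.random.default_rng(0)
x = rng.normal(size=16)

# Rotate, quantize each coordinate to 4 bits uniformly, then rotate back.
y, s = randomized_hadamard(x, rng)
lo, hi = y.min(), y.max()
levels = 2 ** 4 - 1
q = np.round((y - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo
x_hat = inverse_randomized_hadamard(q, s)
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # small relative error
```

The rotation spreads the vector's energy evenly across coordinates, which is why uniform per-coordinate quantization after the transform loses much less information than quantizing the raw vector; the cited result above quantifies how many such rotations are needed for the coordinates to look approximately Gaussian.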