pith. machine review for the scientific record.

arxiv: 1712.07557 · v2 · submitted 2017-12-20 · 💻 cs.CR · cs.LG · stat.ML

Recognition: unknown

Differentially Private Federated Learning: A Client Level Perspective

Authors on Pith no claims yet
classification 💻 cs.CR · cs.LG · stat.ML
keywords model · clients · federated · privacy · client · differential · during · data
original abstract

Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client's contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance. Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.
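The protocol described in the abstract can be sketched as one round of client-level differentially private federated averaging: the curator clips each client's whole model update to bound any single client's influence, averages, and adds Gaussian noise calibrated to that bound. This is a minimal illustrative sketch, not the paper's exact algorithm; the function names, the clipping bound `S`, and the `sigma` parameterization are assumptions for illustration.

```python
import numpy as np

def clip_update(update, S):
    """Scale a client's update so its L2 norm is at most S."""
    norm = np.linalg.norm(update)
    return update * min(1.0, S / norm) if norm > 0 else update

def dp_federated_round(client_updates, S, sigma, rng=None):
    """One round of client-level DP federated averaging:
    clip each client's update, average the clipped updates,
    then add Gaussian noise scaled to the per-client
    sensitivity S / m of the average."""
    rng = rng or np.random.default_rng(0)
    m = len(client_updates)
    clipped = [clip_update(u, S) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    # Replacing one client changes the average by at most S / m in L2 norm,
    # so the noise standard deviation is sigma * S / m.
    noise = rng.normal(0.0, sigma * S / m, size=avg.shape)
    return avg + noise
```

The abstract's observation that privacy comes "at only a minor cost" given many clients is visible here: the noise standard deviation shrinks as 1/m while the clipped average stays informative.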

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 9 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Taming Noise-Induced Prototype Degradation for Privacy-Preserving Personalized Federated Fine-Tuning

    cs.CV 2026-04 unverdicted novelty 7.0

    VPDR improves the privacy-utility trade-off in ProtoPFL by allocating less noise to high-variance discriminative prototype dimensions via VPP and using DCR to keep feature norms near the clipping threshold without har...

  2. DP-FedAdamW: An Efficient Optimizer for Differentially Private Federated Large Models

    cs.LG 2026-02 unverdicted novelty 7.0

    DP-FedAdamW delivers an unbiased second-moment estimator for AdamW in DPFL, proving linear convergence acceleration without heterogeneity assumptions and outperforming SOTA by 5.83% on Tiny-ImageNet with Swin-Base at ε=1.

  3. Federated Cross-Modal Retrieval with Missing Modalities via Semantic Routing and Adapter Personalization

    cs.CV 2026-04 unverdicted novelty 6.0

    RCSR is a personalization-friendly federated framework that improves cross-modal retrieval accuracy and stability under missing modalities via semantic routing and adapters.

  4. Practical Quantum Federated Learning for Privacy-Sensitive Healthcare: Communication Efficiency and Noise Resilience

    quant-ph 2026-03 unverdicted novelty 6.0

    Hybrid QFL cuts quantum transmissions from 3TNMP to {3t + 2(T-t)}NMP over T rounds while preserving near-centralized convergence and improving depolarizing-noise resilience via decentralized aggregation and Steane-code QEC.

  5. DP-LAC: Lightweight Adaptive Clipping for Differentially Private Federated Fine-tuning of Language Models

    cs.LG 2026-05 unverdicted novelty 5.0

    DP-LAC provides a new adaptive clipping technique for DP-SGD in federated LLM fine-tuning that improves accuracy by 6.6% on average without consuming additional privacy budget or requiring new hyperparameters.

  6. Enhanced Privacy and Communication Efficiency in Non-IID Federated Learning with Adaptive Quantization and Differential Privacy

    cs.CV 2026-04 unverdicted novelty 5.0

    Adaptive bit-length schedulers plus Laplacian DP in non-IID FL reduce communicated data by up to 52.64% on MNIST and 45% on CIFAR-10 while keeping competitive accuracy and privacy.

  7. DDP-SA: Scalable Privacy-Preserving Federated Learning via Distributed Differential Privacy and Secure Aggregation

    cs.CR 2026-04 unverdicted novelty 5.0

    DDP-SA combines client-side Laplace noise perturbation with full-threshold additive secret sharing to let federated learning servers reconstruct only aggregated noisy gradients without exposing individual client updates.

  8. FedSpy-LLM: Towards Scalable and Generalizable Data Reconstruction Attacks from Gradients on LLMs

    cs.CR 2026-04 unverdicted novelty 5.0

    FedSpy-LLM uses gradient decomposition and iterative alignment to reconstruct larger batches and longer sequences of training data from LLM gradients in federated settings, including with PEFT methods.

  9. Compliance Management for Federated Data Processing

    cs.SE 2026-02 unverdicted novelty 4.0

    A prototype framework collects legal requirements and translates them into machine-actionable policies for federated data processing networks via policy-as-code and LLMs.
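Item 7's pairing of client-side noise with full-threshold additive secret sharing rests on a standard primitive: each client splits its (already noise-perturbed) value into shares that individually look random, and the server can reconstruct only the aggregate, never an individual update. A minimal integer sketch of that primitive, assuming a fixed public modulus; all names here are illustrative and DDP-SA's actual protocol details are not given in the summary above.

```python
import random

MODULUS = 2**31 - 1  # public modulus shared by all parties (illustrative)

def share(value, n, modulus=MODULUS):
    """Split an integer into n additive shares summing to value mod modulus.
    Any n-1 shares are uniformly random and reveal nothing about value."""
    shares = [random.randrange(modulus) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def reconstruct(shares, modulus=MODULUS):
    """Recover the secret: requires all shares (full threshold)."""
    return sum(shares) % modulus

def aggregate(client_values, modulus=MODULUS):
    """Each client shares its noisy value among all clients; the server
    sums per-slot share totals and recovers only the overall sum."""
    n = len(client_values)
    all_shares = [share(v, n, modulus) for v in client_values]
    # Slot j holds the j-th share from every client; the server sees
    # only these slot totals, each of which is uniformly random alone.
    slot_sums = [sum(s[j] for s in all_shares) % modulus for j in range(n)]
    return sum(slot_sums) % modulus  # = sum of client values mod modulus
```

Because every share is needed to reconstruct, a curious server holding any proper subset of shares learns nothing about an individual client's value, matching the summary's claim that only aggregated noisy gradients are exposed.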