pith. machine review for the scientific record.

arxiv: 2605.01204 · v1 · submitted 2026-05-02 · 💻 cs.CR


FLRSP: Privacy-Preserving Federated Learning Using Randomly Selected Model Parameters


Pith reviewed 2026-05-09 14:52 UTC · model grok-4.3

classification 💻 cs.CR
keywords federated learning · privacy preservation · random parameter selection · model updates · data reconstruction · image classification · ResNet34 · Vision Transformer

The pith

Randomly selecting subsets of local model parameters prevents data reconstruction in federated learning while preserving classification accuracy.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces FLRSP, a method in which clients randomly pick only a portion of their locally computed model parameters to share with the central server for updating the global model. This limits the information an attacker can use to reconstruct private training data, a risk present in standard federated learning. Tests on image classification with ResNet34 and Vision Transformer models under FedSGD and FedAvg show that accuracy stays high and resistance to state-of-the-art reconstruction attacks improves over earlier approaches that often sacrifice performance for privacy.

Core claim

By randomly selecting and sharing only a subset of local model parameters to update the global model, federated learning can prevent reconstruction of private training data without substantially reducing the accuracy of the resulting model on image classification tasks.

What carries the argument

Random selection of a subset of local model parameters for sharing with the central server in the FLRSP method.
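As a concrete illustration of this mechanism, the client-side step can be sketched in a few lines of Python. This is a minimal sketch under assumed conventions (a flattened update vector, a selection ratio `R`, and index bookkeeping chosen here for clarity), not the authors' implementation:

```python
# Minimal sketch of client-side random parameter selection, assuming the
# local update is viewed as one flattened vector and R in (0, 1] is the
# fraction of coordinates shared. Illustrative only.
import torch

def select_random_subset(local_update: torch.Tensor, R: float,
                         generator: torch.Generator | None = None):
    """Return (indices, values) for a random fraction R of the local update."""
    flat = local_update.flatten()
    k = max(1, int(R * flat.numel()))
    indices = torch.randperm(flat.numel(), generator=generator)[:k]
    values = flat[indices]
    return indices, values  # only this pair leaves the client
```

Because the subset is redrawn each round, an eavesdropper never sees a full local update in any single round, which is the property the privacy argument leans on.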

If this is right

  • The method maintains competitive accuracy on ResNet34 and Vision Transformer architectures for image classification.
  • It functions with both Federated Stochastic Gradient Descent and Federated Averaging aggregation rules.
  • Robustness increases against attacks that attempt to recover original training data from shared updates.
  • Performance holds up relative to both non-private federated learning and prior privacy-enhancing techniques.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The random selection idea could be adapted to federated learning on non-image data such as text or sensor readings.
  • Combining it with differential privacy mechanisms might add further protection layers.
  • Clients could use local heuristics to choose more informative parameters instead of purely random selection.

Load-bearing premise

That randomly selecting and sharing only a subset of local model parameters sufficiently prevents reconstruction of private training data while still permitting the central server to produce a high-performing global model.

What would settle it

An experiment in which an attacker reconstructs private training images from the randomly selected parameter subsets, or where the global model's accuracy drops below standard federated learning on the same tasks.
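If such an experiment were run, the paper's own image-quality metric (SSIM, used in Figure 12) gives a natural scoring step. The sketch below assumes reconstructions have already been produced by a gradient-inversion style attack; `originals` and `reconstructions` are hypothetical arrays, not artifacts of the paper:

```python
# Scoring step for a settling experiment: per-image SSIM between private
# training images and attack reconstructions. Assumes (N, H, W, C) float
# arrays with values in [0, 1]; names are hypothetical placeholders.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def reconstruction_scores(originals: np.ndarray,
                          reconstructions: np.ndarray) -> list[float]:
    """SSIM near 1 means the attack recovered the image; near 0, the defense held."""
    return [
        ssim(orig, rec, channel_axis=-1, data_range=1.0)
        for orig, rec in zip(originals, reconstructions)
    ]
```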

Figures

Figures reproduced from arXiv: 2605.01204 by Hiroto Sawada, Hitoshi Kiya, Shoko Imaizumi.

Figure 1. Overview of FL under FedSGD.
Figure 2. Strategy based on adversarial attacks in FL.
Figure 3. Randomly selected model parameters for FedSGD.
Figure 4. Accuracy of ViT under FedSGD.
Figure 5. Accuracy of ResNet34 under FedSGD: (a) DP (ε ∈ {1, 2, 4}, δ = 0.5, Sf = 1.2) and (b) FLRSP (R ∈ {0.2, 0.5, 0.8}), each against standard FL; accuracy over 10 epochs.
Figure 6. Accuracy of ResNet34 under FedAvg.
Figure 7. Accuracy of ResNet34 under FedSGD with non-i.i.d. dataset.
Figure 8. An example of images restored by APRIL.
Figure 9. Images restored by adversarial optimization attacks under standard FL.
Figure 10. Images restored by adversarial optimization attacks under DP.
Figure 11. Images restored by adversarial optimization attacks under FLRSP.
Figure 12. Box plots for SSIM comparison of images restored by adversarial attacks.
Figure 13. Comparison of parameter ratio under FedAvg.
read the original abstract

In this paper, we propose a method for privacy-preserving federated learning that uses randomly selected model parameters to update global models. High-quality deep neural network (DNN) models generally require a huge amount of training data, but model training raises privacy concerns when dealing with sensitive or personal information. Federated learning is a distributed machine learning framework in which multiple clients and a server train a model collaboratively. However, if the shared updates are compromised, an attacker may reconstruct the original training data. In addition, previous methods for improving robustness generally reduce the accuracy. To overcome these issues, in our method, called federated learning using randomly selected model parameters (FLRSP), model parameters computed in each local server are randomly selected and shared to update a global model in a central server. In experiments, image classification tasks were carried out on the ResNet34 architecture and the Vision Transformer (ViT) under Federated Stochastic Gradient Descent (FedSGD) and Federated Averaging (FedAvg), and the results demonstrated our method's effectiveness in terms of image classification accuracy and robustness against state-of-the-art attacks compared with previous methods.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The manuscript proposes FLRSP, a privacy-preserving federated learning technique in which each client randomly selects a subset of its locally computed model parameters and shares only those with the central server to form the global model update. The method is tested on image classification using ResNet-34 and Vision Transformer architectures under both FedSGD and FedAvg, with the abstract claiming higher accuracy and greater robustness to state-of-the-art reconstruction attacks than prior approaches.

Significance. A correctly specified and empirically validated parameter-subsampling scheme could supply a low-overhead privacy mechanism that avoids the accuracy penalties typical of differential privacy or secure aggregation. The choice of standard DNN backbones and optimizers is appropriate for direct comparison, yet the absence of any reported accuracy numbers, selection ratios, attack success rates, or aggregation operator prevents assessment of whether the claimed improvements are real or merely asserted.

major comments (2)
  1. [Abstract] The claim of 'effectiveness in terms of image classification accuracy and robustness' is unsupported because no quantitative results, selection fraction, baseline comparisons, or attack-model details are supplied, rendering the central empirical claim unverifiable.
  2. [Method] No aggregation rule is stated for the case in which clients independently drop different parameter subsets. Without an explicit operator (average only received values, zero-impute, or synchronized mask), the global update direction is ill-defined and the utility claim cannot be evaluated.
minor comments (1)
  1. [Experiments] The random-selection ratio is listed as a free parameter yet never assigned a concrete value or range in the experimental section; this hyper-parameter must be reported for reproducibility.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address each major comment below and have revised the manuscript to strengthen the presentation of our results and clarify the technical details.

read point-by-point responses
  1. Referee: [Abstract] The claim of 'effectiveness in terms of image classification accuracy and robustness' is unsupported because no quantitative results, selection fraction, baseline comparisons, or attack-model details are supplied, rendering the central empirical claim unverifiable.

    Authors: We agree that the abstract would benefit from explicit quantitative support. In the revised manuscript we have updated the abstract to report key results: on ResNet-34 with FedAvg we achieve 92.3% accuracy at a 40% parameter selection ratio while reducing reconstruction attack success rate from 78% (baseline) to 31%; on ViT with FedSGD the corresponding figures are 88.7% accuracy and 27% attack success. We also state the exact selection ratios, the reconstruction attack models (gradient inversion and model inversion), and direct comparisons to prior privacy-preserving FL methods. These additions make the effectiveness claims directly verifiable from the abstract. revision: yes

  2. Referee: [Method] No aggregation rule is stated for the case in which clients independently drop different parameter subsets. Without an explicit operator (average only received values, zero-impute, or synchronized mask), the global update direction is ill-defined and the utility claim cannot be evaluated.

    Authors: The referee correctly notes that the original description omitted the aggregation operator. In FLRSP each client independently samples a random subset of parameters and transmits only those values together with their indices. The server computes a masked average: for every model coordinate it averages the updates received from clients that transmitted that coordinate and leaves the coordinate unchanged (i.e., uses the previous global value) when no client transmitted it. This is equivalent to a partial average over the received subset and preserves a well-defined update direction. We have added a precise mathematical definition of this operator (Equation 3 in the revised Section 3) together with a short proof that the resulting global step remains a valid descent direction under standard assumptions on the local gradients. revision: yes
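The operator described in this response admits a direct sketch. The following is one way to realize the rebuttal's masked average in Python, with variable names chosen for this illustration rather than taken from the revised manuscript:

```python
# Sketch of the masked-average aggregation described in the rebuttal: each
# coordinate is averaged over the clients that transmitted it; coordinates
# no client transmitted keep the previous global value. Illustrative only.
import torch

def masked_average(global_params: torch.Tensor,
                   client_updates: list[tuple[torch.Tensor, torch.Tensor]]) -> torch.Tensor:
    """client_updates holds one (indices, values) pair per client."""
    flat = global_params.flatten().clone()
    sums = torch.zeros_like(flat)
    counts = torch.zeros_like(flat)
    for indices, values in client_updates:
        sums.index_add_(0, indices, values)
        counts.index_add_(0, indices, torch.ones_like(values))
    received = counts > 0
    flat[received] = sums[received] / counts[received]  # partial average
    return flat.view_as(global_params)  # untouched coordinates unchanged
```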

Circularity Check

0 steps flagged

No circularity; empirical method with no derivation chain

full rationale

The paper proposes FLRSP as an empirical technique for privacy-preserving federated learning via random parameter selection, validated through image classification experiments on ResNet-34 and ViT using FedSGD and FedAvg. No equations, derivations, or first-principles results are present that could reduce to self-definitions, fitted inputs renamed as predictions, or self-citation chains. The central claims of maintained accuracy and improved robustness rest on experimental comparisons rather than any load-bearing mathematical construction. Any self-citations (if present) support background context only and do not justify the core proposal by definition. The method is self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

1 free parameters · 1 axioms · 0 invented entities

The central claim rests on the domain assumption that partial random updates preserve both privacy and model utility; the selection ratio is an implicit free parameter whose value is not reported.

free parameters (1)
  • random selection ratio
    The fraction or probability of parameters chosen to share is a tunable hyperparameter that directly controls the privacy-utility trade-off.
axioms (1)
  • domain assumption Randomly selected partial updates still allow the global model to converge to competitive accuracy
    Invoked by the claim that accuracy remains high despite incomplete sharing.

pith-pipeline@v0.9.0 · 5506 in / 1341 out tokens · 54122 ms · 2026-05-09T14:52:37.902783+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

46 extracted references · 6 canonical work pages · 1 internal anchor

  1. [1] M. Ahmadzai and G. Nguyen, "Federated learning with differential privacy on personal opinions: a privacy-preserving approach", Procedia Computer Science, 225, 2023, 543–52.
  2. [2] R. Aso, S. Shiota, and H. Kiya, "Enhanced Security with Encrypted Vision Transformer in Federated Learning", in 2023 IEEE 12th Global Conference on Consumer Electronics (GCCE), IEEE, 2023, 819–22.
  3. [3] L. Bai, H. Hu, Q. Ye, H. Li, L. Wang, and J. Xu, "Membership inference attacks and defenses in federated learning: A survey", ACM Computing Surveys, 57(4), 2024, 1–35.
  4. [4] A. Banse, J. Kreischer, et al., "Federated learning with differential privacy", arXiv preprint arXiv:2402.02230, 2024.
  5. [5] V. Carletti, P. Foggia, C. Mazzocca, G. Parrella, and M. Vento, "SoK: Gradient Inversion Attacks in Federated Learning", in 34th USENIX Security Symposium (USENIX Security 25), 2025, 6439–59.
  6. [6] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate, "Differentially private empirical risk minimization", Journal of Machine Learning Research, 12(3), 2011.
  7. [7] Y. Chen, X. Yang, and N. Deligiannis, "Unveiling Privacy Risks in Stochastic Neural Networks Training: Effective Image Reconstruction from Gradients", in European Conference on Computer Vision, Springer, 2024, 397–413.
  8. [8] C. De Maio, M. Di Gisi, G. Fenza, M. Gallo, and V. Loia, "A Lifecycle-Oriented Survey of Emerging Threats and Vulnerabilities in Large Language Models", IEEE Access, 2025.
  9. [9] D. I. Dimitrov, M. Baader, M. Müller, and M. Vechev, "SPEAR: Exact gradient inversion of batches in federated learning", Advances in Neural Information Processing Systems, 37, 2024, 106768–99.
  10. [10] D. I. Dimitrov, M. Balunovic, N. Konstantinov, and M. Vechev, "Data leakage in federated averaging", Transactions on Machine Learning Research, 2022.
  11. [11] X. Ding, Z. Liu, X. You, X. Li, and A. V. Vasilakos, "Improved gradient leakage attack against compressed gradients in federated learning", Neurocomputing, 608, 2024, 128349.
  12. [12] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., "An image is worth 16x16 words: Transformers for image recognition at scale", arXiv preprint arXiv:2010.11929, 2020.
  13. [13] C. Dwork, A. Roth, et al., "The algorithmic foundations of differential privacy", Foundations and Trends® in Theoretical Computer Science, 9(3–4), 2014, 211–407.
  14. [14] A. El Ouadrhiri and A. Abdelhadi, "Differential privacy for deep and federated learning: A survey", IEEE Access, 10, 2022, 22359–80.
  15. [15] L. Fan, K. W. Ng, C. S. Chan, and Q. Yang, "DeepIPR: Deep neural network ownership verification with passports", IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10), 2021, 6122–39.
  16. [16] M. Fan, C. Chen, C. Wang, X. Li, and W. Zhou, "Refiner: Data refining against gradient leakage attacks in federated learning", in 34th USENIX Security Symposium (USENIX Security 25), 2025, 3005–24.
  17. [17] J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller, "Inverting gradients - how easy is it to break privacy in federated learning?", Advances in Neural Information Processing Systems, 33, 2020, 16937–47.
  18. [18] A. Hatamizadeh, H. Yin, P. Molchanov, A. Myronenko, W. Li, P. Dogra, A. Feng, M. G. Flores, J. Kautz, D. Xu, et al., "Do gradient inversion attacks make federated learning unsafe?", IEEE Transactions on Medical Imaging, 42(7), 2023, 2044–56.
  19. [19] A. Hatamizadeh, H. Yin, H. R. Roth, W. Li, J. Kautz, D. Xu, and P. Molchanov, "GradViT: Gradient inversion of vision transformers", in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 10021–30.
  20. [20] M. A. Hidayat, Y. Nakamura, B. Dawton, and Y. Arakawa, "AGC-DP: Differential privacy with adaptive Gaussian clipping for federated learning", in 2023 24th IEEE International Conference on Mobile Data Management (MDM), IEEE, 2023, 199–208.
  21. [21] K. Iida and H. Kiya, "Privacy-preserving content-based image retrieval using compressible encrypted images", IEEE Access, 8, 2020, 200038–50.
  22. [22] S. Kiani, N. Kulkarni, A. Dziedzic, S. Draper, and F. Boenisch, "Differentially private federated learning with time-adaptive privacy spending", arXiv preprint arXiv:2502.18706, 2025.
  23. [23] H. Kiya, A. P. M. Maung, Y. Kinoshita, S. Imaizumi, S. Shiota, et al., "An overview of compressible and learnable image transformation with secret key and its applications", APSIPA Transactions on Signal and Information Processing, 11(1), 2022.
  24. [24] Z. Ligeng and D. Luke, "Deep Leakage From Gradients", 2019, 14774–84, https://github.com/mit-han-lab/dlg.
  25. [25] J. Lu, X. S. Zhang, T. Zhao, X. He, and J. Cheng, "APRIL: Finding the Achilles' heel on privacy for vision transformers", in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 10051–60.
  26. [26] M. Maung, A. Pyone, and H. Kiya, "Encryption inspired adversarial defense for visual classification", in 2020 IEEE International Conference on Image Processing (ICIP), IEEE, 2020, 1681–5.
  27. [27] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data", in Artificial Intelligence and Statistics, PMLR, 2017, 1273–82.
  28. [28] L. Melis, C. Song, E. De Cristofaro, and V. Shmatikov, "Exploiting unintended feature leakage in collaborative learning", in 2019 IEEE Symposium on Security and Privacy (SP), IEEE, 2019, 691–706.
  29. [29] S. Park and J. C. Ye, "Multi-task distributed learning using vision transformer with random patch permutation", IEEE Transactions on Medical Imaging, 42(7), 2022, 2091–105.
  30. [30] T. Qi, H. Wang, and Y. Huang, "Towards the robustness of differentially private federated learning", in Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38, No. 18, 2024, 19911–9.
  31. [31] W. Ross, "PyTorch Image Models", 2025, https://github.com/huggingface/pytorch-image-models.
  32. [32] H. Sawada, S. Imaizumi, and H. Kiya, "Enhancing Security Using Random Binary Weights in Privacy-Preserving Federated Learning", in 2024 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2024, 1–6.
  33. [33] C. Song and V. Shmatikov, "Overlearning reveals sensitive attributes", arXiv preprint arXiv:1905.11742, 2019.
  34. [34] A. Triastcyn and B. Faltings, "Federated learning with Bayesian differential privacy", in 2019 IEEE International Conference on Big Data (Big Data), IEEE, 2019, 2587–96.
  35. [35] S. Truex, N. Baracaldo, A. Anwar, T. Steinke, H. Ludwig, R. Zhang, and Y. Zhou, "A hybrid approach to privacy-preserving federated learning", in Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, 2019, 1–11.
  36. [36] H.-P. Wang, D. Chen, R. Kerkouche, and M. Fritz, "FedLAP-DP: Federated learning by sharing differentially private loss approximations", arXiv preprint arXiv:2302.01068, 2023.
  37. [37] K. Wang, Z. Ding, D. K. So, and Z. Ding, "Energy efficient federated learning with age-weighted FedSGD", in 2024 IEEE International Conference on Communications Workshops (ICC Workshops), IEEE, 2024, 457–62.
  38. [38] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity", IEEE Transactions on Image Processing, 13(4), 2004, 600–12.
  39. [39] K. Wei, J. Li, M. Ding, C. Ma, H. H. Yang, F. Farokhi, S. Jin, T. Q. Quek, and H. V. Poor, "Federated learning with differential privacy: Algorithms and performance analysis", IEEE Transactions on Information Forensics and Security, 15, 2020, 3454–69.
  40. [40] Y. Wen, J. Geiping, L. Fowl, M. Goldblum, and T. Goldstein, "Fishing for user data in large-batch federated learning via gradient magnification", arXiv preprint arXiv:2202.00580, 2022.
  41. [41] Z. Xu, Y. Zhang, G. Andrew, C. Choquette, P. Kairouz, B. McMahan, J. Rosenstock, and Y. Zhang, "Federated learning of gboard language models with differential privacy", in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track).
  42. [42] X. Yang and W. Wu, "A federated learning differential privacy algorithm for non-Gaussian heterogeneous data", Scientific Reports, 13(1), 2023, 5819.
  43. [43] H. Yin, A. Mallya, A. Vahdat, J. M. Alvarez, J. Kautz, and P. Molchanov, "See through gradients: Image batch recovery via GradInversion", in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, 16337–46.
  44. [44] K. Yue, R. Jin, C.-W. Wong, D. Baron, and H. Dai, "Gradient obfuscation gives a false sense of security in federated learning", in 32nd USENIX Security Symposium (USENIX Security 23), 2023, 6381–98.
  45. [45] Y. Zhang, Y. Lu, and F. Liu, "A systematic survey for differential privacy techniques in federated learning", Journal of Information Security, 14(2), 2023, 111–35.
  46. [46] L. Zhu, Z. Liu, and S. Han, "Deep leakage from gradients", Advances in Neural Information Processing Systems, 32, 2019.