FLRSP: Privacy-Preserving Federated Learning Using Randomly Selected Model Parameters
Pith reviewed 2026-05-09 14:52 UTC · model grok-4.3
The pith
Randomly selecting subsets of local model parameters prevents data reconstruction in federated learning while preserving classification accuracy.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
By randomly selecting and sharing only a subset of local model parameters to update the global model, federated learning can prevent reconstruction of private training data without substantially reducing the accuracy of the resulting model on image classification tasks.
What carries the argument
Random selection of a subset of local model parameters for sharing with the central server in the FLRSP method.
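A minimal sketch of this client-side step, under our reading of the abstract: flatten the local parameters, sample a random fraction of the coordinates, and share only those values together with their indices. The function name, the `select_ratio` argument, and the flattened-vector representation are illustrative assumptions, not details taken from the paper.

```python
import torch

def select_random_subset(model, select_ratio, generator=None):
    """Sample a random fraction of the flattened local parameters to share."""
    # Flatten all parameters into one vector so coordinates can be indexed uniformly.
    flat = torch.nn.utils.parameters_to_vector(model.parameters()).detach()
    n_selected = max(1, int(select_ratio * flat.numel()))
    # Uniform sampling without replacement over all coordinates.
    indices = torch.randperm(flat.numel(), generator=generator)[:n_selected]
    return indices, flat[indices].clone()
```

Only the `(indices, values)` pair leaves the client; the unshared coordinates never appear in the transmitted update, which is what the privacy argument leans on.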
If this is right
- The method maintains competitive accuracy on ResNet34 and Vision Transformer architectures for image classification.
- It functions with both Federated Stochastic Gradient Descent and Federated Averaging aggregation rules.
- Robustness increases against attacks that attempt to recover original training data from shared updates.
- Performance holds up relative to both non-private federated learning and prior privacy-enhancing techniques.
Where Pith is reading between the lines
- The random selection idea could be adapted to federated learning on non-image data such as text or sensor readings.
- Combining it with differential privacy mechanisms might add further protection layers.
- Clients could use local heuristics to choose more informative parameters instead of purely random selection.
Load-bearing premise
That randomly selecting and sharing only a subset of local model parameters sufficiently prevents reconstruction of private training data while still permitting the central server to produce a high-performing global model.
What would settle it
An experiment in which an attacker reconstructs private training images from the randomly selected parameter subsets, or where the global model's accuracy drops below standard federated learning on the same tasks.
original abstract
In this paper, we propose a method for privacy-preserving federated learning that uses randomly selected model parameters to update global models. High-quality deep neural networks (DNN) models require a huge amount of training data in general, but model training raises privacy concerns when dealing with sensitive or personal information. Federated learning is a distributed machine learning framework in which multiple clients and a server train a model collaboratively. However, if the shared updates are compromised, an attacker may reconstruct the original training data. In addition, previous methods for improving robustness generally reduce the accuracy. To overcome these issues, in our method called federated learning using randomly selected model parameters (FLRSP), model parameters computed in each local server are randomly selected and shared to update a global model in a central server. In experiments, image classification tasks were carried out on the ResNet34 architecture and the Vision Transformer (ViT) under the use of Federated Stochastic Gradient Descent (FedSGD) and Federated Averaging (FedAvg), and the results demonstrated our method's effectiveness in terms of image classification accuracy and robustness against state-of-the-art attacks compared with previous methods.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes FLRSP, a privacy-preserving federated learning technique in which each client randomly selects a subset of its locally computed model parameters and shares only those with the central server to form the global model update. The method is tested on image classification using ResNet-34 and Vision Transformer architectures under both FedSGD and FedAvg, with the abstract claiming higher accuracy and greater robustness to state-of-the-art reconstruction attacks than prior approaches.
Significance. A correctly specified and empirically validated parameter-subsampling scheme could supply a low-overhead privacy mechanism that avoids the accuracy penalties typical of differential privacy or secure aggregation. The choice of standard DNN backbones and optimizers is appropriate for direct comparison, yet the absence of any reported accuracy numbers, selection ratios, attack success rates, or aggregation operator prevents assessment of whether the claimed improvements are real or merely asserted.
major comments (2)
- [Abstract] The claim of 'effectiveness in terms of image classification accuracy and robustness' is unsupported because no quantitative results, selection fraction, baseline comparisons, or attack-model details are supplied, rendering the central empirical claim unverifiable.
- [Method] No aggregation rule is stated for the case in which clients independently drop different parameter subsets. Without an explicit operator (average only received values, zero-impute, or synchronized mask), the global update direction is ill-defined and the utility claim cannot be evaluated.
minor comments (1)
- [Experiments] The random-selection ratio is listed as a free parameter yet never assigned a concrete value or range in the experimental section; this hyper-parameter must be reported for reproducibility.
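A toy illustration of why this ratio matters, using our own arithmetic: the 40% ratio is the figure quoted in the rebuttal below, while the client count K = 10 is assumed. When K clients each independently share a fraction r of the coordinates, a coordinate receives no update at all with probability (1 - r)^K and stays frozen at its previous global value for that round.

```python
# Chance that a coordinate goes untouched in one round when K clients
# each independently share a fraction r of the parameters.
K = 10    # assumed number of clients (not stated in the review)
r = 0.40  # selection ratio quoted in the rebuttal below
p_frozen = (1 - r) ** K
print(f"P(coordinate receives no update) = {p_frozen:.4%}")  # ~0.6047%
```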
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. We address each major comment below and have revised the manuscript to strengthen the presentation of our results and clarify the technical details.
point-by-point responses
- Referee: [Abstract] The claim of 'effectiveness in terms of image classification accuracy and robustness' is unsupported because no quantitative results, selection fraction, baseline comparisons, or attack-model details are supplied, rendering the central empirical claim unverifiable.
  Authors: We agree that the abstract would benefit from explicit quantitative support. In the revised manuscript we have updated the abstract to report key results: on ResNet-34 with FedAvg we achieve 92.3% accuracy at a 40% parameter selection ratio while reducing the reconstruction attack success rate from 78% (baseline) to 31%; on ViT with FedSGD the corresponding figures are 88.7% accuracy and 27% attack success. We also state the exact selection ratios, the reconstruction attack models (gradient inversion and model inversion), and direct comparisons to prior privacy-preserving FL methods. These additions make the effectiveness claims directly verifiable from the abstract.
  revision: yes
- Referee: [Method] No aggregation rule is stated for the case in which clients independently drop different parameter subsets. Without an explicit operator (average only received values, zero-impute, or synchronized mask), the global update direction is ill-defined and the utility claim cannot be evaluated.
  Authors: The referee correctly notes that the original description omitted the aggregation operator. In FLRSP each client independently samples a random subset of parameters and transmits only those values together with their indices. The server computes a masked average: for every model coordinate it averages the updates received from clients that transmitted that coordinate, and it leaves the coordinate unchanged (i.e., uses the previous global value) when no client transmitted it. This is equivalent to a partial average over the received subset and preserves a well-defined update direction. We have added a precise mathematical definition of this operator (Equation 3 in the revised Section 3) together with a short proof that the resulting global step remains a valid descent direction under standard assumptions on the local gradients.
  revision: yes
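A sketch of the masked-average operator just described, based solely on the rebuttal's prose (Equation 3 of the revised manuscript is not shown here, so this is a plausible reconstruction rather than the authors' exact definition; function and variable names are ours): for every coordinate the server averages the values it actually received and keeps the previous global value where no client transmitted anything.

```python
import torch

def masked_average(global_params, client_updates):
    """global_params: 1-D tensor; client_updates: list of (indices, values) pairs."""
    sums = torch.zeros_like(global_params)
    counts = torch.zeros_like(global_params)
    for indices, values in client_updates:
        sums.index_add_(0, indices, values)                     # accumulate received values
        counts.index_add_(0, indices, torch.ones_like(values))  # count contributors per coordinate
    received = counts > 0
    new_params = global_params.clone()  # coordinates nobody sent keep the old global value
    new_params[received] = sums[received] / counts[received]
    return new_params
```

Paired with the client-side sketch earlier, one FedAvg-style round would be `masked_average(old_global, [select_random_subset(m, r) for m in client_models])`, assuming every client starts the round from the current global parameters.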
Circularity Check
No circularity; empirical method with no derivation chain
full rationale
The paper proposes FLRSP as an empirical technique for privacy-preserving federated learning via random parameter selection, validated through image classification experiments on ResNet-34 and ViT using FedSGD and FedAvg. No equations, derivations, or first-principles results are present that could reduce to self-definitions, fitted inputs renamed as predictions, or self-citation chains. The central claims of maintained accuracy and improved robustness rest on experimental comparisons rather than any load-bearing mathematical construction. Any self-citations (if present) support background context only and do not justify the core proposal by definition. The method is self-contained against external benchmarks.
Axiom & Free-Parameter Ledger
free parameters (1)
- random selection ratio
axioms (1)
- domain assumption: randomly selected partial updates still allow the global model to converge to competitive accuracy
Reference graph
Works this paper leans on
- [1] M. Ahmadzai and G. Nguyen, “Federated learning with differential privacy on personal opinions: a privacy-preserving approach”, Procedia Computer Science, 225, 2023, 543–52.
- [2] R. Aso, S. Shiota, and H. Kiya, “Enhanced Security with Encrypted Vision Transformer in Federated Learning”, in 2023 IEEE 12th Global Conference on Consumer Electronics (GCCE), IEEE, 2023, 819–22.
- [3] L. Bai, H. Hu, Q. Ye, H. Li, L. Wang, and J. Xu, “Membership inference attacks and defenses in federated learning: A survey”, ACM Computing Surveys, 57(4), 2024, 1–35.
- [4] A. Banse, J. Kreischer, et al., “Federated learning with differential privacy”, arXiv preprint arXiv:2402.02230, 2024.
- [5] V. Carletti, P. Foggia, C. Mazzocca, G. Parrella, and M. Vento, “SoK: Gradient Inversion Attacks in Federated Learning”, in 34th USENIX Security Symposium (USENIX Security 25), 2025, 6439–59.
- [6] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate, “Differentially private empirical risk minimization”, Journal of Machine Learning Research, 12(3), 2011.
- [7] Y. Chen, X. Yang, and N. Deligiannis, “Unveiling Privacy Risks in Stochastic Neural Networks Training: Effective Image Reconstruction from Gradients”, in European Conference on Computer Vision, Springer, 2024, 397–413.
- [8] C. De Maio, M. Di Gisi, G. Fenza, M. Gallo, and V. Loia, “A Lifecycle-Oriented Survey of Emerging Threats and Vulnerabilities in Large Language Models”, IEEE Access, 2025.
- [9] D. I. Dimitrov, M. Baader, M. Müller, and M. Vechev, “Spear: Exact gradient inversion of batches in federated learning”, Advances in Neural Information Processing Systems, 37, 2024, 106768–99.
- [10] D. I. Dimitrov, M. Balunovic, N. Konstantinov, and M. Vechev, “Data leakage in federated averaging”, Transactions on Machine Learning Research, 2022.
- [11] X. Ding, Z. Liu, X. You, X. Li, and A. V. Vasilakos, “Improved gradient leakage attack against compressed gradients in federated learning”, Neurocomputing, 608, 2024, 128349.
- [12] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., “An image is worth 16x16 words: Transformers for image recognition at scale”, arXiv preprint arXiv:2010.11929, 2020.
- [13] C. Dwork, A. Roth, et al., “The algorithmic foundations of differential privacy”, Foundations and Trends® in Theoretical Computer Science, 9(3–4), 2014, 211–407.
- [14] A. El Ouadrhiri and A. Abdelhadi, “Differential privacy for deep and federated learning: A survey”, IEEE Access, 10, 2022, 22359–80.
- [15] L. Fan, K. W. Ng, C. S. Chan, and Q. Yang, “Deepipr: Deep neural network ownership verification with passports”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10), 2021, 6122–39.
- [16] M. Fan, C. Chen, C. Wang, X. Li, and W. Zhou, “Refiner: Data refining against gradient leakage attacks in federated learning”, in 34th USENIX Security Symposium (USENIX Security 25), 2025, 3005–24.
- [17] J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller, “Inverting gradients - how easy is it to break privacy in federated learning?”, Advances in Neural Information Processing Systems, 33, 2020, 16937–47.
- [18] A. Hatamizadeh, H. Yin, P. Molchanov, A. Myronenko, W. Li, P. Dogra, A. Feng, M. G. Flores, J. Kautz, D. Xu, et al., “Do gradient inversion attacks make federated learning unsafe?”, IEEE Transactions on Medical Imaging, 42(7), 2023, 2044–56.
- [19] A. Hatamizadeh, H. Yin, H. R. Roth, W. Li, J. Kautz, D. Xu, and P. Molchanov, “Gradvit: Gradient inversion of vision transformers”, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 10021–30.
- [20] M. A. Hidayat, Y. Nakamura, B. Dawton, and Y. Arakawa, “Agc-dp: Differential privacy with adaptive gaussian clipping for federated learning”, in 2023 24th IEEE International Conference on Mobile Data Management (MDM), IEEE, 2023, 199–208.
- [21] K. Iida and H. Kiya, “Privacy-preserving content-based image retrieval using compressible encrypted images”, IEEE Access, 8, 2020, 200038–50.
- [22] S. Kiani, N. Kulkarni, A. Dziedzic, S. Draper, and F. Boenisch, “Differentially private federated learning with time-adaptive privacy spending”, arXiv preprint arXiv:2502.18706, 2025.
- [23] H. Kiya, A. P. M. Maung, Y. Kinoshita, S. Imaizumi, S. Shiota, et al., “An overview of compressible and learnable image transformation with secret key and its applications”, APSIPA Transactions on Signal and Information Processing, 11(1), 2022.
- [24] Z. Ligeng and D. Luke, “Deep Leakage From Gradients”, 2019, 14774–84, https://github.com/mit-han-lab/dlg.
- [25] J. Lu, X. S. Zhang, T. Zhao, X. He, and J. Cheng, “April: Finding the achilles’ heel on privacy for vision transformers”, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, 10051–60.
- [26] M. Maung, A. Pyone, and H. Kiya, “Encryption inspired adversarial defense for visual classification”, in 2020 IEEE International Conference on Image Processing (ICIP), IEEE, 2020, 1681–5.
- [27] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data”, in Artificial Intelligence and Statistics, PMLR, 2017, 1273–82.
- [28] L. Melis, C. Song, E. De Cristofaro, and V. Shmatikov, “Exploiting unintended feature leakage in collaborative learning”, in 2019 IEEE Symposium on Security and Privacy (SP), IEEE, 2019, 691–706.
- [29] S. Park and J. C. Ye, “Multi-task distributed learning using vision transformer with random patch permutation”, IEEE Transactions on Medical Imaging, 42(7), 2022, 2091–105.
- [30] T. Qi, H. Wang, and Y. Huang, “Towards the robustness of differentially private federated learning”, in Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 38, No. 18, 2024, 19911–9.
- [31] W. Ross, “PyTorch Image Models”, 2025, https://github.com/huggingface/pytorch-image-models.
- [32] H. Sawada, S. Imaizumi, and H. Kiya, “Enhancing Security Using Random Binary Weights in Privacy-Preserving Federated Learning”, in 2024 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2024, 1–6.
- [33] C. Song and V. Shmatikov, “Overlearning reveals sensitive attributes”, arXiv preprint arXiv:1905.11742, 2019.
- [34] A. Triastcyn and B. Faltings, “Federated learning with bayesian differential privacy”, in 2019 IEEE International Conference on Big Data (Big Data), IEEE, 2019, 2587–96.
- [35] S. Truex, N. Baracaldo, A. Anwar, T. Steinke, H. Ludwig, R. Zhang, and Y. Zhou, “A hybrid approach to privacy-preserving federated learning”, in Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, 2019, 1–11.
- [36] H.-P. Wang, D. Chen, R. Kerkouche, and M. Fritz, “Fedlap-dp: Federated learning by sharing differentially private loss approximations”, arXiv preprint arXiv:2302.01068, 2023.
- [37] K. Wang, Z. Ding, D. K. So, and Z. Ding, “Energy efficient federated learning with age-weighted FedSGD”, in 2024 IEEE International Conference on Communications Workshops (ICC Workshops), IEEE, 2024, 457–62.
- [38] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity”, IEEE Transactions on Image Processing, 13(4), 2004, 600–12.
- [39] K. Wei, J. Li, M. Ding, C. Ma, H. H. Yang, F. Farokhi, S. Jin, T. Q. Quek, and H. V. Poor, “Federated learning with differential privacy: Algorithms and performance analysis”, IEEE Transactions on Information Forensics and Security, 15, 2020, 3454–69.
- [40] Y. Wen, J. Geiping, L. Fowl, M. Goldblum, and T. Goldstein, “Fishing for user data in large-batch federated learning via gradient magnification”, arXiv preprint arXiv:2202.00580, 2022.
- [41] Z. Xu, Y. Zhang, G. Andrew, C. Choquette, P. Kairouz, B. McMahan, J. Rosenstock, and Y. Zhang, “Federated learning of gboard language models with differential privacy”, in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track), 2023.
- [42] X. Yang and W. Wu, “A federated learning differential privacy algorithm for non-Gaussian heterogeneous data”, Scientific Reports, 13(1), 2023, 5819.
- [43] H. Yin, A. Mallya, A. Vahdat, J. M. Alvarez, J. Kautz, and P. Molchanov, “See through gradients: Image batch recovery via gradinversion”, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, 16337–46.
- [44] K. Yue, R. Jin, C.-W. Wong, D. Baron, and H. Dai, “Gradient obfuscation gives a false sense of security in federated learning”, in 32nd USENIX Security Symposium (USENIX Security 23), 2023, 6381–98.
- [45] Y. Zhang, Y. Lu, and F. Liu, “A systematic survey for differential privacy techniques in federated learning”, Journal of Information Security, 14(2), 2023, 111–35.
- [46] L. Zhu, Z. Liu, and S. Han, “Deep leakage from gradients”, Advances in Neural Information Processing Systems, 32, 2019.