pith. machine review for the scientific record.

arxiv: 2602.00407 · v2 · submitted 2026-01-30 · 💻 cs.LG

Recognition: 1 theorem link · Lean Theorem

Fed-Listing: Federated Label Distribution Inference in Graph Neural Networks


Pith reviewed 2026-05-16 08:55 UTC · model grok-4.3

classification 💻 cs.LG
keywords federated learning · graph neural networks · privacy attacks · label distribution inference · gradient leakage · FedGNNs

The pith

Final-layer gradients in federated GNN training leak clients' private label distributions.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper introduces Fed-Listing, a method to infer the label distribution of a client's graph data in federated GNNs by examining only the final-layer gradients shared during training. The attack requires no access to raw data or node features and works by uncovering statistical patterns in these gradients. Experiments across four datasets and three architectures demonstrate that it outperforms random guessing and prior attacks like Decaf, even with non-i.i.d. data distributions. Standard defense mechanisms fail to mitigate the attack without significantly harming model utility.

Core claim

Fed-Listing shows that the final-layer gradients exchanged in federated graph neural network training contain sufficient information to accurately reconstruct the proportion of each label in a client's local dataset, enabling inference attacks that succeed across diverse graph datasets and model architectures under both uniform and non-uniform data partitions.

What carries the argument

The Fed-Listing attack, which extracts label-distribution statistics from aggregated final-layer gradients by pattern matching on gradient signals.
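The summary does not spell out the extraction procedure, but the signal it relies on is a standard property of softmax cross-entropy training: the final-layer bias gradient for a class equals the mean predicted probability for that class minus the fraction of local examples carrying that label. A minimal sketch of that mechanism on synthetic logits (this is the generic gradient-leakage identity, not the paper's Fed-Listing pipeline; the near-initialization approximation mean softmax ≈ 1/C is our assumption):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
N, C = 200, 4
labels = rng.choice(C, size=N, p=[0.5, 0.3, 0.15, 0.05])  # client's private labels
logits = rng.normal(scale=0.1, size=(N, C))               # near-initialization outputs

# Final-layer bias gradient of the mean cross-entropy loss:
#   g_c = mean(softmax_c) - n_c / N
probs = softmax(logits)
onehot = np.eye(C)[labels]
bias_grad = (probs - onehot).mean(axis=0)

# Invert the identity, assuming mean(softmax_c) ~ 1/C near initialization.
est = np.clip(1.0 / C - bias_grad, 0.0, None)
est /= est.sum()
true = np.bincount(labels, minlength=C) / N
```

Under these toy conditions `est` tracks the client's true label proportions closely; the paper's contribution is making this kind of recovery work across GNN architectures, training rounds, and non-i.i.d. partitions.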

If this is right

  • Label proportions can be inferred stealthily from shared model updates in FedGNNs.
  • Existing privacy defenses provide little protection against this form of leakage.
  • Model utility must be traded off substantially to block the attack.
  • The vulnerability holds in non-i.i.d. settings typical of real-world federated deployments.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Similar gradient-based inference may be possible in other federated learning settings beyond graphs.
  • Federated systems may need to incorporate label-specific noise addition to final layers.
  • Future work could explore whether earlier layers also leak label information in GNNs.

Load-bearing premise

The final-layer gradients preserve enough statistical information about local label counts to allow reliable inference even after aggregation and in non-uniform data settings.
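This premise has a concrete algebraic basis for softmax classifiers trained with cross-entropy. A hedged sketch of the standard identity (our notation, not quoted from the paper, which may use a different loss or formulation):

```latex
% Mean cross-entropy over a client's N labeled nodes; b_j is the
% final-layer bias for class j, p_{i,j} the softmax output for node i.
\frac{\partial \mathcal{L}}{\partial b_j}
  = \frac{1}{N}\sum_{i=1}^{N}\left(p_{i,j} - \mathbf{1}[y_i = j]\right)
  = \bar{p}_j - \frac{n_j}{N},
\qquad n_j = \left|\{\, i : y_i = j \,\}\right|.
```

The label proportion $n_j/N$ therefore survives in the bias gradient up to the model-confidence term $\bar{p}_j$, and aggregation over clients sums these per-client quantities; that is why the pre- versus post-aggregation question raised in the referee report is load-bearing.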

What would settle it

A demonstration that the final-layer gradients can be perturbed until label-distribution inference accuracy drops to chance level, while overall model accuracy stays intact.
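Figure 2 measures attack effectiveness with the JS-divergence between inferred and true label distributions, and "chance level" in the proposed test can be made precise the same way. A small self-contained sketch (the distribution values are illustrative, not taken from the paper):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log2(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

true_dist = [0.5, 0.3, 0.15, 0.05]            # illustrative client label distribution
perfect = js_divergence(true_dist, true_dist)  # 0.0: inference fully succeeds
chance = js_divergence(true_dist, [0.25] * 4)  # uniform-guess baseline
```

A defense "settles it" in this framing when the attacker's divergence from the truth rises to the uniform-guess baseline while test accuracy is unchanged.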

Figures

Figures reproduced from arXiv: 2602.00407 by Junggab Son, Suprim Nakarmi, Yue Zhao, Zuobin Xiong.

Figure 1: Overview of the proposed attack, Fed-Listing. All clients train an identical GNN (chosen from three variants) …

Figure 3: For example, in the Random distribution setting, we illustrate …

Figure 2: Model utility (accuracy) and attack effectiveness (JS-divergence) under three defense strategies: (a) gradient compression, …

Figure 3: Figure illustrating the impact of the attack when the number of shadow FL trainings is varied. Plots (a-d) show …

Figure 4: This figure illustrates the impact of the attack when the proportion of clients with a specific Partition setting is changed.
Original abstract

Federated Graph Neural Networks (FedGNNs) facilitate collaborative learning across multiple clients with graph-structured data while preserving user privacy. However, emerging research indicates that within this setting, shared model updates, particularly gradients, can unintentionally leak sensitive information of local users. Numerous privacy inference attacks have been explored in traditional federated learning and extended to graph settings, but the problem of label distribution inference in FedGNNs remains largely underexplored. In this work, we introduce Fed-Listing (Federated Label Distribution Inference in GNNs), a novel gradient-based attack designed to infer the private label statistics of target clients in FedGNNs without access to raw data or node features. Fed-Listing only leverages the final-layer gradients exchanged during training to uncover statistical patterns that reveal class proportions in a stealthy manner. Extensive experiments on four benchmark datasets and three GNN architectures show that Fed-Listing significantly outperforms existing baselines, including random guessing and Decaf, even under challenging non-i.i.d. scenarios. Moreover, existing defense mechanisms can barely reduce the attack performance of Fed-Listing, unless the model's utility is severely degraded. The code implementation and Supplementary materials are available here: https://github.com/suprimnakarmi/Fed-Listing.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript introduces Fed-Listing, a novel gradient-based attack to infer private label distributions of target clients in Federated Graph Neural Networks (FedGNNs). The attack relies solely on final-layer gradients exchanged during training, without access to raw data or node features, to uncover statistical patterns revealing class proportions. Experiments on four benchmark datasets and three GNN architectures claim that Fed-Listing significantly outperforms baselines including random guessing and Decaf, even under non-i.i.d. partitions, while existing defenses fail to mitigate the attack unless model utility is severely degraded. Code and supplementary materials are provided via GitHub.

Significance. If the central empirical claims hold after addressing the aggregation issue, the work highlights an important privacy leakage vector in FedGNNs that is distinct from prior label inference attacks in standard federated learning. The multi-dataset, multi-architecture evaluation and public code release strengthen reproducibility and could guide development of gradient-aggregation-aware defenses for graph-structured federated settings. The result is proportionate in scope to the underexplored problem of label-distribution inference under non-i.i.d. graph data.

major comments (2)
  1. [Abstract and §3] Abstract and §3 (Attack Design): The central claim requires that final-layer gradients retain per-client label signal after server aggregation under non-i.i.d. partitions. Standard federated protocols aggregate client gradients at the server before broadcasting updates, which would sum signals across clients and potentially erase per-client statistics. The abstract's reference to 'shared model updates' and 'final-layer gradients exchanged during training' does not clarify whether Fed-Listing is evaluated on pre-aggregation individual gradients or on the aggregated updates that clients actually receive. This distinction is load-bearing for the non-i.i.d. feasibility claim.
  2. [§5] §5 (Experiments): The description of non-i.i.d. data partitioning, exact generation process, and any data exclusion rules is insufficient to reproduce the reported results. In addition, performance tables lack error bars, standard deviations, or statistical significance tests for the claimed outperformance over Decaf and random guessing, undermining assessment of reliability across the four datasets and three architectures.
minor comments (2)
  1. [Abstract] The abstract would benefit from explicitly naming the four datasets and three GNN architectures to give readers immediate context for the scope of the evaluation.
  2. [§3] Notation for gradient vectors and label-distribution vectors should be introduced consistently in the attack formulation section to avoid ambiguity when describing the inference procedure.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We appreciate the referee's thorough review and valuable feedback on our manuscript. We have carefully considered the comments and provide point-by-point responses below. We will revise the manuscript to address the concerns regarding clarity on gradient aggregation and experimental details.

Point-by-point responses
  1. Referee: [Abstract and §3] Abstract and §3 (Attack Design): The central claim requires that final-layer gradients retain per-client label signal after server aggregation under non-i.i.d. partitions. Standard federated protocols aggregate client gradients at the server before broadcasting updates, which would sum signals across clients and potentially erase per-client statistics. The abstract's reference to 'shared model updates' and 'final-layer gradients exchanged during training' does not clarify whether Fed-Listing is evaluated on pre-aggregation individual gradients or on the aggregated updates that clients actually receive. This distinction is load-bearing for the non-i.i.d. feasibility claim.

    Authors: We thank the referee for highlighting this important clarification. In the Fed-Listing attack, we assume a semi-honest server that observes the individual gradients uploaded by each client before performing aggregation. This is consistent with the threat model in many federated learning privacy attacks, where the server has access to per-client updates. The 'shared model updates' refer to the gradients exchanged from clients to the server. We will revise the abstract and Section 3 to explicitly state that the attack leverages pre-aggregation individual client gradients, which preserves the per-client label signal even in non-i.i.d. settings. This setup is feasible in standard FedGNN protocols where clients send their local gradients to the server. revision: yes
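The distinction the authors concede here can be made concrete: under FedAvg-style aggregation the server averages client updates, so only a party that observes updates before averaging retains per-client signal. A toy illustration of that point (synthetic gradients standing in for final-layer updates, not the paper's protocol):

```python
import numpy as np

rng = np.random.default_rng(1)
num_clients, num_classes = 5, 4

# Stand-ins for each client's final-layer bias gradient (the per-client signal).
client_grads = rng.normal(size=(num_clients, num_classes))

# FedAvg-style server step: average the updates before broadcasting.
aggregated = client_grads.mean(axis=0)

# Any reassignment of the same gradients across clients yields the identical
# aggregate, so the aggregate alone cannot single out a target client; the
# semi-honest server in the stated threat model reads client_grads[k] directly.
permuted = client_grads[rng.permutation(num_clients)]
assert np.allclose(permuted.mean(axis=0), aggregated)
```

This is why the revision promised above, stating explicitly that Fed-Listing operates on pre-aggregation individual gradients, resolves the referee's feasibility concern.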

  2. Referee: [§5] §5 (Experiments): The description of non-i.i.d. data partitioning, exact generation process, and any data exclusion rules is insufficient to reproduce the reported results. In addition, performance tables lack error bars, standard deviations, or statistical significance tests for the claimed outperformance over Decaf and random guessing, undermining assessment of reliability across the four datasets and three architectures.

    Authors: We agree that additional details are necessary for reproducibility. We will expand Section 5 to include a precise description of the non-i.i.d. data partitioning process, including the exact generation procedure (e.g., Dirichlet distribution parameters or other methods used) and any data exclusion rules applied. Furthermore, we will update the performance tables to include error bars representing standard deviations across multiple runs, and conduct statistical significance tests (e.g., paired t-tests) to validate the outperformance over baselines. These revisions will strengthen the reliability assessment of our results. revision: yes
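The Dirichlet procedure the authors mention is the most common recipe for label-skewed non-i.i.d. partitions. A sketch of one standard variant (the function name and defaults are ours, not the paper's; smaller alpha means stronger skew):

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, rng):
    """Split example indices across clients with Dirichlet(alpha) label skew.

    For each class, client shares are drawn from Dirichlet([alpha]*num_clients);
    alpha -> infinity approaches an i.i.d. split.
    """
    labels = np.asarray(labels)
    clients = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        shares = rng.dirichlet([alpha] * num_clients)
        cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            clients[k].extend(part.tolist())
    return clients

rng = np.random.default_rng(0)
labels = rng.choice(4, size=1000)
parts = dirichlet_partition(labels, num_clients=5, alpha=0.5, rng=rng)
```

Reporting the alpha values used per experiment would make the promised reproducibility revision checkable.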

Circularity Check

0 steps flagged

Empirical attack evaluation contains no self-referential derivation or fitted-input prediction

Full rationale

The paper introduces Fed-Listing as a gradient-based attack and validates its performance via experiments on four datasets and three GNN architectures. No equations or derivation steps are presented that reduce a claimed prediction to its own inputs by construction; the method simply extracts statistical patterns from final-layer gradients, and success is measured against external baselines (random guessing, Decaf) rather than being forced by any internal fit or self-citation. The load-bearing claim about signal retention after aggregation is an empirical hypothesis tested on observed data, not a definitional tautology.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The attack rests on standard federated-learning assumptions about gradient exchange and empirical correlation between final-layer gradients and label counts; no new free parameters, axioms, or invented entities are introduced beyond the attack construction itself.

axioms (1)
  • Domain assumption: Final-layer gradients exchanged in FedGNN training retain statistical information about local label distributions.
    Invoked in the threat model and attack design described in the abstract.

pith-pipeline@v0.9.0 · 5526 in / 1182 out tokens · 21372 ms · 2026-05-16T08:55:30.723752+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

49 extracted references · 49 canonical work pages · 4 internal anchors

  1. A. Awasthi, A. K. Garov, M. Sharma, and M. Sinha, "GNN model based on node classification forecasting in social network," in 2023 International Conference on Artificial Intelligence and Smart Communication (AISC). IEEE, 2023, pp. 1039–1043.
  2. Z. Guo and H. Wang, "A deep graph neural network-based mechanism for social recommendations," IEEE Transactions on Industrial Informatics, vol. 17, no. 4, pp. 2776–2783, 2020.
  3. K. Sharma, Y.-C. Lee, S. Nambi, A. Salian, S. Shah, S.-W. Kim, and S. Kumar, "A survey of graph neural networks for social recommender systems," ACM Computing Surveys, vol. 56, no. 10, pp. 1–34, 2024.
  4. Y. Wang, Z. Li, and A. Barati Farimani, "Graph neural networks for molecules," in Machine Learning in Molecular Sciences. Springer, 2023, pp. 21–66.
  5. X.-M. Zhang, L. Liang, L. Liu, and M.-J. Tang, "Graph neural networks and their current applications in bioinformatics," Frontiers in Genetics, vol. 12, p. 690049, 2021.
  6. W. Jiang, J. Luo, M. He, and W. Gu, "Graph neural network for traffic forecasting: The research progress," ISPRS International Journal of Geo-Information, vol. 12, no. 3, p. 100, 2023.
  7. A. Sharma, A. Sharma, P. Nikashina, V. Gavrilenko, A. Tselykh, A. Bozhenyuk, M. Masud, and H. Meshref, "A graph neural network (GNN)-based approach for real-time estimation of traffic speed in sustainable smart cities," Sustainability, vol. 15, no. 15, p. 11893, 2023.
  8. C. Gao, X. Wang, X. He, and Y. Li, "Graph neural networks for recommender system," in Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, 2022, pp. 1623–1625.

  9. S. Wu, F. Sun, W. Zhang, X. Xie, and B. Cui, "Graph neural networks in recommender systems: A survey," ACM Computing Surveys, vol. 55, no. 5, pp. 1–37, 2022.
  10. R. Liu, P. Xing, Z. Deng, A. Li, C. Guan, and H. Yu, "Federated graph neural networks: Overview, techniques, and challenges," IEEE Transactions on Neural Networks and Learning Systems, 2024.
  11. C. Shiranthika, P. Saeedi, and I. V. Bajić, "Decentralized learning in healthcare: A review of emerging techniques," IEEE Access, vol. 11, pp. 54188–54209, 2023.
  12. X. Zheng, Z. Wang, C. Chen, J. Qian, and Y. Yang, "Decentralized graph neural network for privacy-preserving recommendation," in Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, 2023, pp. 3494–3504.
  13. J. Zhang and I. Tal, "A systematic review of contemporary applications of privacy-aware graph neural networks in smart cities," in Proceedings of the 19th International Conference on Availability, Reliability and Security, 2024, pp. 1–10.
  14. Z. Liu, L. Yang, Z. Fan, H. Peng, and P. S. Yu, "Federated social recommendation with graph neural network," ACM Transactions on Intelligent Systems and Technology (TIST), vol. 13, no. 4, pp. 1–24, 2022.
  15. S. Ahmed, F. Jinchao, M. A. Manan, M. Yaqub, M. U. Ali, and A. Raheem, "FedGraphMRI-Net: A federated graph neural network framework for robust MRI reconstruction across non-IID data," Biomedical Signal Processing and Control, vol. 102, p. 107360, 2025.
  16. M. Y. Balık, A. Rekik, and I. Rekik, "Investigating the predictive reproducibility of federated graph neural networks using medical datasets," in International Workshop on PRedictive Intelligence In MEdicine. Springer, 2022, pp. 160–171.
  17. D. Manu, J. Yao, W. Liu, and X. Sun, "GraphGANFed: A federated generative framework for graph-structured molecules towards efficient drug discovery," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 21, no. 2, pp. 240–253, 2024.

  18. J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller, "Inverting gradients - how easy is it to break privacy in federated learning?" Advances in Neural Information Processing Systems, vol. 33, pp. 16937–16947, 2020.
  19. H. Yang, M. Ge, D. Xue, K. Xiang, H. Li, and R. Lu, "Gradient leakage attacks in federated learning: Research frontiers, taxonomy and future directions," IEEE Network, 2023.
  20. I. E. Olatunji, W. Nejdl, and M. Khosla, "Membership inference attack on graph neural networks," in 2021 Third IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). IEEE, 2021, pp. 11–20.
  21. L. Bai, H. Hu, Q. Ye, H. Li, L. Wang, and J. Xu, "Membership inference attacks and defenses in federated learning: A survey," ACM Computing Surveys, vol. 57, no. 4, pp. 1–35, 2024.
  22. J. Liu, B. Chen, B. Xue, M. Guo, and Y. Xu, "PIAFGNN: Property inference attacks against federated graph neural networks," Computers, Materials & Continua, vol. 82, no. 2, 2025.
  23. M. Drencheva, I. Petrov, M. Baader, D. I. Dimitrov, and M. Vechev, "GRAIN: Exact graph reconstruction from gradients," arXiv preprint arXiv:2503.01838, 2025.
  24. L. Sun, Y. Dou, C. Yang, K. Zhang, J. Wang, P. S. Yu, L. He, and B. Li, "Adversarial attack and defense on graph data: A survey," IEEE Transactions on Knowledge and Data Engineering, vol. 35, no. 8, pp. 7693–7711, 2022.
  25. Z. Dai, Y. Gao, C. Zhou, A. Fu, Z. Zhang, M. Xue, Y. Zheng, and Y. Zhang, "Decaf: Data distribution decompose attack against federated learning," IEEE Transactions on Information Forensics and Security, 2024.

  26. A. Wainakh, F. Ventola, T. Müßig, J. Keim, C. G. Cordero, E. Zimmer, T. Grube, K. Kersting, and M. Mühlhäuser, "User label leakage from gradients in federated learning," arXiv preprint arXiv:2105.09369, 2021.
  27. L. Meng, Y. Bai, Y. Chen, Y. Hu, W. Xu, and H. Weng, "Devil in disguise: Breaching graph neural networks privacy through infiltration," in Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, 2023, pp. 1153–1167.
  28. M. Arazzi, M. Conti, S. Koffas, M. Krcek, A. Nocera, S. Picek, and J. Xu, "Label inference attacks against node-level vertical federated GNNs," arXiv preprint arXiv:2308.02465, 2023.
  29. C. Zhou, Y. Gao, A. Fu, K. Chen, Z. Dai, Z. Zhang, M. Xue, and Y. Zhang, "PPA: Preference profiling attack against federated learning," arXiv preprint arXiv:2202.04856, 2022.
  30. Y. Liu, P. Jiang, and L. Zhu, "Preference profiling attacks against vertical federated learning over graph data," in IEEE INFOCOM 2025 - IEEE Conference on Computer Communications. IEEE, 2025, pp. 1–10.
  31. Y. Gu and Y. Bai, "LDIA: Label distribution inference attack against federated learning in edge computing," Journal of Information Security and Applications, vol. 74, p. 103475, 2023.
  32. R. Ramakrishna and G. Dán, "Inferring class-label distribution in federated learning," in Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security, 2022, pp. 45–56.
  33. T. Cheng, F. Jie, X. Ling, H. Li, and Z. Chen, "EC-LDA: Label distribution inference attack against federated graph learning with embedding compression," arXiv preprint arXiv:2505.15140, 2025.

  34. Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu, "A comprehensive survey on graph neural networks," IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 1, pp. 4–24, 2020.
  35. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Artificial Intelligence and Statistics. PMLR, 2017, pp. 1273–1282.
  36. Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, "Federated learning with non-IID data," arXiv preprint arXiv:1806.00582, 2018.
  37. Z. Wang, Z. Chang, J. Hu, X. Pang, J. Du, Y. Chen, and K. Ren, "Breaking secure aggregation: Label leakage from aggregated gradients in federated learning," in IEEE INFOCOM 2024 - IEEE Conference on Computer Communications. IEEE, 2024, pp. 151–160.
  38. X. Geng, "Label distribution learning," IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 7, pp. 1734–1748, 2016.
  39. O. Elharrouss, Y. Mahmood, Y. Bechqito, M. A. Serhani, E. Badidi, J. Riffi, and H. Tairi, "Loss functions in deep learning: A comprehensive review," arXiv preprint arXiv:2504.04242, 2025.
  40. Q. Wang, Y. Ma, K. Zhao, and Y. Tian, "A comprehensive survey of loss functions in machine learning," Annals of Data Science, vol. 9, no. 2, pp. 187–212, 2022.
  41. P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad, "Collective classification in network data," AI Magazine, vol. 29, no. 3, pp. 93–93, 2008.

  42. O. Shchur, M. Mumme, A. Bojchevski, and S. Günnemann, "Pitfalls of graph neural network evaluation," arXiv preprint arXiv:1811.05868, 2018.
  43. T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," arXiv preprint arXiv:1609.02907, 2016.
  44. W. Hamilton, Z. Ying, and J. Leskovec, "Inductive representation learning on large graphs," Advances in Neural Information Processing Systems, vol. 30, 2017.
  45. K. Xu, W. Hu, J. Leskovec, and S. Jegelka, "How powerful are graph neural networks?" arXiv preprint arXiv:1810.00826, 2018.
  46. C. Dwork, "Differential privacy," in International Colloquium on Automata, Languages, and Programming. Springer, 2006, pp. 1–12.
  47. M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, "Deep learning with differential privacy," in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 308–318.
  48. L. Zhu, Z. Liu, and S. Han, "Deep leakage from gradients," Advances in Neural Information Processing Systems, vol. 32, 2019.
  49. Y. Lin, S. Han, H. Mao, Y. Wang, and W. J. Dally, "Deep gradient compression: Reducing the communication bandwidth for distributed training," arXiv preprint arXiv:1712.01887, 2017.