pith. machine review for the scientific record.

arxiv: 2605.10680 · v1 · submitted 2026-05-11 · 💻 cs.LG

Recognition: 2 theorem links · Lean Theorem

Exact Unlearning from Proxies Induces Closeness Guarantees on Approximate Unlearning

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 03:25 UTC · model grok-4.3

classification 💻 cs.LG
keywords machine unlearning · exact unlearning · approximate unlearning · KL divergence · data distributions · proxy models · admissibility criterion · forgetting scenarios

The pith

Linking machine unlearning to data distributions via proxy models yields KL divergence bounds between the unlearned model and the ideal retrained model.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes a new approach to machine unlearning that works on the structure of the data distributions instead of directly updating neural network parameters. Inferring these distributions precisely makes it possible to distill an exact unlearning signal, which yields theoretical bounds on the Kullback-Leibler divergence between the unlearned model and the ideal model obtained by retraining from scratch, under a verifiable admissibility criterion. Experiments on three forgetting scenarios show the method produces the classifier closest to the ideal retrained model among the compared techniques.
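The closeness metric throughout is the Kullback-Leibler divergence between the two models' predictive distributions. A minimal sketch of how such a comparison is computed (the probability vectors below are made-up placeholders, not the paper's numbers):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions along the last axis."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

# Hypothetical class posteriors over 3 classes for 2 evaluation inputs:
retrained = np.array([[0.7, 0.2, 0.1],
                      [0.1, 0.8, 0.1]])   # ideal model, retrained from scratch
unlearned = np.array([[0.6, 0.3, 0.1],
                      [0.1, 0.7, 0.2]])   # approximate unlearned model

per_example = kl_divergence(retrained, unlearned)
print(per_example.mean())  # small values = close to the ideal retrained model
```

The divergence is zero only when the two predictive distributions coincide, which is why it can serve as a closeness guarantee rather than a mere accuracy comparison.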

Core claim

Exact unlearning from proxies of the data distributions induces closeness guarantees on approximate unlearning, specifically theoretical bounds on the Kullback-Leibler divergence from the ideal retrained model to the unlearned model, under a verifiable admissibility criterion.
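In symbols (the notation below is illustrative; the paper's own symbols are not visible in this summary), the claimed guarantee has the shape:

```latex
% Illustrative notation, not the paper's own:
%   \theta^{*}  -- model retrained from scratch without the forget set
%   \hat\theta  -- approximately unlearned model
%   \mathcal{A} -- verifiable admissibility criterion on the proxy
\mathrm{KL}\!\left( p_{\theta^{*}} \,\middle\|\, p_{\hat\theta} \right) \le \varepsilon
\quad \text{whenever the proxy satisfies } \mathcal{A}.
```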

What carries the argument

Exact unlearning from proxies, which links unlearning directly to data distribution structure to distill the unlearning signal and provide closeness guarantees.

If this is right

  • The approximate unlearned model is bounded in its divergence from the ideal retrained model.
  • The framework is shown to be sound through these theoretical bounds.
  • The method achieves the closest performance to ideal retraining in experimental forgetting scenarios.
  • The admissibility criterion allows practical verification of the guarantees.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • This suggests that accurate distribution modeling could make unlearning more reliable without always requiring full retraining.
  • Extending this to non-classification tasks or other models could broaden the applicability of guaranteed unlearning.
  • Challenges in precisely inferring distributions might limit adoption, pointing to needs for robust inference techniques.

Load-bearing premise

That the data distributions can be inferred with enough precision to distill the exact unlearning signal, and that the admissibility criterion is verifiable in practice.
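One reason distribution proxies admit exact unlearning: for Gaussian proxies such as the LDA/QDA models named in the figures, the parameters depend on the data only through sufficient statistics, which can be downdated in closed form when samples are removed. A minimal sketch under that assumption (not the paper's actual estimator), here for a class mean:

```python
import numpy as np

def downdate_mean(count, mean, forget):
    """Exactly remove the `forget` rows from a running (count, mean) pair.

    The mean is a sufficient statistic of a Gaussian class-conditional
    proxy, so deleting samples is a closed-form downdate rather than a
    retraining run.
    """
    k = len(forget)
    new_count = count - k
    new_mean = (count * mean - forget.sum(axis=0)) / new_count
    return new_count, new_mean

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))        # all samples of one class
forget = X[:10]                      # samples to unlearn
n, mu = len(X), X.mean(axis=0)
n2, mu2 = downdate_mean(n, mu, forget)
# Matches the mean recomputed from scratch on the retained data:
assert np.allclose(mu2, X[10:].mean(axis=0))
```

The same closed-form reasoning extends to covariances for QDA-style proxies; the premise above is that such proxies track the true distributions closely enough for the exactness to transfer.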

What would settle it

A case where the Kullback-Leibler divergence between the unlearned model and the retrained model exceeds the paper's theoretical bound, even though the admissibility criterion is satisfied.
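That falsification test can be phrased mechanically (a sketch; `kl_bound` and the admissibility flag stand in for the paper's actual quantities):

```python
import numpy as np

def empirical_kl(p_retrain, p_unlearn, eps=1e-12):
    """Average KL(retrained || unlearned) over an evaluation set."""
    p = np.clip(p_retrain, eps, 1.0)
    q = np.clip(p_unlearn, eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=-1)))

def falsifies(p_retrain, p_unlearn, kl_bound, criterion_holds):
    """True only in the event that would refute the guarantee: the
    admissibility criterion is satisfied, yet the measured divergence
    exceeds the theoretical bound."""
    return criterion_holds and empirical_kl(p_retrain, p_unlearn) > kl_bound
```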

Figures

Figures reproduced from arXiv: 2605.10680 by Teddy Furon, Virgile Dine.

Figure 1. Convexity of h(η) on DINOv2 + MLP-1 / CIFAR-10 for 3 forgetting scenarios and 5 proxy models: LDA (9), QDA (10) (dashed), LDA-Mix (14), QDA-Mix (15) (dashed), LDA-2C (16).
Figure 2. (a) KLt as a function of RTE normalised by the Retrain RTE on CIFAR-10 / DINOv2 + MLP-1, scenario Subclass (airplane). (b) Number of queries an attacker would need to distinguish the unlearned model from the retrained model up to an α rate of false positives.
Figure 3. Cross-entropy training matches LDA. Top: last-layer weight matrix (rows reshaped as images) of a linear network trained with the Adam optimizer and cross-entropy loss. Bottom: closed-form LDA weight formula. Dataset: MNIST [LeCun et al., 1998]. The near-perfect visual match empirically confirms the cross-entropy/LDA equivalence.
Figure 4. Illustration of the sub-class scenario over synthetic heteroscedastic Gaussian data.
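Figure 3's cross-entropy/LDA equivalence can be checked in miniature: on homoscedastic Gaussian data, the direction learned by a linear classifier under the cross-entropy loss should align with the closed-form LDA direction Sigma^{-1}(mu1 - mu0). A sketch under those assumptions (synthetic data, not MNIST; plain gradient descent instead of Adam):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 2000

# Two homoscedastic Gaussian classes -- the setting in which LDA is the
# Bayes-optimal linear rule (a synthetic stand-in for the MNIST experiment).
mu0, mu1 = -np.ones(d) / np.sqrt(d), np.ones(d) / np.sqrt(d)
X = np.vstack([rng.normal(mu0, 1.0, (n, d)), rng.normal(mu1, 1.0, (n, d))])
y = np.hstack([-np.ones(n), np.ones(n)])

# Closed-form LDA direction: Sigma^{-1} (mu1 - mu0), estimated from the data.
centered = np.vstack([X[:n] - X[:n].mean(0), X[n:] - X[n:].mean(0)])
w_lda = np.linalg.solve(np.cov(centered.T), X[n:].mean(0) - X[:n].mean(0))

# Linear classifier trained by gradient descent on the cross-entropy
# (logistic) loss, mirroring the trained network in Figure 3.
w = np.zeros(d)
for _ in range(2000):
    margins = y * (X @ w)
    grad = -((y / (1.0 + np.exp(margins))) @ X) / len(y)
    w -= 0.5 * grad

cos = w @ w_lda / (np.linalg.norm(w) * np.linalg.norm(w_lda))
print(cos)  # cosine similarity of the two weight directions
```

Under the homoscedastic model the logistic MLE is consistent for the LDA direction, so the cosine similarity approaches 1 as the sample grows.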
read the original abstract

This paper proposes a paradigm shift linking machine unlearning directly to the structure of the data distributions rather than a mere update of the neural network parameters. We show that inferring these distributions with precision enables distilling the exact unlearning signal induced by the modeling. Theoretical bounds on the Kullback-Leibler divergence from the ideal retrained model to our unlearned model, under verifiable admissibility criterion, reveal the soundness of our framework. This method is experimentally validated over three forgetting scenarios as reaching the closest classifier to the ideal retrained model when compared to competitors.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes a paradigm shift in machine unlearning by connecting it directly to the structure of data distributions instead of parameter updates. It claims that precise inference of distributions allows distilling an exact unlearning signal, derives theoretical bounds on the KL divergence from the ideal retrained model to the unlearned model under a verifiable admissibility criterion, and experimentally shows the method produces the closest classifier to the retrained model across three forgetting scenarios relative to competitors.

Significance. If the KL bounds are rigorously established and the experiments demonstrate clear superiority with proper controls, the work would be significant for providing distribution-based guarantees in unlearning, a key open challenge. This could shift the field from heuristic updates toward verifiable closeness to retraining, with potential impact on privacy and compliance applications.

major comments (2)
  1. [Abstract] The central claim of theoretical KL bounds under a 'verifiable admissibility criterion' is asserted without any derivation outline, definition of the criterion, or statement of assumptions. This claim is load-bearing for the soundness argument and cannot be assessed from the given information.
  2. [Abstract, experiments] The claim of reaching the 'closest classifier' is presented without error bars, statistical significance tests, or details of how closeness is measured across the three forgetting scenarios. This undermines the experimental validation of superiority.
minor comments (2)
  1. The weakest assumption (precise distribution inference enabling signal distillation) should be discussed with concrete examples or failure modes to clarify practical verifiability.
  2. Notation for the unlearned model, proxy, and ideal retrained model should be introduced early and used consistently to improve readability.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments on our manuscript. We address each major comment below and will incorporate revisions to strengthen the presentation of our theoretical and experimental contributions.

read point-by-point responses
  1. Referee: [Abstract] The central claim of theoretical KL bounds under a 'verifiable admissibility criterion' is asserted without any derivation outline, definition of the criterion, or statement of assumptions. This claim is load-bearing for the soundness argument and cannot be assessed from the given information.

    Authors: We agree that the abstract, as a high-level summary, should provide sufficient context for the central theoretical claim. The full manuscript contains the complete derivation of the KL bounds, the definition of the verifiable admissibility criterion, and the explicit list of assumptions in the theoretical development. In the revised version, we will expand the abstract to include a concise outline of the derivation steps, a brief definition of the admissibility criterion, and the key assumptions, ensuring the claim is assessable without requiring the reader to consult the body of the paper. revision: yes

  2. Referee: [Abstract, experiments] The claim of reaching the 'closest classifier' is presented without error bars, statistical significance tests, or details of how closeness is measured across the three forgetting scenarios. This undermines the experimental validation of superiority.

    Authors: We acknowledge that additional statistical details would strengthen the experimental claims. The manuscript reports closeness via KL divergence to the retrained model across the three scenarios, with comparisons to baselines. In the revision, we will add error bars to all reported metrics, include statistical significance tests (such as paired t-tests with p-values), and explicitly detail the measurement procedure for closeness in both the abstract and the experimental section to provide rigorous validation of the superiority results. revision: yes
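The promised paired t-test can be sketched as follows; the per-seed KL values are placeholders, not results from the paper:

```python
import math

def paired_t(a, b):
    """Paired t-statistic and degrees of freedom for matched measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n), n - 1

# Placeholder per-seed KL-to-retrained values for two methods (NOT paper data):
ours = [0.021, 0.019, 0.024, 0.020, 0.022]
baseline = [0.035, 0.031, 0.040, 0.033, 0.036]
t, df = paired_t(ours, baseline)
print(t, df)  # a large-magnitude t at these degrees of freedom marks a significant gap
```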

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The provided abstract and context describe a framework that infers data distributions to distill an unlearning signal, then derives KL divergence bounds to the retrained model under an admissibility criterion, with experimental validation across scenarios. No load-bearing derivation steps, equations, or self-citations are exhibited in the available text that reduce by construction to fitted inputs, self-definitions, or prior author work. The central claims rest on independent theoretical bounds and direct empirical comparisons rather than renaming or smuggling ansatzes. This is the expected honest non-finding for a paper whose derivation chain is not shown to collapse internally.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Only the abstract is available; no explicit free parameters, axioms, or invented entities can be extracted.

pith-pipeline@v0.9.0 · 5385 in / 996 out tokens · 25600 ms · 2026-05-12T03:25:01.165005+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

107 extracted references · 107 canonical work pages · 1 internal anchor

  1. [1] Cover, T.M. and Thomas, J.A.
  2. [2] Towards Making Systems Forget with Machine Unlearning. 2015 IEEE Symposium on Security and Privacy, 2015.
  3. [3] Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  4. [4] Certified Data Removal from Machine Learning Models. Proceedings of the 37th International Conference on Machine Learning (ICML).
  5. [5] Amnesiac Machine Learning. Proceedings of the AAAI Conference on Artificial Intelligence.
  6. [6] Membership Inference Attacks Against Machine Learning Models. 2017 IEEE Symposium on Security and Privacy (SP).
  7. [7] The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks. 28th USENIX Security Symposium (USENIX Security 19).
  8. [8] Chen, Min; Gao, Weizhuo; Liu, Gaoyang; Peng, Kai; Wang, Chen. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition.
  9. [9] Oquab, Maxime; Darcet, Timothée; et al. Transactions on Machine Learning Research.
  10. [10] Regulation (EU) 2016/679 (General Data Protection Regulation), 2016.
  11. [11] Csiszár, I. The Annals of Probability.
  12. [12] Bourtoule, Lucas; Chandrasekaran, Varun; Choquette-Choo, Christopher A.; Jia, Hengrui; Travers, Adelin; Zhang, Baiwu; Lie, David; Papernot, Nicolas. 2021.
  13. [13] Koh, Pang Wei and Liang, Percy. Proceedings of the 34th International Conference on Machine Learning, 2017.
  14. [14] Sekhari, Ayush; Acharya, Jayadev; Kamath, Gautam; Suresh, Ananda Theertha. Advances in Neural Information Processing Systems, 2021.
  15. [15] Kurmanji, Meghdad; Triantafillou, Peter; Hayes, Jamie; Triantafillou, Eleni. Advances in Neural Information Processing Systems, 2023.
  16. [16] Chundawat, Vikram S.; Tarun, Ayush Kumar; Mandal, Murari; Kankanhalli, Mohan S. 2023.
  17. [17] Fan, Chongyu; Liu, Jiancheng; Zhang, Yihua; Wong, Eric; Wei, Dennis; Liu, Sijia. The 12th International Conference on Learning Representations, 2024.
  18. [18] Tarun, Ayush Kumar; Chundawat, Vikram S.; Mandal, Murari; Kankanhalli, Mohan S. 2023.
  19. [19] Peste, Alexandra; Alistarh, Dan; Lampert, Christoph H. 2021.
  20. [20] Nguyen, Thanh Tam; Huynh, Thanh Trung; Nguyen, Phi Le; Liew, Alan Wee-Chung; Yin, Hongzhi; Nguyen, Quoc Viet Hung. arXiv preprint arXiv:2209.02299, 2022.
  21. [21] Shokri, Reza; Stronati, Marco; Song, Congzheng; Shmatikov, Vitaly. 2017.
  22. [22] Carlini, Nicholas; Chien, Steve; Nasr, Milad; Song, Shuang; Terzis, Andreas; Tramèr, Florian. Membership Inference Attacks From First Principles. 2022.
  23. [23] Maini, Pratyush; Feng, Zhili; Schwarzschild, Avi; Lipton, Zachary Chase; Kolter, J. Zico. Conference on Language Modeling.
  24. [24] Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition. 2024.
  25. [25] Daxberger, Erik; Kristiadi, Agustinus; Immer, Alexander; Eschenhagen, Runa; Bauer, Matthias; Hennig, Philipp. Advances in Neural Information Processing Systems, 2021.
  26. [26] Kumar, Ananya; Raghunathan, Aditi; Jones, Robbie; Ma, Tengyu; Liang, Percy. The 10th International Conference on Learning Representations, 2022.
  27. [27] Hayes, Jamie; Shumailov, Ilia; Triantafillou, Eleni; Khalifa, Amr; Papernot, Nicolas. 2025 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML).
  28. [28] McLachlan, Geoffrey J. 1992.
  29. [29] Softmax Linear Attention: Reclaiming Global Competition. 2026.
  30. [30] Golatkar, Aditya; Achille, Alessandro; Ravichandran, Avinash; Polito, Marzia; Soatto, Stefano. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
  31. [31] Alain, Guillaume and Bengio, Yoshua. Understanding intermediate layers using linear classifier probes. arXiv preprint arXiv:1610.01644.
  32. [32] Kornblith, Simon; Shlens, Jonathon; Le, Quoc V. 2019.
  33. [33] Krizhevsky, Alex.
  34. [34] Lee, Kimin; Lee, Kibok; Lee, Honglak; Shin, Jinwoo. Advances in Neural Information Processing Systems, 2018.
  35. [35] Ming, Yifei; Cai, Ziyang; Gu, Jiuxiang; Sun, Yiyou; Li, Wei; Li, Yixuan. Advances in Neural Information Processing Systems, 2022.
  36. [36] Feldman, Vitaly. Proceedings of the 52nd Annual ACM Symposium on Theory of Computing, 2020.
  37. [37] Breiman, Leo; Friedman, Jerome H.; Olshen, Richard A.; Stone, Charles J.
  38. [38] Chien, Eli; Wang, Haoyu Peter; Chen, Ziang; Li, Pan. Proceedings of the 41st International Conference on Machine Learning, 2024.
  39. [39] Pawelczyk, Martin; Neel, Seth; Lakkaraju, Himabindu. Proceedings of the 41st International Conference on Machine Learning, 2024.
  40. [40] Dwork, Cynthia and Roth, Aaron. 2014.
  41. [41] Ginart, Antonio; Guan, Melody; Valiant, Gregory; Zou, James Y. Advances in Neural Information Processing Systems, 2019.
  42. [42] Bishop, Christopher M.
  43. [43] Papyan, Vardan; Han, X. Y.; Donoho, David L. Proceedings of the National Academy of Sciences, 2020.
  44. [44] LeCun, Yann; Bottou, Léon; Bengio, Yoshua; Haffner, Patrick. Gradient-Based Learning Applied to Document Recognition, 1998.
  45. [45] Machine Unlearning via Information Theoretic Regularization. 2025.
  46. [46] Amari, Shun-Ichi. Electronics and Communications in Japan (Part I: Communications). doi:10.1002/ecja.4400660602.
  47. [47] Foster, Jack; Schoepf, Stefan; Brintrup, Alexandra. 2024.
  48. [48] POUR: A Provably Optimal Method for Unlearning Representations via Neural Collapse. 2026.
  49. [49] Ascent Fails to Forget. The Thirty-ninth Annual Conference on Neural Information Processing Systems.
  50. [50] Dine, Virgile; Furon, Teddy; Faure, Charly. 2025.
  51. [51] Efficient Utility-Preserving Machine Unlearning with Implicit Gradient Surgery. The Thirty-ninth Annual Conference on Neural Information Processing Systems.
  52. [52] Deep Unlearning: Fast and Efficient Gradient-free Approach to Class Forgetting. 2024.
  53. [53] Unlearning-Aware Minimization. The Thirty-ninth Annual Conference on Neural Information Processing Systems.
  54. [54] On the Misalignment Between Data Learnability and Forgettability in Machine Unlearning. Proceedings of the AAAI Conference on Artificial Intelligence, 2026. doi:10.1609/aaai.v40i29.39657.
  55. [55] Attribute-to-Delete: Machine Unlearning via Datamodel Matching. Proceedings of the 43rd International Conference on Machine Learning (ICML).
  56. [56] Machine Unlearning: A Survey. ACM Computing Surveys, 2023.
  57. [57] A Survey on Federated Unlearning: Challenges, Methods, and Future Directions. ACM Computing Surveys.
  58. [58] Rethinking Machine Unlearning for Large Language Models. Nature Machine Intelligence.
  59. [59] Voigt, Paul and Von dem Bussche, Axel. 2017.
  60. [60] Ginart, Antonio; Guan, Melody Y.; Valiant, Gregory; Zou, James Y.
  61. [61] Descent-to-Delete: Gradient-Based Methods for Machine Unlearning. Proceedings of the 32nd International Conference on Algorithmic Learning Theory (ALT).
  62. [62] Thudi, Anvith; Deza, Gabriel; Chandrasekaran, Varun; Papernot, Nicolas. 2022.
  63. [63] Model Sparsity Can Simplify Machine Unlearning. Advances in Neural Information Processing Systems (NeurIPS).
  64. [64] Approximate Data Deletion from Machine Learning Models. Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS).
  65. [65] Machine Unlearning of Features and Labels. Proceedings of the 30th Network and Distributed System Security Symposium (NDSS).
  66. [66] Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations. Proceedings of the European Conference on Computer Vision (ECCV).
  67. [67] Deep Unlearning via Randomized Conditionally Independent Hessians. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
  68. [68] Nguyen, Quoc Phong; Low, Bryan Kian Hsiang; Jaillet, Patrick.
  69. [69] On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning. 31st USENIX Security Symposium (USENIX Security 22).
  70. [70] Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting. 2018 IEEE 31st Computer Security Foundations Symposium (CSF), 2018.
  71. [71] Towards Adversarial Evaluations for Inexact Machine Unlearning. arXiv preprint arXiv:2201.06640.
  72. [72] Corrective Machine Unlearning. arXiv preprint arXiv:2402.14015.
  73. [73] Erasing Concepts from Diffusion Models. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
  74. [74] Eldan, Ronen and Russinovich, Mark.
  75. [75] Li, Nathaniel; Pan, Alexander; Gopal, Anjali; Yue, Summer; Berrios, Daniel; Gatti, Alice; Li, Justin D.; Dombrowski, Ann-Kathrin; Goel, Shashwat; Phan, Long; and others.
  76. [76] Liu, Gaoyang; Ma, Xiaoqiang; Yang, Yang; Wang, Chen; Liu, Jiangchuan. 2021.
  77. [77] Federated Unlearning: A Survey on Methods, Design Guidelines, and Evaluation Metrics. IEEE Transactions on Neural Networks and Learning Systems.
  78. [78] Johnson, William B. and Lindenstrauss, Joram.
  79. [79] Approximate nearest neighbors: towards removing the curse of dimensionality. Proceedings of the 30th Annual ACM Symposium on Theory of Computing (STOC).
  80. [80] Dasgupta, Sanjoy and Gupta, Anupam. An elementary proof of a theorem of Johnson and Lindenstrauss.

Showing first 80 references.