pith. machine review for the scientific record.

arxiv: 2605.11170 · v1 · submitted 2026-05-11 · 💻 cs.LG · cs.CR

Recognition: 2 theorem links · Lean Theorem

Unlearning with Asymmetric Sources: Improved Unlearning-Utility Trade-off with Public Data

Authors on Pith · no claims yet

Pith reviewed 2026-05-13 05:53 UTC · model grok-4.3

classification 💻 cs.LG cs.CR
keywords machine unlearning · certified unlearning · public data · Langevin dynamics · Rényi divergence · distribution mismatch · utility trade-off · membership inference

The pith

Asymmetric Langevin Unlearning injects public data into the unlearning dynamics, suppressing the certified unlearning cost by a factor of O(1/n_pub²) while preserving model utility.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Noise-based certified machine unlearning requires large noise magnitudes that destroy utility when deleting many examples. This paper introduces Asymmetric Langevin Unlearning, which mixes in public data during the Langevin dynamics to lower the privacy cost. The analysis shows the unlearning cost drops quadratically with the volume of public data, giving a computational advantage over full retraining from scratch. The framework also quantifies how distribution mismatch between public and private data affects the final utility, and demonstrates that constant-fraction deletions become feasible without catastrophic accuracy loss.
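A minimal sketch of the update this describes, assuming the scheme reduces to noisy gradient steps whose drift mixes the retained private data with public data; the function name, the mixing weight lam, and the exact update form are illustrative assumptions, not the authors' algorithm.

    import numpy as np

    def alu_step(theta, grad_private, grad_public, eta, sigma, lam, rng):
        # Drift mixes the retained private gradient (forget set removed)
        # with the public-data gradient; lam is an assumed mixing weight.
        drift = (1.0 - lam) * grad_private + lam * grad_public
        # Gaussian injection at the scale the certificate requires;
        # sqrt(2*eta) is the standard Langevin discretization factor.
        noise = sigma * np.sqrt(2.0 * eta) * rng.standard_normal(theta.shape)
        return theta - eta * drift + noise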

Core claim

We introduce Asymmetric Langevin Unlearning (ALU), which incorporates public data asymmetrically into the unlearning Langevin dynamics. We prove that public data injection suppresses the unlearning cost by a factor of O(1/n_pub²), guaranteeing a strict computational advantage over retraining. The method enables mass unlearning of constant dataset fractions while maintaining high utility; the impact of distribution shifts between public and private sources is explicitly characterized, and the guarantees are confirmed by variational Rényi divergence bounds and membership inference attack evaluations.

What carries the argument

Asymmetric Langevin Unlearning (ALU), which augments the standard Langevin dynamics with public data to relax noise requirements via variational Rényi divergence analysis.
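For reference, the two textbook objects this machinery combines, stated in their standard forms (the paper's asymmetric variant modifies the drift potential $U$): the Rényi divergence of order $\alpha > 1$ and the discretized Langevin update,

$$D_\alpha(P \,\|\, Q) = \frac{1}{\alpha-1}\,\log \mathbb{E}_{x \sim Q}\!\left[\Big(\frac{dP}{dQ}(x)\Big)^{\alpha}\right], \qquad \theta_{k+1} = \theta_k - \eta\,\nabla U(\theta_k) + \sqrt{2\eta}\,\sigma\,\xi_k, \quad \xi_k \sim \mathcal{N}(0, I).$$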

If this is right

  • Mass deletion of a constant fraction of the training set becomes computationally cheaper than retraining.
  • Increasing the volume of public data directly lowers the noise level needed for certification.
  • Utility loss remains controlled even when public and private data distributions differ moderately.
  • Certified unlearning extends to regimes where symmetric noise-based methods are impractical.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Public data could serve as a tunable resource for balancing privacy and performance in other certified deletion or privacy tasks.
  • Optimal allocation of public versus private data might be derived for specific model architectures or deletion sizes.
  • The quadratic suppression suggests similar asymmetric source techniques could improve efficiency in related Langevin-based sampling or optimization settings.

Load-bearing premise

The proof assumes Langevin dynamics and Rényi divergence bounds continue to hold when public data from a different distribution is injected asymmetrically.

What would settle it

An experiment that measures the certified noise magnitude required as public data volume increases and finds it does not scale as O(1/n_pub²), or that membership inference attack success rates rise above the certified bound while utility remains high.
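A hedged sketch of that settling experiment: for each public-data volume, bisect for the smallest noise scale that meets a fixed certification target, then fit the log-log slope of that noise against n_pub. Here certified_cost is a hypothetical stand-in for the paper's Rényi bound (assumed monotonically decreasing in sigma); the expected slope depends on how the bound splits the 1/n_pub² factor between noise and iteration count, so the sketch only reports the fitted exponent.

    import numpy as np

    def min_sigma(certified_cost, n_pub, target, lo=1e-4, hi=1e2, iters=60):
        # Geometric bisection for the smallest noise scale whose certified
        # cost meets `target`; assumes cost decreases monotonically in sigma.
        for _ in range(iters):
            mid = np.sqrt(lo * hi)
            if certified_cost(mid, n_pub) <= target:
                hi = mid
            else:
                lo = mid
        return hi

    def fitted_exponent(certified_cost, n_pubs, target):
        # Slope of log(min sigma) vs log(n_pub); a clearly negative exponent
        # would support the claimed suppression, its absence would not.
        sigmas = [min_sigma(certified_cost, n, target) for n in n_pubs]
        slope, _intercept = np.polyfit(np.log(n_pubs), np.log(sigmas), 1)
        return slope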

Figures

Figures reproduced from arXiv: 2605.11170 by Ahmed Mehdi Inane, Gintare Karolina Dziugaite, Ioannis Mitliagkas, Vincent Quirion.

Figure 1
Figure 1: Training pipelines showing the relationship between learning, unlearning, and retraining with public data injection. The divergence $D_\alpha(\pi_R^T \,\|\, \pi_L^T)$ quantifies how public data helps maintain similarity between the retraining and original learning distributions, facilitating subsequent unlearning. view at source ↗
Figure 2
Figure 2: Required noise magnitude $\sigma$ to bound $D_\alpha(\pi_R^T \,\|\, \pi_L^T)$ as a function of the forget fraction $c = n_{\mathrm{forget}}/n_{\mathrm{priv}}$. Values are computed assuming a strongly convex loss (Chien et al., 2024a) for a binary classification task; details are deferred to Appendix B.3. view at source ↗
Figure 3
Figure 3: Rényi divergence estimation across varying public data (Clipart) volumes. view at source ↗
Figure 4
Figure 4: U-LiRA confidence scores after K unlearning iterations, shown as violin plots with quartiles, using the LiRA membership inference attack adapted for unlearning (Hayes et al., 2024; Carlini et al., 2021). view at source ↗
Figure 5
Figure 5: The two domains of public and private data used for Sections 5.1 and 5.2 (Peng et al., 2019). Both datasets share the same number of classes, with Clipart being a collection of stylized images representing the private data, and Quickdraw a collection of hand-drawn sketches. view at source ↗
Figure 6
Figure 6: The two domains of public and private data used for Section 5.2 (Peng et al., 2019). Both datasets share the same number of classes, with Infograph being a collection of stylized images representing the public data, and Real a collection of real-life images. view at source ↗
read the original abstract

Noise-based certified machine unlearning currently faces a hard ceiling: the noise magnitude required to certify unlearning typically destroys model utility, particularly for large-scale deletion requests. While leveraging public data is a standard technique in differential privacy to relax this tension, its role in unlearning remains unexplored. We address this gap by introducing Asymmetric Langevin Unlearning (ALU), a framework that uses public data to mitigate privacy costs. We prove that public data injection suppresses the unlearning cost by a factor of $O(1/n_{\mathrm{pub}}^2)$, guaranteeing a strict computational advantage over retraining. This establishes a new control mechanism: practitioners can mitigate the need for high noise, and the associated utility loss, by increasing the volume of public data. Crucially, we analyze the realistic setting of distribution mismatch, explicitly characterizing how shifts between public and private sources impact utility. We show that ALU enables mass unlearning of constant dataset fractions, a regime where standard symmetric methods become impractical, while maintaining high utility. Empirical evaluations using variational Rényi divergence and membership inference attacks confirm that ALU effectively thwarts privacy attacks while preserving utility under reasonable distribution shifts.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces Asymmetric Langevin Unlearning (ALU), a framework that injects public data into noise-based certified machine unlearning to relax the noise-utility tension. The central claim is a proof that public-data injection suppresses unlearning cost by a factor of O(1/n_pub²) relative to retraining or symmetric methods, with an explicit characterization of how distribution mismatch between public and private sources affects utility guarantees. The work also presents empirical support via variational Rényi divergence bounds and membership-inference attacks, showing that ALU remains effective for mass unlearning of constant dataset fractions under moderate shifts.

Significance. If the O(1/n_pub²) scaling is rigorously established, the result supplies a concrete, tunable control (volume of public data) for reducing the noise magnitude required for certified unlearning, directly addressing the practical barrier that currently limits noise-based methods to small deletion sets. The explicit mismatch analysis and the combination of theoretical contraction bounds with MIA experiments constitute genuine strengths; the former distinguishes the contribution from purely empirical public-data heuristics in DP literature.

major comments (2)
  1. [Theorem 1 and surrounding derivation] The headline O(1/n_pub²) suppression is derived from variational Rényi divergence contraction under modified Langevin dynamics. The manuscript must show the precise re-derivation of the Fokker-Planck operator and the contraction constant when the stationary measure becomes the asymmetric mixture induced by public-data injection (see the paragraph following the statement of Theorem 1 and the subsequent display of the Rényi bound). If the mismatch term (KL or total-variation distance between public and private distributions) enters the contraction rate at order 1 rather than being absorbed into the 1/n_pub² prefactor, the quadratic improvement does not survive; the current text states that mismatch is “explicitly characterized” but does not exhibit the algebraic step that isolates the quadratic scaling.
  2. [Section 4 (mismatch analysis)] The utility guarantee under mismatch is stated to remain high for “reasonable” shifts, yet the paper does not quantify the regime in which the O(1/n_pub²) advantage is preserved (e.g., an explicit condition on δ = d_TV(P_pub, P_priv) such that the extra linear-in-δ term does not dominate). Without this threshold, the claim that ALU enables “mass unlearning of constant dataset fractions” cannot be evaluated for the distribution shifts that arise in realistic public-data sources.
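For orientation on the first major comment: for the standard overdamped Langevin diffusion $d\theta_t = -\nabla U(\theta_t)\,dt + \sqrt{2}\,\sigma\,dW_t$, the law $\rho_t$ evolves by the Fokker-Planck equation

$$\partial_t \rho_t = \nabla \cdot (\rho_t \nabla U) + \sigma^2 \Delta \rho_t,$$

and the referee's request is for the contraction constant of this operator once the asymmetric mixture replaces $U$, together with where $n_{\mathrm{pub}}$ enters it. This is the textbook form, not the paper's derivation.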
minor comments (2)
  1. [§2 and §5] Notation for the public-data injection schedule (how many public samples are added per Langevin step) is introduced only in the experimental section; moving the formal definition to the theoretical setup would improve readability.
  2. [Figure 3 and Table 2] The empirical plots report Rényi divergence and MIA accuracy but omit error bars or the number of independent runs; adding these would strengthen the reproducibility of the utility claims.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed comments, which help clarify the presentation of our theoretical results. We address each major comment below and will revise the manuscript accordingly to provide the requested derivations and explicit conditions.

read point-by-point responses
  1. Referee: [Theorem 1 and surrounding derivation] The headline O(1/n_pub²) suppression is derived from variational Rényi divergence contraction under modified Langevin dynamics. The manuscript must show the precise re-derivation of the Fokker-Planck operator and the contraction constant when the stationary measure becomes the asymmetric mixture induced by public-data injection (see the paragraph following the statement of Theorem 1 and the subsequent display of the Rényi bound). If the mismatch term (KL or total-variation distance between public and private distributions) enters the contraction rate at order 1 rather than being absorbed into the 1/n_pub² prefactor, the quadratic improvement does not survive; the current text states that mismatch is “explicitly characterized” but does not exhibit the algebraic step that isolates the quadratic scaling.

    Authors: We thank the referee for identifying this gap in the exposition. The manuscript states the contraction bound for the asymmetric case but does not expand the Fokker-Planck operator or isolate the algebraic contribution of the mismatch term. In the revised version we will add a dedicated appendix subsection that (i) derives the Fokker-Planck equation for the mixture stationary measure induced by public-data injection and (ii) shows the precise steps in which the KL (or TV) mismatch term enters the contraction rate at order O(1/n_pub), which, when multiplied by the leading 1/n_pub factor from the noise schedule, produces the claimed O(1/n_pub²) suppression. This derivation confirms that the quadratic scaling survives for any mismatch bounded independently of n_pub. revision: yes

  2. Referee: [Section 4 (mismatch analysis)] The utility guarantee under mismatch is stated to remain high for “reasonable” shifts, yet the paper does not quantify the regime in which the O(1/n_pub²) advantage is preserved (e.g., an explicit condition on δ = d_TV(P_pub, P_priv) such that the extra linear-in-δ term does not dominate). Without this threshold, the claim that ALU enables “mass unlearning of constant dataset fractions” cannot be evaluated for the distribution shifts that arise in realistic public-data sources.

    Authors: We agree that an explicit threshold on δ would make the practical scope of the result clearer. In the revision we will insert a corollary in Section 4 that states the precise regime: the O(1/n_pub²) advantage is retained whenever δ = o(1/n_pub). Under this condition the linear-in-δ perturbation remains strictly smaller than the quadratic suppression term, thereby justifying the claim that ALU supports mass unlearning of constant fractions under moderate distribution shifts. The corollary will be derived directly from the variational Rényi bound already present in the manuscript. revision: yes
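Written out, the scaling bookkeeping the two responses assert (illustrative only; the constants $c_1, c_2, C_1, C_2$ are hypothetical and must be bounded independently of $n_{\mathrm{pub}}$ for the steps to go through): response 1 claims a product of two first-order factors,

$$\text{unlearning cost} \;\lesssim\; \frac{c_1}{n_{\mathrm{pub}}} \cdot \frac{c_2}{n_{\mathrm{pub}}} \;=\; O\!\left(\frac{1}{n_{\mathrm{pub}}^{2}}\right),$$

and response 2 implies a two-term decomposition in the mismatch $\delta = d_{\mathrm{TV}}(P_{\mathrm{pub}}, P_{\mathrm{priv}})$,

$$\text{unlearning cost} \;\lesssim\; \frac{C_1}{n_{\mathrm{pub}}^{2}} + \frac{C_2\,\delta}{n_{\mathrm{pub}}},$$

in which the quadratic term dominates exactly when $\delta = o(1/n_{\mathrm{pub}})$, matching the proposed corollary.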

Circularity Check

0 steps flagged

No circularity: O(1/n_pub²) bound derived from asymmetric Langevin dynamics and Rényi analysis

full rationale

The paper's central claim is a derived bound on unlearning cost suppression via public data injection in Asymmetric Langevin Unlearning. The abstract presents this as following from modified dynamics and explicit mismatch characterization under variational Rényi divergence, without any reduction to a fitted parameter, self-definitional loop, or load-bearing self-citation. No equations in the provided text equate the claimed quadratic factor to an input by construction. The derivation chain remains self-contained against external benchmarks such as standard symmetric unlearning and retraining baselines.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The central claim rests on standard assumptions of Langevin dynamics and Rényi divergence bounds plus the new modeling choice of asymmetric public-data injection; no explicit free parameters are named in the abstract, and the only invented entity is the ALU framework itself.

axioms (1)
  • domain assumption Langevin dynamics and variational Rényi divergence bounds remain valid under asymmetric public-data injection
    Invoked to derive the O(1/n_pub²) suppression factor
invented entities (1)
  • Asymmetric Langevin Unlearning (ALU) framework · no independent evidence
    purpose: Mechanism to inject public data asymmetrically for unlearning cost reduction
    New framework introduced to achieve the stated suppression

pith-pipeline@v0.9.0 · 5518 in / 1277 out tokens · 47743 ms · 2026-05-13T05:53:27.099159+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
