pith. machine review for the scientific record.

arxiv: 2604.23649 · v1 · submitted 2026-04-26 · 💻 cs.CR

Recognition: unknown

Rényi Pufferfish Privacy with Gaussian-based Priors: From Single Gaussian to Mixture Model

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 05:50 UTC · model grok-4.3

classification 💻 cs.CR
keywords Rényi Pufferfish Privacy · Gaussian mechanisms · Gaussian mixture models · optimal transport · privacy-utility tradeoff · additive noise · correlated data

The pith

Incorporating knowledge of Gaussian or Gaussian-mixture priors lets mechanisms add substantially less noise while still satisfying Rényi Pufferfish Privacy.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that Rényi Pufferfish Privacy, which protects correlated data through Rényi divergence, can be achieved with less noise when the prior distribution over the secret is known and modeled as a Gaussian or Gaussian mixture. For a single Gaussian prior it derives the exact post-perturbation Rényi divergence, supplies a closed-form sufficient condition, and proves monotonicity of the required noise in the privacy parameters. For general priors it replaces the conservative Wasserstein bound with a Gaussian-mixture approximation plus an optimal-transport sufficient condition. Experiments on UCI data with statistical and model-output queries confirm that these prior-aware calibrations consistently use less noise than existing RPP baselines.

Core claim

For single Gaussian priors the exact Rényi divergence after Gaussian perturbation is derived, producing a relaxed closed-form sufficient condition for (α,ε)-RPP together with a characterization of how the calibrated noise varies with ε and α; for general priors the secret-conditioned outputs are approximated by Gaussian mixture models and an optimal-transport-based sufficient condition is introduced that still guarantees RPP.
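The single-Gaussian route can be sketched numerically. A minimal Python sketch, assuming univariate secret-conditioned outputs N(μi, σi²) and N(μj, σj²) and the standard closed-form Rényi divergence between two Gaussians; the function names and the bisection search are illustrative, not the paper's calibration algorithm:

```python
import math

def renyi_gauss(mu1, var1, mu2, var2, alpha):
    """Closed-form Renyi divergence D_alpha(N(mu1, var1) || N(mu2, var2)).

    Finite only when the interpolated variance alpha*var2 + (1-alpha)*var1
    is positive; otherwise the divergence is infinite.
    """
    var_star = alpha * var2 + (1.0 - alpha) * var1
    if var_star <= 0.0:
        return math.inf
    return (0.5 * math.log(var2 / var1)
            + math.log(var2 / var_star) / (2.0 * (alpha - 1.0))
            + alpha * (mu1 - mu2) ** 2 / (2.0 * var_star))

def calibrate_theta(mu_i, var_i, mu_j, var_j, alpha, eps, hi=1e3, iters=100):
    """Bisection for a noise std theta such that both divergence directions
    are at most eps after adding N(0, theta^2) to the query output.
    Assumes the worst-case divergence is non-increasing in theta on [0, hi]."""
    def worst(theta):
        t2 = theta * theta
        return max(renyi_gauss(mu_i, var_i + t2, mu_j, var_j + t2, alpha),
                   renyi_gauss(mu_j, var_j + t2, mu_i, var_i + t2, alpha))
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if worst(mid) <= eps:
            hi = mid  # invariant: worst(hi) <= eps throughout
        else:
            lo = mid
    return hi
```

With the Figure 1 parameters (μi = 0, μj = 1, σi² = 0.25, σj² = 16, α = 3, ε = 1) this returns a finite θ meeting the budget in both divergence directions; the bisection invariant guarantees the returned θ satisfies the constraint even if monotonicity fails locally.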

What carries the argument

The Gaussian mechanism whose variance is calibrated either from the closed-form Rényi divergence under a single Gaussian prior or from optimal-transport distances between components of a Gaussian-mixture approximation to the prior.
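The paper's optimal-transport sufficient condition for mixtures is not reproduced here, but its flavor can be sketched with a cruder, standard bound: Rényi divergence is jointly quasi-convex, so once the two equal-weight mixtures' components are coupled one-to-one (an OT-style assignment on the weights), the mixture divergence is at most the worst matched-component divergence. A hedged sketch under that assumption:

```python
import math

def renyi_gauss(mu1, var1, mu2, var2, alpha):
    # closed-form Renyi divergence between univariate Gaussians
    var_star = alpha * var2 + (1.0 - alpha) * var1
    if var_star <= 0.0:
        return math.inf
    return (0.5 * math.log(var2 / var1)
            + math.log(var2 / var_star) / (2.0 * (alpha - 1.0))
            + alpha * (mu1 - mu2) ** 2 / (2.0 * var_star))

def gmm_bound(comps_p, comps_q, alpha, theta):
    """Upper-bound D_alpha between two equal-weight GMMs after adding
    N(0, theta^2) noise, via quasi-convexity: the mixture divergence is
    at most the worst divergence among one-to-one matched components.
    comps_*: lists of (mu, var) pairs, already matched index-by-index."""
    t2 = theta * theta
    return max(renyi_gauss(m1, v1 + t2, m2, v2 + t2, alpha)
               for (m1, v1), (m2, v2) in zip(comps_p, comps_q))
```

Sweeping θ upward until `gmm_bound` drops below ε yields a conservative calibration for the mixture-prior case; the paper's OT-based condition is presumably tighter than this max-over-components stand-in.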

If this is right

  • The calibrated noise variance decreases as the privacy budget ε relaxes or as the Rényi order α decreases.
  • The resulting mechanisms require less noise than ∞-Wasserstein baselines while still meeting the (α,ε)-RPP definition.
  • Gaussian-mixture approximations extend the method to multimodal and non-Gaussian priors without losing the formal guarantee.
  • The approach improves utility for both simple statistical queries and complex model outputs such as Bayesian neural networks and Gaussian processes.
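The first bullet can be made concrete in the special case where both secret-conditioned outputs share a variance: the exact divergence then reduces to α(μi − μj)²/(2(σ² + θ²)), so the minimal θ has a closed form. A sketch (the equal-variance simplification is ours, not a formula stated in this review):

```python
import math

def theta_min_equal_var(delta_mu, var, alpha, eps):
    """Minimal noise std when both secret-conditioned outputs are Gaussians
    with common variance `var` and mean gap `delta_mu`:
        D_alpha = alpha * delta_mu**2 / (2 * (var + theta**2)) <= eps
    solves to theta**2 >= alpha * delta_mu**2 / (2 * eps) - var."""
    need = alpha * delta_mu ** 2 / (2.0 * eps) - var
    return math.sqrt(max(0.0, need))
```

The required θ shrinks as ε grows and grows with α, matching the claimed monotonicity directions.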

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If realistic priors can be estimated from public data, the same prior-aware calibration strategy could be applied to other divergence-based privacy definitions.
  • Tighter bounds than the optimal-transport condition might further reduce noise when the mixture fit is known to be good.
  • The monotonicity results could guide adaptive privacy-budget allocation across multiple releases that share the same prior.

Load-bearing premise

That the secret-conditioned output distributions are accurately approximated by a single Gaussian or a Gaussian mixture model, so the derived sufficient conditions actually enforce the Rényi Pufferfish Privacy definition.

What would settle it

A direct numerical check of whether the Rényi divergence between the perturbed outputs for two secrets exceeds ε at the claimed noise level, or an experiment in which the privacy guarantee is violated on data whose true prior is known to be far from Gaussian or Gaussian-mixture.
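Such a check can be run on a grid. A sketch, assuming a hypothetical far-from-Gaussian pair of secret-conditioned outputs (shifted exponentials, chosen for illustration, not taken from the paper): each density is convolved with the Gaussian noise calibrated from a naive single-Gaussian moment fit, and the Rényi divergence is estimated by quadrature for comparison against ε.

```python
import math

def gauss_pdf(x, sigma):
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

def perturb(dens, xs, h, theta):
    # density of X + N(0, theta^2), by discrete convolution on the grid
    return [sum(d * gauss_pdf(x - y, theta) for d, y in zip(dens, xs)) * h
            for x in xs]

def renyi_grid(p, q, h, alpha, floor=1e-300):
    # D_alpha estimated by quadrature of p(x)^alpha * q(x)^(1-alpha)
    s = sum(max(a, floor) ** alpha * max(b, floor) ** (1.0 - alpha)
            for a, b in zip(p, q)) * h
    return math.log(s) / (alpha - 1.0)

h = 0.1
xs = [-15.0 + h * k for k in range(401)]                         # grid covering both tails
p_i = [math.exp(-x) if x >= 0.0 else 0.0 for x in xs]            # Exp(1): mean 1, var 1
p_j = [math.exp(-(x - 2.0)) if x >= 2.0 else 0.0 for x in xs]    # 2 + Exp(1): mean 3, var 1
zi, zj = sum(p_i) * h, sum(p_j) * h
p_i = [v / zi for v in p_i]                                      # renormalize on the grid
p_j = [v / zj for v in p_j]

# theta from a single-Gaussian moment fit, equal-variance case, alpha=3, eps=1:
# theta^2 = alpha * delta_mu^2 / (2 * eps) - var = 3*4/2 - 1 = 5
theta = math.sqrt(3.0 * 2.0 ** 2 / (2.0 * 1.0) - 1.0)
d = renyi_grid(perturb(p_i, xs, h, theta), perturb(p_j, xs, h, theta), h, 3.0)
```

If `d` lands above ε = 1 the single-Gaussian calibration was too optimistic for this prior; if it stays below, the check passes for this particular pair. Either outcome is informative, which is the point of the test.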

Figures

Figures reproduced from arXiv: 2604.23649 by Jincheng An, Jing Sun, Liehuang Zhu, Ni Ding, Wenjin Yang, Yong Liu, Zhen Li, Zijian Zhang.

Figure 1
Figure 1. Noise parameter θ versus privacy budget ϵ for Gaussian-prior queries. Panel (a), the α-decreasing case: Gaussian priors N(0.0, 0.5²) and N(1.0, 4.0²), i.e. µi = 0.0, µj = 1.0, σi² = 0.25, σj² = 16.0, with (µi − µj)² = 1.00; curves for α ∈ {1.5, 2, 3, 5} over ϵ ∈ [0.5, 5.0], where RHS(α, ϵ) stays above the threshold for the shown range. Panel (b): Gaussian priors N(0.0, 1.0²) and N(3.5, 1.1²)… view at source ↗
Figure 2
Figure 2. Two Gaussian-prior examples illustrating how the required noise… view at source ↗
Figure 3
Figure 3. The three real-world datasets are chosen to span demographic, education, and healthcare domains and to illustrate diverse non-Gaussian query… view at source ↗
Figure 4
Figure 4. Noise parameter θ versus privacy budget ϵ for GMM-prior queries with α = 3.0. The proposed calibration consistently attains a smaller θ than the baseline throughout the sweep, showing that Theorem IV.1 provides a tighter calibration when the secret-conditioned query distributions are better modeled by Gaussian mixtures. view at source ↗
Figure 5
Figure 5. Adult is chosen as a representative census-style demographic dataset, where the sensitive attribute is race, the released attribute is education-num, and the secrets are si = “race = White” and sj = “race = Other”. The figure shows the required noise parameter θ versus privacy budget ϵ with α = 3.0; the four panels correspond to RAW, MEAN, BNN, and GP queries. view at source ↗
Figure 6
Figure 6. Heart Disease is chosen as a representative healthcare dataset, where the sensitive attribute is slope, the released attribute is oldpeak, and the secrets are si = “slope = 1” and sj = “slope = 3”. The figure shows the required noise parameter θ versus privacy budget ϵ with α = 3.0; the four panels correspond to RAW, MEAN, BNN, and GP queries. view at source ↗
Figure 7
Figure 7. Student Performance is chosen as a representative education-performance dataset, where the sensitive attribute is schoolsup, the released attribute is G3, and the secrets are si = “schoolsup = no” and sj = “schoolsup = yes”. The figure shows the required noise parameter θ versus privacy budget ϵ with α = 3.0; the four panels correspond to RAW, MEAN, BNN, and GP queries. view at source ↗
Figure 8
Figure 8. The three real-world datasets: Adult, Heart Disease, and Student Performance. Adult releases attribute education-num while preserving sensitive attribute race, with secrets si = “race = White” and sj = “race = Other”; Heart Disease releases attribute oldpeak while preserving sensitive attribute slope, with secrets si = “slope = 1” and sj = “slope = 3”; and Student Performance releases attribute G3 whi… view at source ↗
read the original abstract

Rényi Pufferfish Privacy (RPP) provides a Rényi divergence-based privacy framework for correlated data, but existing ∞-Wasserstein mechanisms are often conservative and sacrifice data utility. We study Gaussian mechanisms for RPP under Gaussian and Gaussian-mixture priors. For single Gaussian priors, we derive the exact Rényi divergence after Gaussian perturbation, obtain a relaxed closed-form sufficient condition for (α,ε)-RPP, and characterize the monotonicity of the calibrated noise with respect to the privacy budget ε and the Rényi order α. To handle more general non-Gaussian and multimodal priors, we approximate secret-conditioned outputs with Gaussian mixture models and introduce an optimal-transport-based sufficient condition for RPP. Experiments on three UCI datasets with statistical (RAW, MEAN) and model-output (BNN, GP) queries show that our prior-aware mechanisms consistently require less noise than a recent RPP additive-noise baseline, achieving an average noise reduction of 48.9%. These results show that our mechanisms can substantially improve the privacy-utility trade-off under RPP.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript develops Gaussian mechanisms for Rényi Pufferfish Privacy (RPP) under Gaussian and Gaussian-mixture priors. For single-Gaussian priors it derives the exact Rényi divergence after additive Gaussian noise, supplies a relaxed closed-form sufficient condition for (α,ε)-RPP, and characterizes monotonicity of the calibrated noise variance. For general priors it approximates secret-conditioned output laws by Gaussian mixture models and invokes an optimal-transport sufficient condition on the approximating measures. Experiments on three UCI datasets with statistical and model-based queries report an average 48.9 % reduction in required noise relative to a recent RPP baseline.

Significance. If the derivations are correct and the GMM approximation error is demonstrably controlled, the work would improve the privacy-utility frontier for RPP on correlated data by exploiting prior structure. The explicit single-Gaussian derivations and the empirical noise reductions are concrete strengths. The significance is limited, however, by the absence of quantitative Rényi-divergence bounds on the GMM approximation, which is load-bearing for the general-prior claims.

major comments (3)
  1. [§4] GMM approximation and OT sufficient condition: the true secret-conditioned law P_{M(X)|s} is replaced by a GMM Q_s and the OT-based condition is applied only to Q_s, yet no explicit upper bound is given on D_α(P_{M(X)|s} || Q_s) nor on how this error propagates into the Rényi Pufferfish divergence. Without such a bound the reported 48.9% noise reduction cannot be guaranteed to preserve (α,ε)-RPP.
  2. [§3.2–3.3] Relaxed closed-form condition for single Gaussian: the paper states that the relaxed condition is sufficient for (α,ε)-RPP, but the step that shows the relaxation preserves the divergence bound (i.e., that the omitted terms do not increase the Rényi divergence beyond ε) is only sketched; an expanded derivation or inequality chain is required to confirm sufficiency.
  3. [Experimental section] Comparison with baseline: the noise-reduction figures rest on the assumption that the proposed mechanisms satisfy RPP; because the GMM error analysis is missing, it is unclear whether the observed utility gains are obtained while still meeting the target privacy level or whether they result from an under-calibrated noise variance.
minor comments (2)
  1. [§2] Notation for the prior parameters (μ_s, Σ_s) and the mixture weights should be introduced once in §2 and used consistently thereafter; several later equations reuse the same symbols without re-definition.
  2. [Figures] Figure captions for the noise-variance plots omit the values of α and the dataset names; adding these would improve readability.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed comments. We address each major point below and will revise the manuscript to strengthen the presentation of the theoretical results and clarify the scope of the guarantees.

read point-by-point responses
  1. Referee: [§4] GMM approximation and OT sufficient condition: the true secret-conditioned law P_{M(X)|s} is replaced by a GMM Q_s and the OT-based condition is applied only to Q_s, yet no explicit upper bound is given on D_α(P_{M(X)|s} || Q_s) nor on how this error propagates into the Rényi Pufferfish divergence. Without such a bound the reported 48.9% noise reduction cannot be guaranteed to preserve (α,ε)-RPP.

    Authors: We agree that applying the OT sufficient condition solely to the approximating GMM Q_s does not automatically yield a rigorous (α,ε)-RPP guarantee for the true conditional distribution P_{M(X)|s}. The manuscript presents the GMM step as a practical approximation for non-Gaussian priors. In the revision we will add a dedicated paragraph in §4 that (i) recalls the GMM fitting procedure, (ii) reports empirical estimates of D_α(P||Q_s) on the three UCI datasets for the query types considered, and (iii) states explicitly that the reported noise reductions are obtained under this approximation. A general closed-form propagation bound appears difficult to obtain without further assumptions on the prior; we therefore treat the GMM route as a heuristic that improves utility when the mixture fit is accurate, which the experiments support. revision: partial

  2. Referee: [§3.2–3.3] Relaxed closed-form condition for single Gaussian: the paper states that the relaxed condition is sufficient for (α,ε)-RPP, but the step that shows the relaxation preserves the divergence bound (i.e., that the omitted terms do not increase the Rényi divergence beyond ε) is only sketched; an expanded derivation or inequality chain is required to confirm sufficiency.

    Authors: We will replace the sketch in §3.2–3.3 with a complete, self-contained inequality chain. The omitted terms are non-negative and can be bounded using the monotonicity properties already established for the exact Rényi divergence under Gaussian noise; the revised proof will show that discarding them yields a strictly stronger (hence still sufficient) noise-variance condition. revision: yes

  3. Referee: [Experimental section] Comparison with baseline: the noise-reduction figures rest on the assumption that the proposed mechanisms satisfy RPP; because the GMM error analysis is missing, it is unclear whether the observed utility gains are obtained while still meeting the target privacy level or whether they result from an under-calibrated noise variance.

    Authors: The 48.9 % average reduction is computed from the noise variances that satisfy the (exact or approximate) sufficient conditions derived in the paper. For the single-Gaussian prior experiments the conditions are exact; for the mixture-prior experiments they rest on the GMM approximation. In the revised experimental section we will (i) separate the results by prior type, (ii) include the empirical D_α estimates mentioned above, and (iii) note that the utility gains are realized under the stated approximation. If the referee deems it necessary, we can also recompute the baseline comparison using a more conservative noise multiplier that accounts for a small additive error term. revision: partial

Circularity Check

0 steps flagged

No significant circularity; derivations use standard closed-form Rényi divergence for Gaussians and modeling approximations without self-referential reduction.

full rationale

The paper's core derivations for single-Gaussian priors start from the known closed-form Rényi divergence between two Gaussians after additive perturbation, then relax it to a sufficient condition for (α,ε)-RPP and analyze monotonicity of the noise scale. These steps are independent of the target privacy parameters and treat prior means/variances as exogenous inputs. For general priors the GMM approximation and optimal-transport sufficient condition are introduced as modeling choices rather than quantities fitted to the output divergence; no equation reduces the claimed RPP guarantee to a self-fit or self-citation chain. No uniqueness theorems, ansatzes smuggled via prior self-work, or renamings of empirical patterns appear. The reported utility gains rest on empirical comparison rather than on any quantity defined circularly by the result itself.

Axiom & Free-Parameter Ledger

1 free parameters · 1 axioms · 0 invented entities

The central claims rest on the assumption that data priors are exactly Gaussian or well-approximated by finite Gaussian mixtures; the noise scale is a derived quantity that depends on these priors and the privacy parameters.

free parameters (1)
  • calibrated noise variance
    The Gaussian mechanism's variance is chosen to satisfy the derived sufficient condition for the target (α,ε) pair; its value is determined by the prior parameters and privacy budget.
axioms (1)
  • domain assumption Secret-conditioned outputs are exactly Gaussian (single-prior case) or can be approximated by a Gaussian mixture model (general case).
    Invoked to obtain the exact Rényi divergence and the optimal-transport sufficient condition.

pith-pipeline@v0.9.0 · 5537 in / 1420 out tokens · 48620 ms · 2026-05-08T05:50:07.172548+00:00 · methodology

discussion (0)

