pith. machine review for the scientific record.

arxiv: 2605.09561 · v1 · submitted 2026-05-10 · 💻 cs.IT · math.IT

Recognition: no theorem link

Sparse Discrete Laplace and Gaussian Mechanisms under Local Differential Privacy


Pith reviewed 2026-05-12 04:26 UTC · model grok-4.3

classification 💻 cs.IT math.IT
keywords local differential privacy · sparse mechanisms · discrete Laplace · Gaussian mechanisms · support cardinality · privacy-sparsity tradeoff · input-dependent supports

The pith

Sparse discrete Laplace and Gaussian mechanisms achieve local differential privacy with support cardinality as the key complexity parameter.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines locally private mechanisms in which each input x maps to a small set S(x) of possible outputs, with output probabilities proportional to a distance-decaying kernel: exponential in the distance for the discrete Laplace family, squared-exponential for the Gaussian family. It gives exact conditions for these sparse channels to satisfy pure ε-local DP and approximate (ε, δ)-local DP. For the radius-truncated versions, it derives tradeoffs showing that a minimum support size is needed for nontrivial privacy, and that larger supports trade reduced leakage for higher distortion. This establishes support size as the intrinsic design parameter for such mechanisms.
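The channel family is simple to write down concretely. A minimal sketch, assuming a one-dimensional integer domain with d(x, y) = |x − y| and a metric-ball support; the function name, domain, and parameters are illustrative, not taken from the paper:

```python
import math

def sparse_laplace_dist(x, lam, radius, domain):
    """Exact output distribution of a radius-truncated sparse discrete-Laplace
    channel: M(y|x) is proportional to exp(-lam * |x - y|) on the support
    S(x) = {y in domain : |x - y| <= radius}, and zero elsewhere."""
    support = [y for y in domain if abs(y - x) <= radius]
    z = sum(math.exp(-lam * abs(y - x)) for y in support)
    return {y: math.exp(-lam * abs(y - x)) / z for y in support}
```

With radius 2 on a domain of 11 points, each input sees only the 5 nearest outputs, and the ratio of adjacent output probabilities is exp(−λ), exactly the untruncated geometric decay renormalized to the ball.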

Core claim

For both the sparse discrete-Laplace and sparse Gaussian families, with kernels w(x,y) = exp(−λ d(x,y)) and exp(−d(x,y)²/(2σ²)) respectively, exact characterizations of pure and approximate LDP are given. Pure ε-LDP is shown to hold only when all supports S(x) coincide, while for approximate (ε, δ)-LDP the privacy defect is expressed via support leakage and excess privacy loss on the overlap region. Radius-truncated specializations yield explicit privacy-sparsity tradeoffs in terms of support size s, with the Gaussian overlap term showing an additional quadratic dependence on the support radius.

What carries the argument

Sparse channels of the form M(y|x) ∝ w(x,y) 1{y ∈ S(x)} with small input-dependent support S(x), specialized to discrete Laplace and Gaussian kernels on a metric space.
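The pure-LDP coincidence condition can be probed numerically: the privacy-loss ratio blows up the moment one input's support contains an output that another input's support excludes. A hedged sketch under the same toy one-dimensional setup as above; the finite-ε search is just the generic LDP definition, not a formula from the paper:

```python
import math

def sparse_laplace_dist(x, lam, radius, domain):
    # Radius-truncated sparse discrete-Laplace channel with d(x, y) = |x - y|.
    support = [y for y in domain if abs(y - x) <= radius]
    z = sum(math.exp(-lam * abs(y - x)) for y in support)
    return {y: math.exp(-lam * abs(y - x)) / z for y in support}

def pure_ldp_epsilon(mech, domain):
    """Smallest eps with M(y|x) <= exp(eps) * M(y|x') for all x, x', y.
    Returns infinity as soon as some y lies in S(x) but not in S(x'),
    which is exactly why pure LDP forces all supports to coincide."""
    dists = {x: mech(x) for x in domain}
    eps = 0.0
    for x in domain:
        for xp in domain:
            for y, p in dists[x].items():
                q = dists[xp].get(y, 0.0)
                if q == 0.0:
                    return math.inf  # y is observable under x but not x'
                eps = max(eps, math.log(p / q))
    return eps
```

On a five-point domain, any radius below the domain diameter leaves some pair of supports unequal and drives ε to infinity; once the radius covers the whole domain, all supports coincide and ε is finite.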

If this is right

  • Choosing the smallest support size that meets the privacy target minimizes distortion while achieving the desired local DP.
  • Nontrivial approximate local privacy requires a minimum support size; below it, the mechanism cannot satisfy the (ε,δ) constraint.
  • For Gaussian kernels, the privacy-sparsity tradeoff is sharper due to quadratic dependence on support radius in the overlap term.
  • Larger supports reduce support leakage but increase distortion, balancing at the minimal viable s.
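The qualitative shape of these tradeoffs can be checked on a toy instance. The sketch below computes, for a radius-truncated Gaussian-kernel channel, the smallest δ compatible with a target ε via the generic discrete-mechanism formula δ = max over x, x' of Σ_y max(0, M(y|x) − e^ε M(y|x')); this is the standard computation, not the paper's closed form, and the domain and parameters are illustrative:

```python
import math

def sparse_gauss_dist(x, sigma, radius, domain):
    # Radius-truncated sparse Gaussian channel with d(x, y) = |x - y|.
    support = [y for y in domain if abs(y - x) <= radius]
    z = sum(math.exp(-((y - x) ** 2) / (2 * sigma ** 2)) for y in support)
    return {y: math.exp(-((y - x) ** 2) / (2 * sigma ** 2)) / z for y in support}

def privacy_defect(mech, eps, domain):
    """Smallest delta such that the channel satisfies (eps, delta)-LDP."""
    dists = {x: mech(x) for x in domain}
    return max(
        sum(max(0.0, p - math.exp(eps) * dists[xp].get(y, 0.0))
            for y, p in dists[x].items())
        for x in domain for xp in domain)

def worst_case_distortion(mech, domain):
    """Largest first distortion moment E|Y - x| over inputs."""
    return max(sum(p * abs(y - x) for y, p in mech(x).items()) for x in domain)

domain = list(range(5))  # diameter H = 4
for radius in (1, 2, 3):
    mech = lambda x, r=radius: sparse_gauss_dist(x, 1.0, r, domain)
    print(radius, privacy_defect(mech, 1.0, domain),
          worst_case_distortion(mech, domain))
```

Below the overlap threshold (radius 1, where the supports of the two extreme inputs are disjoint) the defect is exactly 1, i.e., no nontrivial (ε, δ) guarantee is possible; enlarging the support drives δ down while the distortion moment grows, matching the minimum-support-size and leakage-versus-distortion claims in spirit.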

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • These mechanisms could extend to other kernels or metrics where support truncation controls privacy loss similarly.
  • If the metric d(x,y) has specific structure like in graphs or lattices, the minimal support size might admit closed-form expressions.
  • Implementing such mechanisms in practice would require efficient ways to sample from the truncated distributions while maintaining the exact privacy guarantees.
  • The identification of support cardinality suggests it as a universal design knob for sparse local privacy across different noise families.

Load-bearing premise

The analysis assumes the admissible output sets S(x) are small and can depend on the private input x, with a fixed distance metric d(x,y) for the radius-truncated cases.

What would settle it

If a radius-truncated sparse Gaussian mechanism with support size below the derived minimum fails to satisfy the target (ε,δ)-LDP, or if increasing support size beyond it unexpectedly improves both privacy and distortion simultaneously, the tradeoff claims would be falsified.

Figures

Figures reproduced from arXiv: 2605.09561 by Amirreza Zamani, Mikael Skoglund, Parastoo Sadeghi, Sajad Daei.

Figure 1
Figure 1. Laplace support-size tradeoff for (ε, λ, H) = (1, 0.8, 2): the exact privacy defect decreases with s, while the first distortion moment increases. The accompanying support-size sweep (Table I) fixes (ε, λ, H) = (1, 0.5, 3) and varies s, illustrating the feasibility threshold from Theorem 4: when s = 3, one has s − 1 = 2 < H = 3, and the exact privacy defect is 1; once the support becomes lar…
Figure 3
Figure 3. Laplace concentration sweep. At fixed sparsity, increasing …
original abstract

We study sparse locally private channels of the form $M(y\mid x)\propto w(x,y) 1\{y\in S(x)\},$ where the admissible output set $S(x)$ is allowed to depend on the private input $x$ and is assumed to be small. Here, we consider the sparse discrete-Laplace family with kernel $w(x,y)=e^{-\lambda d(x,y)}$ and the sparse Gaussian family with kernel $w(x,y)=e^{-d(x,y)^2/(2\sigma^2)}$. For both families we give exact characterizations of pure and approximate local differential privacy. For pure $\varepsilon$-local differential privacy, we show that input-dependent sparse supports are obtained when all supports coincide. For $(\varepsilon,\delta)$-local differential privacy, we derive exact formulas for the privacy defect in terms of support leakage and excess privacy loss on the overlap region. We then specialize the analysis to radius-truncated sparse discrete-Laplace and radius-truncated sparse Gaussian mechanisms and obtain explicit privacy-sparsity tradeoffs in terms of the support size $s$. In particular, we show that nontrivial approximate local privacy requires a minimum support size, whereas larger supports reduce support leakage but increase distortion. For the Gaussian family, the overlap term exhibits an additional quadratic dependence on the support radius, which implies a sharper tradeoff between privacy and sparsity. These results identify the support cardinality as the intrinsic complexity parameter of the mechanism and yield an optimal design principle: choose the smallest support size that satisfies the target privacy constraint.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper examines sparse locally private mechanisms of the form M(y|x) ∝ w(x,y) 1{y ∈ S(x)} with input-dependent small supports S(x). It derives exact characterizations of pure ε-LDP and (ε,δ)-LDP for the sparse discrete Laplace family (kernel e^{-λ d(x,y)}) and sparse Gaussian family (kernel e^{-d(x,y)^2/(2σ^2)}). For pure LDP it shows that input-dependent supports arise only when all S(x) coincide. For approximate LDP it gives formulas for the privacy defect via support leakage and excess loss on overlaps. The analysis then specializes to radius-truncated supports to obtain explicit privacy-sparsity tradeoffs in terms of support cardinality s, showing a minimum s is needed for nontrivial approximate LDP and that larger s reduces leakage but increases distortion (with an extra quadratic term for the Gaussian case). The results identify s as the intrinsic complexity parameter and recommend choosing the smallest s meeting the target privacy level.

Significance. If the exact characterizations and tradeoffs hold, the work supplies a concrete design rule for communication-efficient LDP mechanisms by minimizing support size while controlling privacy loss, which is relevant for practical local privacy deployments. The explicit expressions for privacy defect in terms of leakage and overlap, together with the specialization yielding closed-form sparsity-privacy curves, constitute a useful technical contribution even if limited to the radius-truncated geometry.

major comments (3)
  1. [Abstract / specialization analysis] Abstract and the specialization paragraph: the claim that support cardinality s is 'the intrinsic complexity parameter' and that the optimal design is to 'choose the smallest support size' is load-bearing for the paper's main takeaway, yet these statements are obtained only after restricting to radius-truncated supports under a fixed metric d(x,y). The general characterizations (in terms of support leakage and excess loss on the overlap) do not by themselves establish that s alone determines the privacy-utility tradeoff for arbitrary shapes of S(x) with the same cardinality; other geometries could alter leakage or distortion for fixed s.
  2. [characterizations of pure and approximate LDP] The exact formulas for the privacy defect (support leakage plus excess loss on overlap) are presented as closed-form, but the manuscript provides no verification artifacts or proof sketches for the derivations of the pure-LDP coincidence condition or the (ε,δ) expressions; without these the central characterizations cannot be checked for hidden assumptions on the kernel or the metric d.
  3. [radius-truncated specialization] The weakest assumption noted in the analysis—that S(x) may depend on x and is small—is used throughout, yet the design principle of minimal s is derived only for the radius-truncated case; if the paper intends the result to apply more broadly, a counter-example or argument showing that other S(x) of size s cannot improve the tradeoff is needed to support the 'intrinsic' claim.
minor comments (2)
  1. [preliminaries] Notation for the admissible set S(x) and the distance d(x,y) should be introduced with a dedicated preliminary subsection to avoid ambiguity when supports vary with x.
  2. [Abstract] The abstract states 'nontrivial approximate local privacy requires a minimum support size' for the truncated case; a brief remark on whether this minimum is independent of the input domain size would clarify the scope.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the thoughtful review and valuable suggestions. We address each major comment below, clarifying the scope of our results and committing to revisions that strengthen the presentation of the characterizations and specializations.

point-by-point responses
  1. Referee: [Abstract / specialization analysis] Abstract and the specialization paragraph: the claim that support cardinality s is 'the intrinsic complexity parameter' and that the optimal design is to 'choose the smallest support size' is load-bearing for the paper's main takeaway, yet these statements are obtained only after restricting to radius-truncated supports under a fixed metric d(x,y). The general characterizations (in terms of support leakage and excess loss on the overlap) do not by themselves establish that s alone determines the privacy-utility tradeoff for arbitrary shapes of S(x) with the same cardinality; other geometries could alter leakage or distortion for fixed s.

    Authors: We agree with the referee that the identification of s as the intrinsic complexity parameter and the recommendation to choose the smallest support size are conclusions drawn specifically from the radius-truncated specialization. The general characterizations express the privacy defect in terms of support leakage and excess loss, which depend on the particular choice of the supports S(x). In the radius-truncated case, due to the uniform structure of metric balls, these quantities become functions of the cardinality s alone. We will revise the abstract and the relevant paragraphs to qualify these statements as applying to the radius-truncated sparse mechanisms. This revision will clarify that while s is key in this natural family, general supports may require case-by-case analysis. revision: yes

  2. Referee: [characterizations of pure and approximate LDP] The exact formulas for the privacy defect (support leakage plus excess loss on overlap) are presented as closed-form, but the manuscript provides no verification artifacts or proof sketches for the derivations of the pure-LDP coincidence condition or the (ε,δ) expressions; without these the central characterizations cannot be checked for hidden assumptions on the kernel or the metric d.

    Authors: The full proofs for the pure ε-LDP result (input-dependent supports only when S(x) are identical for all x) and the (ε,δ)-LDP privacy defect formulas are included in the appendix. To address this, we will incorporate concise proof sketches into the main body of the paper, outlining the key steps: starting from the definition of LDP, deriving the condition for pure privacy, and decomposing the approximate privacy loss into leakage from non-overlapping supports and excess loss within overlaps. These sketches will confirm there are no additional assumptions beyond the kernel definitions and the metric properties used. revision: yes

  3. Referee: [radius-truncated specialization] The weakest assumption noted in the analysis—that S(x) may depend on x and is small—is used throughout, yet the design principle of minimal s is derived only for the radius-truncated case; if the paper intends the result to apply more broadly, a counter-example or argument showing that other S(x) of size s cannot improve the tradeoff is needed to support the 'intrinsic' claim.

    Authors: The analysis uses the general form with input-dependent small supports, but the explicit tradeoff curves are obtained by specializing to radius-truncated supports, which are a canonical choice that minimizes the expected distortion for a given cardinality under the distance metric. We will add a discussion in the specialization section arguing that for a fixed metric, the ball-shaped supports optimize the concentration of probability mass, thereby providing the best privacy-sparsity curve among supports of size s. While an exhaustive comparison to all possible subsets is intractable, this geometric optimality supports the design rule in the contexts considered. We do not claim universality beyond this family. revision: partial

Circularity Check

0 steps flagged

No circularity: derivations rest on standard LDP definitions and explicit geometry

full rationale

The paper's characterizations of pure and approximate LDP follow directly from the definitions of local differential privacy applied to the given sparse kernel forms w(x,y) and indicator on S(x). The subsequent specialization to radius-truncated supports produces explicit formulas for privacy defect (support leakage plus excess loss on overlap) that are computed from the metric d and radius; the resulting privacy-sparsity tradeoffs and minimum-support-size statements are consequences of those formulas rather than presupposed inputs. Support cardinality s emerges as the complexity parameter only after these derivations, with no self-definitional loops, fitted parameters renamed as predictions, or load-bearing self-citations reducing the central claims to their own assumptions. The derivation chain remains self-contained against the external benchmarks of LDP definitions.

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 0 invented entities

The central claims rest on the assumed functional form of the sparse channel and standard properties of the exponential function and probability normalization; no new entities are postulated.

free parameters (2)
  • support size s
    Chosen as the design variable that trades privacy against distortion; appears as the explicit parameter in the final tradeoff statements.
  • kernel scale λ or σ
    Privacy-noise parameter in the Laplace or Gaussian kernel; controls the base mechanism before sparsity truncation.
axioms (2)
  • domain assumption The mechanism takes the normalized form M(y|x) ∝ w(x,y) 1{y ∈ S(x)} with w exponential in distance.
    This is the defining assumption of the sparse channel family under study.
  • standard math The distance function d(x,y) is a fixed metric on the domain.
    Invoked throughout the privacy-loss calculations.



Reference graph

Works this paper leans on

19 extracted references · 19 canonical work pages

  1. C. Dwork, F. McSherry, K. Nissim, and A. Smith, "Calibrating noise to sensitivity in private data analysis," in Theory of Cryptography Conference. Springer, 2006, pp. 265–284.
  2. C. Dwork and A. Roth, "The algorithmic foundations of differential privacy," Foundations and Trends® in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–487, 2014.
  3. Ú. Erlingsson, V. Pihur, and A. Korolova, "RAPPOR: Randomized aggregatable privacy-preserving ordinal response," in Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, 2014, pp. 1054–1067.
  4. J. Acharya, P. Kairouz, Y. Liu, and Z. Sun, "Estimating sparse discrete distributions under privacy and communication constraints," in Algorithmic Learning Theory. PMLR, 2021, pp. 79–98.
  5. M. E. Andrés, N. E. Bordenabe, K. Chatzikokolakis, and C. Palamidessi, "Geo-indistinguishability: Differential privacy for location-based systems," in Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, 2013, pp. 901–914.
  6. L. Kacem and C. Palamidessi, "Geometric noise for locally private counting queries," in Proceedings of the 13th Workshop on Programming Languages and Analysis for Security, 2018, pp. 13–16.
  7. B. Balle and Y.-X. Wang, "Improving the Gaussian mechanism for differential privacy: Analytical calibration and optimal denoising," in International Conference on Machine Learning. PMLR, 2018, pp. 394–403.
  8. C. L. Canonne, G. Kamath, and T. Steinke, "The discrete Gaussian for differential privacy," Advances in Neural Information Processing Systems, vol. 33, pp. 15676–15688, 2020.
  9. S. L. Warner, "Randomized response: A survey technique for eliminating evasive answer bias," Journal of the American Statistical Association, vol. 60, no. 309, pp. 63–69, 1965.
  10. J. C. Duchi, M. I. Jordan, and M. J. Wainwright, "Local privacy and minimax bounds: Sharp rates for probability estimation," in Advances in Neural Information Processing Systems, 2014.
  11. J. C. Duchi, M. I. Jordan, and M. J. Wainwright, "Minimax optimal procedures for locally private estimation," Journal of the American Statistical Association, vol. 113, no. 521, pp. 182–201, 2018.
  12. F. McSherry and K. Talwar, "Mechanism design via differential privacy," in 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS'07). IEEE, 2007, pp. 94–103.
  13. Q. Geng and P. Viswanath, "The optimal mechanism in differential privacy," in 2014 IEEE International Symposium on Information Theory. IEEE, 2014, pp. 2371–2375.
  14. P. Kairouz, S. Oh, and P. Viswanath, "Extremal mechanisms for local differential privacy," Advances in Neural Information Processing Systems, vol. 27, 2014.
  15. P. Kairouz, K. Bonawitz, and D. Ramage, "Discrete distribution estimation under local privacy," in International Conference on Machine Learning. PMLR, 2016, pp. 2436–2444.
  16. N. Agarwal, P. Kairouz, and Z. Liu, "The Skellam mechanism for differentially private federated learning," Advances in Neural Information Processing Systems, vol. 34, pp. 5052–5064, 2021.
  17. W.-N. Chen, A. Özgür, and P. Kairouz, "The Poisson binomial mechanism for secure and private federated learning," arXiv preprint arXiv:2207.09916, 2022.
  18. T. Murakami, H. Hino, J. Sakuma, Y. Kawamoto, and C. Troncoso, "Utility-optimized local differential privacy mechanisms for distribution estimation," in USENIX Security Symposium, 2019.
  19. P. Sadeghi and C.-H. Chien, "On the connection between the ABS perturbation methodology and differential privacy," Journal of Privacy and Confidentiality, vol. 14, no. 2, Jul. 2024. [Online]. Available: https://journalprivacyconfidentiality.org/index.php/jpc/article/view/859