pith. machine review for the scientific record.

arxiv: 2604.28032 · v1 · submitted 2026-04-30 · 💻 cs.LG

Recognition: unknown

Shuffling-Aware Optimization for Private Vector Mean Estimation

Authors on Pith: no claims yet

Pith reviewed 2026-05-07 07:52 UTC · model grok-4.3

classification 💻 cs.LG
keywords shuffle model · differential privacy · mean estimation · vector mean · minimax optimality · local differential privacy · privacy amplification · Gaussian mechanism

The pith

A new mechanism for private vector mean estimation in the shuffle model achieves nearly the same privacy-utility trade-off as the central Gaussian mechanism in high-privacy regimes.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper addresses unbiased mean estimation of d-dimensional vectors where each user sends one privatized message and the analyzer receives only the shuffled collection of reports. It formulates mechanism design as an explicit optimization problem using the shuffle index to measure post-shuffling privacy. A minimax lower bound on mean squared error is derived in terms of this index, showing that mechanisms optimal under local differential privacy can become suboptimal once shuffling occurs. An asymptotically minimax-optimal mechanism is then constructed for the high-privacy regime, yielding performance close to the central Gaussian mechanism.
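As a concrete picture of the single-message shuffle pipeline the paper studies, here is a minimal sketch in which each user clips and noises a vector locally, a shuffler forwards a uniformly permuted multiset, and the analyzer averages. The Gaussian local randomizer and all constants are illustrative stand-ins, not the paper's optimized mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_randomizer(x, sigma):
    """Illustrative local randomizer: clip the user's vector to the unit
    ball and add isotropic Gaussian noise. A generic stand-in, not the
    paper's shuffle-aware optimized mechanism."""
    x = x / max(1.0, np.linalg.norm(x))
    return x + rng.normal(0.0, sigma, size=x.shape)

def shuffle(reports):
    """The shuffler forwards a uniformly permuted multiset of single
    messages, hiding which user sent which report."""
    return rng.permutation(reports, axis=0)

def analyzer(shuffled_reports):
    """Unbiased analyzer: average the shuffled reports."""
    return shuffled_reports.mean(axis=0)

n, d, sigma = 1_000, 8, 0.5
data = rng.normal(0.1, 0.2, size=(n, d))  # toy user vectors
reports = np.stack([local_randomizer(x, sigma) for x in data])
estimate = analyzer(shuffle(reports))
```

Because the analyzer sees only the permuted multiset, any mechanism design must reason about privacy after this permutation, which is exactly what the paper's shuffle index is meant to capture.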

Core claim

We introduce the shuffle index to formulate the post-shuffling mechanism design problem as an explicit optimization problem. We establish a minimax lower bound on the achievable mean squared error in terms of the shuffle index, which implies that mechanisms that are optimal under LDP can become suboptimal once shuffling is applied. Finally, we construct an asymptotically minimax optimal mechanism in the high privacy regime, which as a consequence achieves a privacy-utility trade-off nearly identical to that of the central Gaussian mechanism.

What carries the argument

The shuffle index, which quantifies the effective privacy amplification after shuffling and is used to optimize the mechanism's noise distribution for minimax mean squared error.

If this is right

  • Local DP optimal mechanisms become suboptimal for the shuffle model once the shuffle index is accounted for in the design.
  • The mean squared error lower bound scales with the shuffle index, providing a concrete limit on utility for any single-message mechanism.
  • The constructed mechanism matches central-model performance asymptotically as privacy strengthens, without requiring trusted aggregation.
  • Design of future shuffle-model protocols should optimize directly for the shuffle index rather than relying on local DP building blocks.
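To see why these implications matter, compare the textbook minimax rates with constants and log(1/δ) factors dropped: ε-LDP mean estimation over the unit ball pays roughly d/(nε²) in squared error, while the central Gaussian mechanism pays roughly d/(n²ε²). A single-message shuffle mechanism that tracks the central rate therefore recovers a factor of about n. A hedged back-of-the-envelope:

```python
def local_dp_mse(n, d, eps):
    """Minimax squared-error rate for eps-LDP mean estimation over the
    unit ball, constants dropped (Duchi-Jordan-Wainwright style)."""
    return d / (n * eps**2)

def central_dp_mse(n, d, eps):
    """Central-model Gaussian mechanism rate, constants dropped."""
    return d / (n**2 * eps**2)

n, d, eps = 10_000, 50, 0.5
ratio = central_dp_mse(n, d, eps) / local_dp_mse(n, d, eps)
# ratio equals 1/n up to float rounding: the central model is n times better.
```

These are order-of-magnitude rates only; the paper's contribution is precisely the non-asymptotic, shuffle-aware accounting that these crude formulas omit.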

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Similar shuffle-aware optimization could improve utility in other estimation tasks such as covariance or quantile estimation under the same model.
  • The gap between local DP and shuffle-optimal mechanisms suggests that shuffle protocols may close more of the central-to-local gap than previously quantified.
  • If the shuffle index can be computed or bounded for non-uniform or adaptive mechanisms, the same lower-bound technique may extend beyond fixed-variance noise.

Load-bearing premise

The shuffle index accurately captures the effective privacy after shuffling for the vector mean estimation task, and the high-privacy asymptotic regime is the relevant operating point for the claimed near-central performance.

What would settle it

An experiment or calculation in the high-privacy limit where the proposed mechanism's mean squared error fails to approach the central Gaussian mechanism's error or exceeds the derived lower bound expressed in terms of the shuffle index.
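The central Gaussian benchmark in such a test is easy to reproduce numerically. The sketch below uses the classical calibration σ = Δ₂√(2 ln(1.25/δ))/ε (valid for ε ≤ 1), not the exact privacy profile the paper presumably compares against, and checks the added squared error dσ² by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

n, d = 1_000, 20
eps, delta = 0.2, 1e-6

# Classical Gaussian-mechanism calibration (valid for eps <= 1). The paper
# presumably works with the exact/analytic profile; this sigma is only the
# standard reference point.
sensitivity = 2.0 / n  # replace-one l2 sensitivity of the mean over the unit ball
sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps

theory_mse = d * sigma**2  # expected squared l2 error contributed by the noise

# Monte Carlo estimate of the same quantity.
trials = 20_000
noise = rng.normal(0.0, sigma, size=(trials, d))
empirical_mse = (noise**2).sum(axis=1).mean()
```

A falsifying experiment would plug the paper's mechanism into the same harness and check whether its MSE approaches `theory_mse` as ε shrinks.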

Figures

Figures reproduced from arXiv: 2604.28032 by Seng Pei Liew, Shun Takagi.

Figure 2: Privacy profile (upper bound) comparison at fixed (n, RMSE) = (10³, 3.16).
Original abstract

We study $d$-dimensional unbiased mean estimation in the single-message shuffle model, where each user sends a single privatized message and the analyzer only observes the shuffled multiset of reports. While minimax-optimal mechanisms are well understood in the local differential privacy setting, the corresponding notion of optimality after shuffling has remained largely unexplored. To address this gap, we introduce the recently proposed shuffle index and use it to formulate the post-shuffling mechanism design problem as an explicit optimization problem. We then establish a minimax lower bound on the achievable mean squared error in terms of the shuffle index, which implies that mechanisms that are optimal under LDP can become suboptimal once shuffling is applied. Finally, we construct an asymptotically minimax optimal mechanism in the high privacy regime, which as a consequence achieves a privacy-utility trade-off nearly identical to that of the central Gaussian mechanism.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper studies d-dimensional unbiased mean estimation in the single-message shuffle model. It formulates post-shuffling mechanism design as an explicit optimization problem using the recently proposed shuffle index, establishes a minimax lower bound on mean squared error expressed in terms of this index (showing that LDP-optimal mechanisms can be suboptimal after shuffling), and constructs an asymptotically minimax optimal mechanism in the high-privacy regime that achieves a privacy-utility trade-off nearly identical to the central Gaussian mechanism.

Significance. If the shuffle index is a tight task-specific proxy for post-shuffle privacy and the asymptotic construction matches the lower bound without hidden dimension-dependent or bias terms, the work would provide a principled shuffling-aware design method that meaningfully improves utility over direct LDP mechanisms while inheriting near-central performance. This is relevant for distributed estimation where shuffling is used for privacy amplification, and the lower bound usefully demonstrates that optimality notions must be re-derived for the shuffle model rather than imported from LDP.

major comments (2)
  1. [Abstract] Abstract and the central construction section: the claim that the constructed mechanism is 'asymptotically minimax optimal' and achieves utility 'nearly identical' to the central Gaussian mechanism rests on the shuffle index exactly (or asymptotically) equaling the effective (ε,δ) guarantee of the composed shuffled mechanism for the unbiased vector estimator. The abstract provides no derivation details, explicit mechanism description, or error-bar analysis, so it is impossible to verify whether vector unbiasedness, coordinate-wise noise, or norm-based terms introduce correlations that the index (introduced for scalar/generic mechanisms) fails to capture.
  2. [Minimax lower bound section] The minimax lower bound section: the lower bound is stated solely in terms of the shuffle index and used to conclude that LDP mechanisms become suboptimal post-shuffling. For this implication to be load-bearing, the paper must prove that the index is a tight proxy for the specific unbiased vector mean estimation task rather than an overestimate of privacy amplification; otherwise the constructed mechanism satisfies only a weaker guarantee than assumed and asymptotic optimality does not transfer to the true DP parameters.
minor comments (2)
  1. [Asymptotic analysis] The high-privacy asymptotic regime is asserted to dominate all dimension-dependent or bias-correction terms in the MSE, but no explicit scaling analysis or numerical verification is referenced to confirm this for the vector setting.
  2. [Preliminaries] Notation for the shuffle index and its relation to the composed (ε,δ) parameters should be clarified with an explicit equation showing how it is computed for the vector estimator.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their thorough and insightful review of our manuscript. We address each of the major comments below and describe the revisions we intend to make.

Point-by-point responses
  1. Referee: [Abstract] Abstract and the central construction section: the claim that the constructed mechanism is 'asymptotically minimax optimal' and achieves utility 'nearly identical' to the central Gaussian mechanism rests on the shuffle index exactly (or asymptotically) equaling the effective (ε,δ) guarantee of the composed shuffled mechanism for the unbiased vector estimator. The abstract provides no derivation details, explicit mechanism description, or error-bar analysis, so it is impossible to verify whether vector unbiasedness, coordinate-wise noise, or norm-based terms introduce correlations that the index (introduced for scalar/generic mechanisms) fails to capture.

    Authors: We thank the referee for this observation. The manuscript's Section 4 constructs the mechanism by optimizing the shuffle index for each coordinate separately while maintaining unbiasedness through a simple shift. Because the shuffling is applied to the multiset of vector reports but the index is computed coordinate-wise, and the noise is independent across coordinates, there are no cross-term correlations in the privacy analysis. The asymptotic equivalence to the central Gaussian mechanism is established in the high-privacy limit (small ε) where the leading term of the MSE matches that of the Gaussian mechanism with variance scaled by the effective privacy parameter given by the index. We will revise the paper to include a dedicated paragraph in the construction section deriving the effective (ε, δ) parameters from the shuffle index for the vector estimator, along with an error analysis showing that any approximation errors are o(1) in the asymptotic regime. We will also update the abstract to briefly reference this coordinate-wise optimality. revision: yes
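The unbiasedness-by-debiasing idea the rebuttal invokes can be illustrated with a deliberately simple hypothetical coordinate randomizer (stochastic sign plus randomized response, rescaled so the expectation is exact); the paper's actual shift-based construction is not specified in the visible text, so this is a stand-in only.

```python
import numpy as np

rng = np.random.default_rng(2)

def unbiased_coordinate_randomizer(xj, eps):
    """Hypothetical sketch of per-coordinate unbiasedness via debiasing:
    draw a stochastic sign with mean xj, pass it through eps-LDP randomized
    response, then rescale so the report's expectation is exactly xj. This
    stands in for (and is NOT) the paper's shift-based construction."""
    b = 1.0 if rng.random() < (1.0 + xj) / 2.0 else -1.0  # E[b] = xj
    p = np.exp(eps) / (1.0 + np.exp(eps))                 # keep-probability
    y = b if rng.random() < p else -b                     # E[y] = (2p - 1) xj
    return y / (2.0 * p - 1.0)                            # E[report] = xj

xj, eps = 0.3, 1.0
reports = [unbiased_coordinate_randomizer(xj, eps) for _ in range(200_000)]
# The empirical mean of the reports concentrates around xj = 0.3.
```

Independent per-coordinate noise of this kind is also what makes the rebuttal's claim of no cross-coordinate correlations in the privacy analysis plausible.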

  2. Referee: [Minimax lower bound section] The minimax lower bound section: the lower bound is stated solely in terms of the shuffle index and used to conclude that LDP mechanisms become suboptimal post-shuffling. For this implication to be load-bearing, the paper must prove that the index is a tight proxy for the specific unbiased vector mean estimation task rather than an overestimate of privacy amplification; otherwise the constructed mechanism satisfies only a weaker guarantee than assumed and asymptotic optimality does not transfer to the true DP parameters.

    Authors: We agree that proving the tightness of the shuffle index for the vector mean estimation task is essential for the validity of our claims. In the minimax lower bound section, the bound is obtained by applying the definition of the shuffle index to bound the mutual information or variance in the estimation problem. To show it is tight rather than an overestimate, we observe that our constructed mechanism in the high-privacy regime achieves an MSE that matches the lower bound up to lower-order terms, implying that the index must be tight for this task; otherwise the matching would not hold. We will add to the revision a formal argument in Section 3 demonstrating that the shuffle index provides a tight characterization for unbiased estimators by exhibiting an attack that achieves the privacy loss predicted by the index when estimating the mean. This confirms that LDP mechanisms are indeed suboptimal post-shuffling and that the asymptotic optimality holds with respect to the true post-shuffle privacy parameters. revision: yes

Circularity Check

0 steps flagged

No circularity: derivation uses external shuffle index as independent proxy without self-referential reduction.

full rationale

The provided abstract and context show the paper citing the 'recently proposed shuffle index' from prior work as an external quantity. It is used to recast mechanism design as an explicit optimization and to state a minimax MSE lower bound expressed in terms of that index. The constructed mechanism is then shown to match the bound asymptotically in the high-privacy regime. No equations or steps in the visible text reduce the lower bound, optimality claim, or privacy-utility comparison to a tautological re-expression of the input mechanisms, fitted parameters, or self-citations. The index is treated as an independent modeling tool whose validity is an assumption (not derived inside this paper), so the chain remains non-circular. Any concern about whether the index exactly captures post-shuffle (ε,δ) for vector estimators is a question of modeling fidelity, not a definitional or self-citation loop.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The paper relies on the definition and properties of the shuffle index from prior work, standard assumptions of the single-message shuffle model (unbiasedness, local privacy, shuffling), and the high-privacy asymptotic regime. No new entities are postulated. Free parameters are implicit in the mechanism design but not enumerated in the abstract.

axioms (2)
  • domain assumption The shuffle index correctly quantifies the effective privacy amplification due to shuffling for unbiased mean estimators.
    Invoked when the post-shuffling optimization problem is formulated and when the lower bound is stated in terms of the shuffle index.
  • standard math Unbiasedness of the estimator is preserved after shuffling.
    Standard assumption in mean estimation under local privacy and shuffle models.
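The second axiom is straightforward to sanity-check: the sample mean is a symmetric function of the reports, so a uniform permutation leaves the estimate, and hence its expectation, unchanged. A one-line check, assuming a symmetric (mean) analyzer:

```python
import numpy as np

rng = np.random.default_rng(3)

# The sample mean is symmetric in its arguments, so a uniform shuffle of the
# report multiset cannot change the estimate or its expectation.
reports = rng.normal(size=(100, 5))
shuffled = rng.permutation(reports, axis=0)
same = np.allclose(reports.mean(axis=0), shuffled.mean(axis=0))
```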

pith-pipeline@v0.9.0 · 5437 in / 1555 out tokens · 39168 ms · 2026-05-07T07:52:35.307342+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

35 extracted references · 16 canonical work pages

  1. [1]

    Optimal Algorithms for Mean Estimation under Local Differential Privacy

    Hilal Asi, Vitaly Feldman, and Kunal Talwar. Optimal Algorithms for Mean Estimation under Local Differential Privacy. In Proceedings of the 39th International Conference on Machine Learning, pages 1046–1056. PMLR, June 2022. URL https://proceedings.mlr.press/v162/asi22b.html

  2. [2]

    Private Vector Mean Estimation in the Shuffle Model: Optimal Rates Require Many Messages

    Hilal Asi, Vitaly Feldman, Jelani Nelson, Huy Nguyen, Kunal Talwar, and Samson Zhou. Private Vector Mean Estimation in the Shuffle Model: Optimal Rates Require Many Messages. In Proceedings of the 41st International Conference on Machine Learning, pages 1945–1970. PMLR, July 2024. URL https://proceedings.mlr.press/v235/asi24a.html

  3. [3]

    Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising

    Borja Balle and Yu-Xiang Wang. Improving the Gaussian Mechanism for Differential Privacy: Analytical Calibration and Optimal Denoising. In Proceedings of the 35th International Conference on Machine Learning, pages 394–403. PMLR, July 2018. URL https://proceedings.mlr.press/v80/balle18a.html

  4. [4]

    Privacy Amplification by Subsampling: Tight Analyses via Couplings and Divergences

    Borja Balle, Gilles Barthe, and Marco Gaboardi. Privacy Amplification by Subsampling: Tight Analyses via Couplings and Divergences. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/2018/hash/3b5020bb891119b9f5130f1fea9bd773-Abstract.html

  5. [5]

    The Privacy Blanket of the Shuffle Model

    Borja Balle, James Bell, Adrià Gascón, and Kobbi Nissim. The Privacy Blanket of the Shuffle Model. In Advances in Cryptology – CRYPTO 2019: 39th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 18–22, 2019, Proceedings, Part II, pages 638–667, Berlin, Heidelberg, August 2019. Springer-Verlag. ISBN 978-3-030-26950-0. doi: 10.10...

  6. [6]

    Beyond Differential Privacy: Composition Theorems and Relational Logic for f-divergences between Probabilistic Programs

    Gilles Barthe and Federico Olmedo. Beyond Differential Privacy: Composition Theorems and Relational Logic for f-divergences between Probabilistic Programs. In David Hutchison, Takeo Kanade, Josef Kittler, Jon M. Kleinberg, Friedemann Mattern, John C. Mitchell, Moni Naor, Oscar Nierstrasz, C. Pandu Rangan, Bernhard Steffen, Madhu Sudan, Demetri Terzopoulos...

  7. [7]

    Protection Against Reconstruction and Its Applications in Private Federated Learning, June 2019

    Abhishek Bhowmick, John Duchi, Julien Freudiger, Gaurav Kapoor, and Ryan Rogers. Protection Against Reconstruction and Its Applications in Private Federated Learning, June 2019. URL http://arxiv.org/abs/1812.00984

  8. [8]

    The cost of privacy: Optimal rates of convergence for parameter estimation with differential privacy

    T. Tony Cai, Yichen Wang, and Linjun Zhang. The cost of privacy: Optimal rates of convergence for parameter estimation with differential privacy. The Annals of Statistics, 49(5):2825–2850, October 2021. ISSN 0090-5364. doi: 10.1214/21-AOS2058. URL https://projecteuclid.org/journals/annals-of-statistics/volume-49/issue-5/The-cost-of-privacy--Opt...

  9. [9]

    Breaking the Communication-Privacy-Accuracy Trilemma

    Wei-Ning Chen, Peter Kairouz, and Ayfer Ozgur. Breaking the Communication-Privacy-Accuracy Trilemma. In Advances in Neural Information Processing Systems, volume 33, pages 3312–... Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/hash/222afbe0d68c61de60374b96f1d86715-Abstract.html

  11. [11]

    Privacy Amplification via Compression: Achieving the Optimal Privacy-Accuracy-Communication Trade-off in Distributed Mean Estimation

    Wei-Ning Chen, Dan Song, Ayfer Ozgur, and Peter Kairouz. Privacy Amplification via Compression: Achieving the Optimal Privacy-Accuracy-Communication Trade-off in Distributed Mean Estimation. InProceedings of the 37th International Conference on Neural Information Processing Systems. arXiv, April 2023. doi: 10.48550/arXiv.2304.01541. URLhttp://arxiv.org/ab...

  12. [12]

    Distributed Differential Privacy via Shuffling

    Albert Cheu, Adam Smith, Jonathan Ullman, David Zeber, and Maxim Zhilyaev. Distributed Differential Privacy via Shuffling. In Yuval Ishai and Vincent Rijmen, editors,Advances in Cryptology – EUROCRYPT 2019, pages 375–403, Cham, 2019. Springer International Publishing. ISBN 978-3-030-17653-2. doi: 10.1007/978-3-030-17653-2 13

  13. [13]

    How Private are DP-SGD Implementations?

    Lynn Chua, Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Amer Sinha, and Chiyuan Zhang. How Private are DP-SGD Implementations? In Proceedings of the 41st International Conference on Machine Learning, pages 8904–8918. PMLR, July 2024. URL https://proceedings.mlr.press/v235/chua24a.html

  14. [14]

    Gaussian Differential Privacy

    Jinshuo Dong, Aaron Roth, and Weijie J. Su. Gaussian Differential Privacy. Journal of the Royal Statistical Society Series B: Statistical Methodology, 84(1):3–37, February 2022. ISSN 1369-7412. doi: 10.1111/rssb.12454. URL https://academic.oup.com/jrsssb/article/84/1/3/7056089

  16. [16]

    Local Privacy and Statistical Minimax Rates

    John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. Local Privacy and Statistical Minimax Rates. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, pages 429–438, October 2013. doi: 10.1109/FOCS.2013.53. URL https://ieeexplore.ieee.org/document/6686179

  17. [17]

    Differential Privacy

    Cynthia Dwork. Differential Privacy. In Michele Bugliesi, Bart Preneel, Vladimiro Sassone, and Ingo Wegener, editors, Automata, Languages and Programming, pages 1–12, Berlin, Heidelberg, 2006. Springer. ISBN 978-3-540-35908-1. doi: 10.1007/11787006_1.

  18. [18]

    The Algorithmic Foundations of Differential Privacy

    Cynthia Dwork and Aaron Roth. The Algorithmic Foundations of Differential Privacy. Foundations and Trends® in Theoretical Computer Science, 9(3-4):211–407, 2013. ISSN 1551-305X. doi: 10.1561/0400000042. URL http://www.nowpublishers.com/articles/foundations-and-trends-in-theoretical-computer-science/TCS-042

  19. [19]

    Amplification by shuffling: From local to central differential privacy via anonymity

    Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, and Abhradeep Thakurta. Amplification by shuffling: From local to central differential privacy via anonymity. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '19, pages 2468–2479, USA, January 2019. Society for Industrial and Applied M...

  20. [20]

    Encode, Shuffle, Analyze Privacy Revisited: Formalizations and Empirical Evaluation

    Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Shuang Song, Kunal Talwar, and Abhradeep Thakurta. Encode, Shuffle, Analyze Privacy Revisited: Formalizations and Empirical Evaluation. arXiv:2001.03618 [cs], January 2020. URL http://arxiv.org/abs/2001.03618

  21. [21]

    Statistical Query Algorithms for Mean Vector Estimation and Stochastic Convex Optimization

    Vitaly Feldman, Cristóbal Guzmán, and Santosh Vempala. Statistical Query Algorithms for Mean Vector Estimation and Stochastic Convex Optimization. Mathematics of Operations Research, 46(3):912–945, August 2021. ISSN 0364-765X. doi: 10.1287/moor.2020.1111. URL https://pubsonline.informs.org/doi/abs/10.1287/moor.2020.1111

  22. [22]

    Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling

    Vitaly Feldman, Audra McMillan, and Kunal Talwar. Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 954–964. arXiv, February 2022. doi: 10.1109/FOCS52979.2021.00096. URL https://ieeexplore.ieee.org/document/9719772

  23. [23]

    Differentially Private Aggregation in the Shuffle Model: Almost Central Accuracy in Almost a Single Message

    Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Rasmus Pagh, and Amer Sinha. Differentially Private Aggregation in the Shuffle Model: Almost Central Accuracy in Almost a Single Message. In Proceedings of the 38th International Conference on Machine Learning, pages 3692–3701. PMLR, July 2021. URL https://proceedings.mlr.press/v139/ghazi21a.html

  24. [24]

    Renyi Differential Privacy of The Subsampled Shuffle Model In Distributed Learning

    Antonious Girgis, Deepesh Data, and Suhas Diggavi. Renyi Differential Privacy of The Subsampled Shuffle Model In Distributed Learning. In Advances in Neural Information Processing Systems, volume 34, pages 29181–29192. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/hash/f44ec26e2ac3f1ab8c2472d4b1c2ea86-Abstract.html

  25. [25]

    Practical and Private (Deep) Learning Without Sampling or Shuffling

    Peter Kairouz, Brendan Mcmahan, Shuang Song, Om Thakkar, Abhradeep Thakurta, and Zheng Xu. Practical and Private (Deep) Learning Without Sampling or Shuffling. In Proceedings of the 38th International Conference on Machine Learning, pages 5213–5225. PMLR, July 2021. URL https://proceedings.mlr.press/v139/kairouz21b.html

  26. [26]

    Theory of Point Estimation

    E.L. Lehmann and George Casella. Theory of Point Estimation. Springer Texts in Statistics. Springer-Verlag, New York, 1998. ISBN 978-0-387-98502-2. doi: 10.1007/b98854. URL http://link.springer.com/10.1007/b98854

  27. [27]

    Aggregation and Transformation of Vector-Valued Messages in the Shuffle Model of Differential Privacy

    Mary Scott, Graham Cormode, and Carsten Maple. Aggregation and Transformation of Vector-Valued Messages in the Shuffle Model of Differential Privacy. IEEE Transactions on Information Forensics and Security, 17:612–627, 2022. ISSN 1556-6021. doi: 10.1109/TIFS.2022.3147643. URL https://ieeexplore.ieee.org/document/9696239/

  28. [28]

    Analysis of Shuffling Beyond Pure Local Differential Privacy

    Shun Takagi and Seng Pei Liew. Analysis of Shuffling Beyond Pure Local Differential Privacy, January 2026. URL http://arxiv.org/abs/2601.19154


    Moreover, the following conditions hold: R Rx1(y)2 Rref(y) dy <∞and R Rx′ 1 (y)2 Rref(y) dy <∞. (3)Structural condition.Either of the following holds. (Cont)ℓ ε(Y ; x1, x′ 1,R ref) has a nontrivial absolutely continuous component with respect to Lebesgue measure whose support contains a fixed nondegenerate bounded interval I⊂R independent of ε. (Bound) Th...