Postprocessing the discrete Laplace mechanism yields unbiased estimators for subexponential functions, and produces output distributions equivalent to those of the Laplace or Staircase mechanisms under the same privacy parameters.
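For context, the mechanism being postprocessed can be sketched in a few lines. The snippet below samples discrete Laplace noise as the difference of two i.i.d. geometric variables and adds it to an integer count. This is a minimal sketch of the standard mechanism, not the paper's postprocessing construction; the names `discrete_laplace` and `dlap_mechanism` are illustrative.

```python
import math
import random


def discrete_laplace(scale, rng):
    """Sample from the discrete Laplace distribution P(k) ∝ exp(-|k| / scale).

    The difference of two i.i.d. geometric variables (failure-counting,
    success probability 1 - exp(-1/scale)) is two-sided geometric,
    i.e. discrete Laplace.
    """
    a = math.exp(-1.0 / scale)  # α = e^{-1/scale}
    # Inverse-CDF sampling of a geometric variable on {0, 1, 2, ...}
    g1 = math.floor(math.log(rng.random()) / math.log(a))
    g2 = math.floor(math.log(rng.random()) / math.log(a))
    return g1 - g2


def dlap_mechanism(true_count, epsilon, sensitivity=1, rng=None):
    """ε-DP release of an integer count via discrete Laplace noise."""
    rng = rng or random
    return true_count + discrete_laplace(sensitivity / epsilon, rng)
```

Because the noise is integer-valued and symmetric around zero, the released value stays an integer and is itself an unbiased estimate of the true count.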
In: Halevi, S., Rabin, T
6 Pith papers cite this work, alongside 3,930 external citations. Polarity classification is still indexing.
citing papers explorer
-
Privacy by Postprocessing the Discrete Laplace Mechanism
Postprocessing the discrete Laplace mechanism yields unbiased estimators for subexponential functions, and produces output distributions equivalent to those of the Laplace or Staircase mechanisms under the same privacy parameters.
-
Trade-off Functions for DP-SGD with Subsampling based on Random Shuffling: Tight Upper and Lower Bounds
Tight closed-form bounds via the Berry-Esseen theorem show that DP-SGD with random shuffling achieves near-ideal privacy (a trade-off function close to 1 − α) when σ ≥ √(3/ln M) and M is large; δ grows linearly in the number of epochs, restricting E to O(√M), with δ asymptotically O(√E) under E = c_M²M.
-
Differentially Private Runtime Monitoring
A technique for enforcing differential privacy in temporal runtime monitoring: it analyzes dependencies, injects noise into specifications, and uses tree mechanisms to limit the accuracy loss.
-
Privatar: Scalable Privacy-preserving Multi-user VR via Secure Offloading
Privatar uses horizontal frequency partitioning and distribution-aware minimal perturbation to enable private offloading of VR avatar reconstruction, supporting 2.37x more users with modest overhead.
-
Tradeoffs in Privacy, Welfare, and Fairness for Facility Location
Privacy and fairness cannot both be guaranteed in facility location over all datasets, but mechanisms exist that are optimal or near-optimal on welfare and fairness for natural data while preserving worst-case differential privacy.
-
Ethical and social risks of harm from Language Models
The authors provide a detailed taxonomy of 21 risks associated with language models, covering discrimination, information leaks, misinformation, malicious applications, interaction harms, and societal impacts like job loss and environmental costs.
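The trade-off functions in the DP-SGD entry above come from the f-DP framework, where a perfectly private mechanism has trade-off 1 − α. As a point of reference, the sketch below computes the standard Gaussian trade-off function G_μ(α) = Φ(Φ⁻¹(1 − α) − μ) with only the standard library; it illustrates the quantity being bounded, not that paper's shuffling-specific bounds.

```python
from statistics import NormalDist

_std = NormalDist()  # standard normal, provides cdf and inv_cdf


def gaussian_tradeoff(alpha, mu):
    """Gaussian trade-off function G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu).

    mu = 0 gives the perfect-privacy line 1 - alpha; larger mu means a
    lower curve, i.e. the attacker can trade false positives for false
    negatives more effectively, so privacy is weaker.
    """
    return _std.cdf(_std.inv_cdf(1.0 - alpha) - mu)
```

"Near-ideal privacy" in the summary above corresponds to this curve staying close to 1 − α across the whole range of α.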
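The "tree mechanisms" in the runtime-monitoring entry above refer to the binary (tree) counting mechanism for releasing running sums with polylogarithmic error. A minimal offline sketch under pure ε-DP with Laplace noise follows; it shows the generic mechanism, not the cited paper's monitoring construction, and all names are illustrative.

```python
import math
import random


def tree_prefix_sums(stream, epsilon, rng=None):
    """Differentially private prefix sums via the binary tree mechanism.

    Each element contributes to one node per tree level, so each node
    gets Laplace noise of scale levels/epsilon; any prefix [0, t) is
    covered by at most one node per level, keeping error polylog in T.
    """
    rng = rng or random
    T = len(stream)
    levels = max(1, math.ceil(math.log2(T))) + 1  # tree height
    scale = levels / epsilon                      # per-node noise scale

    def lap():
        # Inverse-CDF sample of Laplace(scale) noise
        u = rng.random() - 0.5
        return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

    # noisy[(lvl, i)] = noisy sum of stream[i*2^lvl : (i+1)*2^lvl]
    noisy = {}
    for lvl in range(levels):
        width = 1 << lvl
        for i in range(math.ceil(T / width)):
            noisy[(lvl, i)] = sum(stream[i * width:(i + 1) * width]) + lap()

    def prefix(t):
        # Cover [0, t) greedily with the largest aligned dyadic blocks.
        total, pos = 0.0, 0
        for lvl in reversed(range(levels)):
            width = 1 << lvl
            if pos % width == 0 and pos + width <= t:
                total += noisy[(lvl, pos // width)]
                pos += width
        return total

    return [prefix(t) for t in range(1, T + 1)]
```

Releasing each prefix sum directly would need noise proportional to T; the tree decomposition is what lets the monitoring setting above limit accuracy loss as the trace grows.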