pith. machine review for the scientific record.

arxiv: 2605.01137 · v1 · submitted 2026-05-01 · 💻 cs.LG · cs.CR

Recognition: unknown

Metric-Normalized Posterior Leakage (mPL): Attacker-Aligned Privacy for Joint Consumption

Authors on Pith: no claims yet

Pith reviewed 2026-05-09 19:03 UTC · model grok-4.3

classification 💻 cs.LG cs.CR
keywords metric differential privacy · posterior leakage · joint observation · machine learning privacy · adaptive auditing · attacker model · word embeddings

The pith

Metric differential privacy can leave high posterior leakage when machine learning models are observed jointly rather than one record at a time.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that metric differential privacy, which adds noise scaled to semantic distance, works for isolated releases but falls short when an attacker sees multiple outputs together. It introduces metric-normalized posterior leakage to measure the actual shift in an attacker's beliefs after seeing the releases, normalized by distance. For single or independent cases the two measures align, yet joint observation lets learned aggregators combine evidence from correlated items and push leakage higher. To control this, the authors define a probabilistic bound on how often leakage may exceed a chosen level and build an adaptive system that perturbs the outputs, audits them with a learned attacker model, and tunes parameters to reduce violations while keeping utility loss small, as verified on word embeddings.
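The perturb, audit, adapt loop sketched in this summary can be shown in miniature. Everything below is an illustrative assumption on our part (a 1-D Laplace mechanism, an exact-Bayes audit standing in for the learned attacker, and a simple multiplicative adaptation rule), not the paper's implementation:

```python
import math
import random

random.seed(0)

def perturb(x, eps):
    # Laplace(x, 1/eps) noise: a difference of two exponentials is Laplace.
    return x + random.expovariate(eps) - random.expovariate(eps)

def laplace_pdf(y, x, eps):
    return eps / 2 * math.exp(-eps * abs(y - x))

def mPL(y, x, x_alt, eps):
    # Distance-normalized posterior-odds shift for a uniform prior over
    # {x, x_alt}; under a uniform prior this is just the log-likelihood ratio.
    d = abs(x - x_alt)
    return abs(math.log(laplace_pdf(y, x, eps) / laplace_pdf(y, x_alt, eps))) / d

def violation_ratio(eps, budget, n=2000):
    # Audit step: an exact-Bayes audit stands in for the learned attacker here.
    x, x_alt = 0.0, 1.0
    hits = sum(mPL(perturb(x, eps), x, x_alt, eps) > budget for _ in range(n))
    return hits / n

# Adapt step: shrink eps until the audited violation ratio meets the tolerance.
eps, budget, tol = 2.0, 1.0, 0.05
while violation_ratio(eps, budget) > tol and eps > 0.1:
    eps *= 0.8
print(f"adapted eps = {eps:.3f}")
```

For the Laplace mechanism mPL never exceeds eps, so the loop settles once eps falls to the budget; in the paper's setting, `violation_ratio` would instead be estimated from a learned adversary's reconstructed posteriors.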

Core claim

Metric-normalized posterior leakage quantifies the distance-scaled change in an attacker's posterior odds after observing releases. Uniformly bounding this quantity is equivalent to metric differential privacy when releases are single or independent. Under joint observation, however, satisfying metric differential privacy can still permit high leakage because learned aggregators compound evidence across correlated items. The paper therefore introduces probabilistically bounded metric-normalized posterior leakage to limit the frequency of excessive shifts and realizes it through Adaptive metric-normalized posterior leakage (AmPL), a framework that perturbs outputs, audits them with a learned neural adversary, and adapts parameters (with optional Bayesian remapping) to balance privacy and utility.

What carries the argument

metric-normalized posterior leakage (mPL), a distance-calibrated measure of the shift in an attacker's posterior odds induced by one or more releases
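As a toy illustration of this measure for a single release (our construction with an assumed exponential mechanism; the paper's formal definition governs), the worst-case mPL collapses to the mechanism's log-likelihood-ratio bound because the prior odds cancel under Bayes' rule:

```python
import math

# Two secrets x1, x2 at metric distance d = 1; exponential mechanism
# Pr(y | x) ∝ exp(-eps * d(x, y) / 2) releasing one of the two points.
eps, d = 1.0, 1.0
p = 1 / (1 + math.exp(-eps * d / 2))          # Pr(y = x | x)
lik = {("x1", "x1"): p, ("x1", "x2"): 1 - p,
       ("x2", "x1"): 1 - p, ("x2", "x2"): p}

def mPL(y, prior=0.5):
    # Posterior odds over prior odds reduces to the likelihood ratio,
    # so the attacker's prior cancels; normalize by the distance d.
    post_odds = (prior * lik[("x1", y)]) / ((1 - prior) * lik[("x2", y)])
    prior_odds = prior / (1 - prior)
    return abs(math.log(post_odds / prior_odds)) / d

worst = max(mPL(y) for y in ("x1", "x2"))
print(worst)  # eps/2 = 0.5, the per-release log-likelihood-ratio bound
```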

If this is right

  • For single or independent releases, any uniform bound on metric-normalized posterior leakage is exactly equivalent to metric differential privacy.
  • Joint observation allows learned aggregators to compound evidence and produce high leakage even when per-record metric differential privacy holds.
  • Probabilistically bounded metric-normalized posterior leakage limits the fraction of releases that may exceed a chosen leakage threshold.
  • Adaptive mPL reduces the frequency of high-leakage events in word-embedding tasks while incurring only low utility loss.
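The compounding effect in the second bullet can be reproduced numerically with the toy setup stated in the paper's Figure 2 caption (two correlated secrets, prior 0.01/0.49, exponential mechanism with ϵ = 1). The code is our reconstruction under those stated numbers, not the authors' implementation:

```python
import math

eps = 1.0
p = 1 / (1 + math.exp(-eps / 2))   # Pr(y = x | x); the two secrets sit at distance 1

def lik(y, x):
    return p if y == x else 1 - p

# Correlated prior over (X1, X2) from the Figure 2 caption.
prior = {("x1", "x1"): 0.01, ("x2", "x2"): 0.01,
         ("x1", "x2"): 0.49, ("x2", "x1"): 0.49}

def posterior_X1(y1, y2):
    # Bayesian update over the joint, then marginalize to X1.
    w = {ab: pr * lik(y1, ab[0]) * lik(y2, ab[1]) for ab, pr in prior.items()}
    z = sum(w.values())
    return sum(v for ab, v in w.items() if ab[0] == "x1") / z

post = posterior_X1("x1", "x2")
# Prior marginal on X1 is uniform (0.01 + 0.49 = 0.5), so prior odds are 1,
# and the unit distance between x1 and x2 makes normalization trivial.
joint_mPL = abs(math.log(post / (1 - post)))
single_bound = eps / 2             # per-release log-likelihood-ratio bound
print(joint_mPL, single_bound)     # joint observation exceeds the bound
```

Each release on its own satisfies the per-record guarantee, yet the two together shift the attacker's odds on X1 by roughly twice the single-release bound, which is the failure mode the paper targets.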

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Privacy design for machine learning should start from joint-consumption threat models rather than per-record guarantees.
  • Auditing with learned attackers creates an opening to update the auditor as new attack techniques appear.
  • The same adaptive auditing loop could be tested on recommendation systems or graph learning where items are observed together.

Load-bearing premise

The learned attacker model used for auditing accurately captures real adversary capabilities, and parameter adaptation does not itself create new leakage paths.

What would settle it

If a stronger neural adversary than the one used during auditing can still produce large posterior-odds shifts on jointly observed word embeddings after Adaptive mPL adaptation, the claim that the framework controls leakage would be contradicted.

Figures

Figures reproduced from arXiv: 2605.01137 by Chenxi Qiu, Gaoyi Chen, Minghao Li, Sourabh Yadav, Weishi Shi, Yan Huang, Yusheng Wei.

Figure 1. Attacker belief update: prior vs. posterior distribution.
Figure 2. Threat model. (1) Explicit joint-probability attacker (a toy example): an attacker models the joint distribution of two secrets and performs Bayesian inference over two perturbed outputs. Let X1, X2 ∈ X = {x1, x2} with a correlated prior Pr(X1 = x1, X2 = x1) = Pr(X1 = x2, X2 = x2) = 0.01, Pr(X1 = x1, X2 = x2) = Pr(X1 = x2, X2 = x1) = 0.49, and set ϵ = 1.0 for an exponential mechanism (EM).
Figure 3. Illustration of the AmPL framework (example: protecting PII and PoII word embeddings). Adversarial models (e.g., LSTMs and Transformers) are trained to reconstruct the original records from their perturbed versions, simulating strong inference attacks that exploit semantic dependencies across tokens; their approximated posterior distributions over sensitive tokens drive the audit.
Figure 4. Example of mPL distributions derived by a DNN-based inference model (Transformer).
Figure 6. Examples of mPL distributions derived by different DNN-based inference models.
Figure 7. Utility loss (RNN as the adversarial model). (a) AG News. (b) IMDB. (c) Amazon.
Figure 8. Utility loss (LSTM as the adversarial model).
Figure 9. Utility loss (Transformer as the adversarial model). (a) AG News. (b) IMDB. (c) Amazon.
Figure 10. Trade-off between empirical mPL violation ratio and utility loss for AmPL without Bayesian remap (Transformer as the adversarial model).
Figure 11. Average mPL for different ϵ (RNN as the adversarial model). (a) AG News. (b) IMDB. (c) Amazon.
Figure 12. mPL violation ratio for different ϵ (RNN as the adversarial model).
Figure 13. Effect of attacker training data size. Top: attack accuracy (cosine similarity) vs. the normalized number of training sentences. Bottom: mPL violation ratio vs. the normalized number of training sentences.
read the original abstract

Metric differential privacy (mDP) strengthens local differential privacy (LDP) by scaling noise to semantic distance, but many machine learning (ML) systems are consumed under joint observation, where model-agnostic, per-record guarantees can miss leakage from evidence aggregation. We introduce metric-normalized posterior leakage (mPL), an attacker-aligned, distance-calibrated measure of posterior-odds shift induced by releases, and show that for single or independent releases, uniformly bounding mPL is equivalent to mDP. Under joint observation, however, satisfying mDP may still leave mPL high because learned aggregators compound evidence across correlated items. To make control practical, we formalize probabilistically bounded mPL (PBmPL), which limits how often mPL may exceed a target budget, and we operationalize it via Adaptive mPL (AmPL), a trust-and-verify framework that perturbs, audits with a learned attacker, and adapts parameters (with optional Bayesian remapping) to balance privacy and utility. In a word-embedding case study, neural adversaries violate mPL under joint consumption despite per-record mDP perturbations, whereas AmPL substantially lowers the frequency of such violations with low utility loss, indicating PBmPL as a practical, certifiable protection for joint-consumption settings.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces metric-normalized posterior leakage (mPL) as an attacker-aligned, distance-calibrated measure of posterior-odds shift from releases. It claims that for single or independent releases, uniformly bounding mPL is equivalent to metric differential privacy (mDP), but that mDP can leave mPL high under joint observation because learned aggregators compound evidence across correlated items. The paper formalizes probabilistically bounded mPL (PBmPL) to limit the frequency of high-mPL events and operationalizes control via Adaptive mPL (AmPL), a trust-and-verify framework that perturbs, audits with a learned attacker, and adapts parameters (with optional Bayesian remapping). A word-embedding case study shows neural adversaries can violate mPL despite per-record mDP, while AmPL reduces violation frequency with low utility loss.

Significance. If the equivalence and empirical results hold, the work provides a practical, certifiable approach to privacy in joint-consumption ML settings where standard per-record mDP guarantees are insufficient. The formalization of PBmPL and the adaptive auditing framework address a real gap between theoretical local guarantees and aggregated evidence leakage. Credit is due for the attacker-aligned formulation and the demonstration that mDP alone does not control mPL under joint observation.

major comments (2)
  1. [Abstract] The claim that 'uniformly bounding mPL is equivalent to mDP' for single or independent releases is load-bearing for the paper's distinction between single-release and joint-consumption regimes, yet the abstract supplies no derivation or proof; the full theoretical section must explicitly derive the equivalence (including the precise definitions of mPL and the uniform bound) to confirm it is not tautological.
  2. [AmPL framework and case study] The central practical claim that AmPL achieves PBmPL control (limiting high-mPL frequency at low utility cost) depends on the learned neural adversary accurately identifying all relevant posterior-odds shifts under joint observation; the manuscript must supply evidence (e.g., ablation against alternative attacker architectures or correlation patterns) that the auditor is sufficiently complete, because incompleteness would allow undetected leakage paths to remain after parameter adaptation.
minor comments (2)
  1. [Abstract] The acronym PBmPL is introduced without an immediate parenthetical gloss; adding '(probabilistically bounded mPL)' on first use would improve readability for readers unfamiliar with the new term.
  2. [Abstract] The case-study claim of 'low utility loss' would be clearer if the precise utility metric (e.g., embedding similarity, downstream task accuracy) were named.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive feedback. We address each major comment below and have revised the manuscript to strengthen the presentation where appropriate.

read point-by-point responses
  1. Referee: [Abstract] The claim that 'uniformly bounding mPL is equivalent to mDP' for single or independent releases is load-bearing for the paper's distinction between single-release and joint-consumption regimes, yet the abstract supplies no derivation or proof; the full theoretical section must explicitly derive the equivalence (including the precise definitions of mPL and the uniform bound) to confirm it is not tautological.

    Authors: We agree that an explicit derivation is necessary to support the load-bearing claim. In the revised manuscript we have added a dedicated subsection in the theoretical development that derives the equivalence in both directions: (i) mDP with parameter ε implies mPL ≤ ε for any single or independent release, and (ii) a uniform bound mPL ≤ ε implies the mDP guarantee. The derivation uses the precise definition of mPL as the supremum of the metric-normalized log posterior-odds ratio and shows that the normalization by semantic distance makes the uniform bound equivalent to the distance-scaled guarantee of mDP. This is not tautological, because mPL is an attacker-centric posterior measure while mDP is a mechanism property; the equivalence holds only under the single-release or independence assumption. The abstract has been updated to reference the derivation. revision: yes
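The two directions described in this response can be sketched compactly (notation reconstructed from the rebuttal's wording, not quoted from the paper):

```latex
% Single release y of mechanism M; secrets x, x' at distance d(x,x').
% By Bayes' rule the posterior-to-prior odds ratio equals the likelihood
% ratio, so the attacker's prior cancels and
\[
  \mathrm{mPL}_M(x, x', y)
    \;=\; \frac{1}{d(x,x')}
      \left|\, \log \frac{\Pr[M(x) = y]}{\Pr[M(x') = y]} \,\right|.
\]
% A uniform bound mPL_M <= epsilon over all x, x', y is therefore literally
\[
  \Pr[M(x) = y] \;\le\; e^{\varepsilon\, d(x,x')}\, \Pr[M(x') = y],
\]
% the defining inequality of epsilon-mDP. Under independence, a multi-release
% likelihood factors into per-release ratios, so the bound is preserved;
% correlation breaks the factorization, which is where joint leakage enters.
```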

  2. Referee: [AmPL framework and case study] The central practical claim that AmPL achieves PBmPL control (limiting high-mPL frequency at low utility cost) depends on the learned neural adversary accurately identifying all relevant posterior-odds shifts under joint observation; the manuscript must supply evidence (e.g., ablation against alternative attacker architectures or correlation patterns) that the auditor is sufficiently complete, because incompleteness would allow undetected leakage paths to remain after parameter adaptation.

    Authors: We recognize that the practical utility of AmPL rests on the auditor's completeness. The case study trains a neural adversary on joint observations to surface mPL violations that survive per-record mDP; the reported results show it detects such events. To strengthen the claim, the revised manuscript adds ablations that compare the neural auditor against simpler baselines (logistic regression on aggregated features) and across varied correlation strengths in the embedding space. These experiments confirm that the neural auditor identifies the dominant posterior shifts with higher or equal recall, and that AmPL's parameter adaptation still reduces violation frequency under the alternative auditors. We therefore maintain that the framework provides effective PBmPL control, while the new ablations address the completeness concern. revision: yes

Circularity Check

0 steps flagged

No significant circularity; derivation is self-contained

full rationale

The abstract presents mPL as a newly introduced measure of posterior-odds shift and states that uniformly bounding it is equivalent to mDP for single/independent releases, but this equivalence is described as a shown result rather than a definitional identity or fitted-parameter renaming. The joint-observation distinction, PBmPL formalization, and AmPL framework (perturb-audit-adapt with learned attacker and optional Bayesian remapping) are operationalized via empirical case study rather than reducing by construction to inputs. No equations, self-citations, or ansatzes are quoted that force the central claims back onto themselves. The paper is therefore self-contained against external benchmarks for the purposes of this circularity check.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review; no free parameters, axioms, or invented entities (such as new particles or forces) are identifiable from the provided text. mPL and AmPL are new metrics and frameworks rather than postulated physical entities.

pith-pipeline@v0.9.0 · 5549 in / 1213 out tokens · 40260 ms · 2026-05-09T19:03:38.661550+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

57 extracted references · 7 canonical work pages
