Recognition: 2 theorem links
SoK: Practical Aspects of Releasing Differentially Private Graphs
Pith reviewed 2026-05-15 08:38 UTC · model grok-4.3
The pith
A new objective-based framework lets practitioners select and evaluate differentially private graph release methods while accounting for their specific vulnerabilities.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The authors systematize differentially private graph release methods by cataloguing key vulnerabilities and introducing an objective-based framework that guides selection, interpretation, and evaluation according to practitioner goals. They illustrate the framework with two social-network scenarios that together supply a unified benchmark for state-of-the-art methods in that domain.
What carries the argument
The objective-based framework, which incorporates identified key vulnerabilities to guide selection, interpretation, and evaluation of existing DP graph methods according to varying practitioner objectives.
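The selection logic such a framework implies can be sketched in a few lines. Everything below (the class fields, the objective and vulnerability labels, the filtering rule) is a hypothetical illustration of objective-based selection, not the paper's actual formalization:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Method:
    name: str
    privacy_notion: str            # e.g. "edge-DP", "node-DP", "local DP"
    objectives_served: frozenset   # analyst objectives the method can satisfy
    known_vulnerabilities: frozenset

def select_methods(methods, required_objectives, unacceptable_vulns):
    """Keep methods that cover every required objective and carry none of
    the vulnerabilities the practitioner has ruled out."""
    return [
        m for m in methods
        if set(required_objectives) <= m.objectives_served
        and not (set(unacceptable_vulns) & m.known_vulnerabilities)
    ]
```

The point of the sketch is that selection is a join between practitioner objectives and catalogued vulnerabilities, which is exactly what the framework makes explicit.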
If this is right
- Methods using different privacy definitions and utility measures can be compared more consistently under a shared set of practitioner objectives.
- Social-network analysts gain a concrete way to match release techniques to their goals while recognizing common vulnerabilities.
- Evaluation of new methods can be standardized against the benchmark created by the two illustrative scenarios.
- Interpretability issues with differential privacy on graphs are addressed by tying protection claims directly to stated objectives.
Where Pith is reading between the lines
- The framework could be applied in other graph domains such as biological or transportation networks to test its generality.
- Practitioners may still need separate tools to first quantify their own objectives before the framework can be used effectively.
- Periodic updates to the survey portion would be necessary to keep the benchmark current as new DP graph techniques appear.
Load-bearing premise
The identified key vulnerabilities and proposed objective-based framework are comprehensive and representative enough to guide selection and evaluation across different practitioner objectives.
What would settle it
A demonstration that the framework misses a major vulnerability or cannot usefully distinguish methods when applied to an additional scenario outside the two social-network examples would undermine the claim of a unified benchmark.
Original abstract
Graph data is increasingly prevalent across domains, offering analytical value but raising significant privacy concerns. Edges may encode sensitive relationships, while node attributes may contain sensitive entity or personal data. Differential Privacy (DP) has gained traction for its strong guarantees, yet applying DP to graphs is challenging because of their complex relational structure, leading to trade-offs between privacy and utility. Existing methods vary in privacy definitions, utility goals, and contextual settings, complicating comparison. For practitioners, this is compounded by DP's interpretability issues, contributing to misleading protection claims. To address this, we propose a novel systemisation of existing methods tailored to practical considerations and adaptable to varying practitioner objectives. Our contributions include: (i) a comprehensive survey of differentially private graph release methods; (ii) identification of key vulnerabilities; and (iii) a practitioner-oriented, objective-based framework to guide the selection, interpretation, and sound evaluation of existing methods. We demonstrate the use of our systemisation through two exemplary scenarios in which we assume the role of a social network analyst, apply it, and conduct evaluations in accordance with our framework. Together, these two illustrative instantiations ultimately provide a unified benchmark for state-of-the-art methods in the social networks domain.
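As a concrete instance of the privacy–utility trade-off the abstract describes, an edge-level DP release of a degree sequence via the Laplace mechanism might look like the sketch below. This is a generic textbook construction, not a method from the paper; the sensitivity bound of 2 assumes the add/remove-one-edge neighbouring relation, under which toggling one edge changes the degrees of exactly its two endpoints by 1 each:

```python
import numpy as np

def edge_dp_degree_sequence(degrees, epsilon, rng=None):
    """Release a degree sequence under edge-level DP via the Laplace mechanism.

    Toggling one edge changes the degrees of its two endpoints by 1 each,
    so the L1 sensitivity of the degree vector is 2 under edge-DP.
    """
    rng = np.random.default_rng(rng)
    sensitivity = 2.0
    noisy = np.asarray(degrees, dtype=float) + rng.laplace(
        0.0, sensitivity / epsilon, size=len(degrees)
    )
    # Post-processing (rounding and clipping to valid degrees) does not
    # weaken the DP guarantee.
    return np.clip(np.rint(noisy), 0, len(degrees) - 1).astype(int)
```

Smaller epsilon means larger noise scale and less usable degrees, which is the trade-off the surveyed methods negotiate in more sophisticated ways.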
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper surveys existing differentially private graph release methods, identifies key vulnerabilities arising from their varying privacy definitions, utility metrics, and graph properties, proposes a practitioner-oriented objective-based framework to guide method selection, interpretation, and evaluation, and demonstrates the framework via two illustrative scenarios in the social network domain. It claims that these two instantiations together provide a unified benchmark for state-of-the-art methods in that domain.
Significance. If the framework is shown to be sufficiently general and the survey representative, the work would help practitioners navigate the fragmented landscape of DP graph releases by enabling more consistent, objective-driven comparisons and reducing misleading privacy claims.
major comments (1)
- [Abstract] The claim that the two illustrative scenarios 'ultimately provide a unified benchmark for state-of-the-art methods in the social networks domain' is not adequately supported by the presented evidence. The scenarios are specific to social-network analyst roles and do not demonstrate systematic coverage or resolution of incompatibilities (e.g., directed graphs, dynamic releases, or non-social-network objectives) across the full set of surveyed methods.
minor comments (2)
- The survey section should include an explicit table or appendix enumerating all reviewed methods, their privacy definitions, and which objectives from the framework they address, to allow readers to judge completeness.
- Clarify in the framework description how conflicting utility metrics across methods are normalized or reconciled when performing the claimed apples-to-apples evaluations.
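One simple way to reconcile metrics measured on different scales, which is the concern in the second minor comment, is per-metric min-max normalization with a direction flag, then averaging per method. This is a generic sketch of one possible reconciliation, not the paper's procedure:

```python
def normalize_utilities(scores, higher_is_better):
    """Put per-metric scores on a common [0, 1] scale and average per method.

    scores: dict metric -> {method: raw value}
    higher_is_better: dict metric -> bool (False for error-style metrics)
    Returns: dict method -> mean normalized score, where 1.0 is best.
    """
    per_method = {}
    for metric, vals in scores.items():
        lo, hi = min(vals.values()), max(vals.values())
        span = (hi - lo) or 1.0  # avoid division by zero if all values tie
        for method, v in vals.items():
            s = (v - lo) / span
            if not higher_is_better[metric]:
                s = 1.0 - s  # flip so that lower error maps to higher score
            per_method.setdefault(method, []).append(s)
    return {m: sum(ss) / len(ss) for m, ss in per_method.items()}
```

Min-max normalization is sensitive to outliers; rank-based or per-objective weighted variants are equally plausible choices, which is precisely why the referee asks the authors to state theirs.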
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. We address the major comment below and will make the necessary revisions to ensure the claims are accurately supported.
Point-by-point responses
-
Referee: [Abstract] The claim that the two illustrative scenarios 'ultimately provide a unified benchmark for state-of-the-art methods in the social networks domain' is not adequately supported by the presented evidence. The scenarios are specific to social-network analyst roles and do not demonstrate systematic coverage or resolution of incompatibilities (e.g., directed graphs, dynamic releases, or non-social-network objectives) across the full set of surveyed methods.
Authors: We agree that the original abstract claim overstates the generality of the benchmark. The two scenarios are illustrative and tailored to specific social-network analyst roles and objectives; they do not systematically resolve incompatibilities such as directed graphs, dynamic releases, or non-social-network settings. We will revise the abstract to state that the instantiations 'provide an illustrative unified benchmark for state-of-the-art methods under the social-network objectives and settings considered.' We will also add explicit discussion of scope and limitations in the conclusion section, noting that broader coverage remains future work.
Revision: yes
Circularity Check
No circularity: survey paper with explicit framework and illustrative examples only
full rationale
The paper is an SoK survey that reviews existing DP graph release methods, catalogs vulnerabilities, and proposes an objective-based selection framework. It demonstrates the framework via two explicit social-network scenarios rather than deriving any new quantities. No equations, fitted parameters, predictions, or self-citation chains appear in the load-bearing claims. The unified-benchmark statement is presented as the outcome of applying the authors' own framework to surveyed methods, not as a reduction to prior self-citations or definitions. This is a standard non-circular survey structure.
Axiom & Free-Parameter Ledger
Empty: the survey introduces no equations or fitted parameters.
Lean theorems connected to this paper
- IndisputableMonolith/Foundation/AbsoluteFloorClosure.lean · reality_from_one_distinction · unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Paper passage: "We propose a novel systemisation of existing methods... modular design clarifies methodological components... practitioner-oriented, objective-based framework"
- IndisputableMonolith/Foundation/ArithmeticFromLogic.lean · LogicNat recovery · unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Paper passage: "L1: graph model... L2: trust model... L3: privacy mechanism... L4: privacy-enforcing transformation"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Forward citations
Cited by 1 Pith paper
- Differentially Private Synthetic Voltage Phasor Release for Distribution Grids
  Differential privacy on synthetic loads propagates through the true AC power flow model to yield voltage phasors that are also differentially private with respect to the admittance matrix, preserving physics for GFM training.