Protecting and Preserving Protest Dynamics for Responsible Analysis
Pith reviewed 2026-05-10 18:50 UTC · model grok-4.3
The pith
A framework uses conditional image synthesis to create labeled synthetic protest images that support collective analysis without exposing individuals.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper's central claim is that replacing sensitive protest imagery with well-labeled synthetic reproductions, generated via conditional image synthesis, enables analysis of collective patterns without directly exposing identifiable individuals. The approach is further claimed to produce realistic and diverse imagery, to balance analytical utility against privacy risk reduction, and to support demographic fairness assessment of the generated data.
What carries the argument
The responsible computing framework that integrates privacy risk assessment, conditional image synthesis for creating labeled synthetic images, downstream collective pattern analysis, and demographic fairness evaluation.
If this is right
- Protest dynamics can be studied at scale using only synthetic data that carries the necessary labels for pattern detection.
- Privacy risks from foundation models memorizing or leaking protest imagery are reduced by training or analyzing on synthetics instead.
- Demographic fairness assessments become possible on the synthetic dataset to check for disproportionate effects on subgroups.
- Analysis pipelines gain a pragmatic, harm-mitigating option that acknowledges residual risks rather than promising absolute privacy.
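The fairness point above can be made concrete with a minimal check: compare subgroup representation in the real and synthetic datasets and report the largest gap. This is a hedged sketch with hypothetical subgroup labels, not the paper's actual fairness metric.

```python
import numpy as np

def subgroup_shares(labels, groups):
    """Fraction of samples falling in each demographic subgroup."""
    labels = np.asarray(labels)
    return {g: float(np.mean(labels == g)) for g in groups}

def max_share_gap(real_labels, synth_labels):
    """Largest absolute difference in subgroup representation between
    the real and synthetic datasets (0 means an identical mix)."""
    groups = sorted(set(real_labels) | set(synth_labels))
    real = subgroup_shares(real_labels, groups)
    synth = subgroup_shares(synth_labels, groups)
    return max(abs(real[g] - synth[g]) for g in groups)

# Hypothetical labels: the synthesis shifted the subgroup mix by 10 points.
real = ["a"] * 60 + ["b"] * 40
synth = ["a"] * 50 + ["b"] * 50
gap = max_share_gap(real, synth)
```

A gap near zero would indicate the generator does not disproportionately over- or under-represent any subgroup, which is the property the framework's fairness evaluation is meant to verify.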
Where Pith is reading between the lines
- The same synthesis approach could apply to other high-risk visual datasets involving crowds or public events where individual exposure carries similar risks.
- Combining this framework with existing de-identification techniques might further lower re-identification probabilities in cross-platform settings.
- Testing whether models trained on these synthetics generalize to real-world protest scenarios would clarify the limits of utility preservation.
Load-bearing premise
Conditional image synthesis can produce images that retain the essential collective protest dynamics and required labels for valid analysis while meaningfully lowering privacy risks and avoiding demographic bias.
What would settle it
A direct comparison showing that key collective statistics or downstream model performance on the synthetic images differ substantially from results on the original images, or that re-identification of individuals remains possible from the synthetics.
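One such comparison can be operationalized by running the same collective statistic (here, crowd counts) on matched real and synthetic images and checking agreement. The counts below are hypothetical placeholders for illustration, not results from the paper.

```python
import numpy as np

def count_correlation(real_counts, synth_counts):
    """Pearson correlation between per-event crowd counts measured on
    real images and on their synthetic reproductions."""
    return float(np.corrcoef(real_counts, synth_counts)[0, 1])

# Hypothetical per-event counts for five matched events.
real_counts = [120, 340, 55, 800, 210]
synth_counts = [115, 352, 60, 790, 198]
r = count_correlation(real_counts, synth_counts)
```

A correlation near 1 on matched events would support utility preservation; a substantially lower value would favor the refutation described above.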
Original abstract
Protest-related social media data are valuable for understanding collective action but inherently high-risk due to concerns surrounding surveillance, repression, and individual privacy. Contemporary AI systems can identify individuals, infer sensitive attributes, and cross-reference visual information across platforms, enabling surveillance that poses risks to protesters and bystanders. In such contexts, large foundation models trained on protest imagery risk memorizing and disclosing sensitive information, leading to cross-platform identity leakage and retroactive participant identification. Existing approaches to automated protest analysis do not provide a holistic pipeline that integrates privacy risk assessment, downstream analysis, and fairness considerations. To address this gap, we propose a responsible computing framework for analyzing collective protest dynamics while reducing risks to individual privacy. Our framework replaces sensitive protest imagery with well-labeled synthetic reproductions using conditional image synthesis, enabling analysis of collective patterns without direct exposure of identifiable individuals. We demonstrate that our approach produces realistic and diverse synthetic imagery while balancing downstream analytical utility with reductions in privacy risk. We further assess demographic fairness in the generated data, examining whether synthetic representations disproportionately affect specific subgroups. Rather than offering absolute privacy guarantees, our method adopts a pragmatic, harm-mitigating approach that enables socially sensitive analysis while acknowledging residual risks.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes a responsible computing framework for protest imagery analysis that replaces real high-risk social media images with synthetic reproductions generated via conditional image synthesis. The framework aims to enable study of collective dynamics (spatial arrangements, interaction patterns, sign semantics, density) while mitigating individual privacy risks from surveillance and identity leakage, and includes evaluation of demographic fairness in the outputs. It positions itself as a pragmatic, harm-mitigating pipeline integrating privacy assessment, analysis utility, and fairness rather than providing absolute guarantees.
Significance. If the synthetic data can be validated to preserve downstream analytical equivalence on collective features, the framework would offer a valuable contribution to ethical computer vision and social computing by enabling safer research on sensitive collective-action topics without direct exposure of participants.
Major comments (2)
- [Abstract] The claim that the approach 'demonstrate[s] ... balancing downstream analytical utility with reductions in privacy risk' is unsupported by quantitative metrics, error analysis, or validation experiments (e.g., no reported correlations for crowd-counting outputs, optical-flow similarity, graph-based interaction fidelity, or privacy-leakage measures between real and synthetic images on matched events).
- [Framework description] In the framework description (and any methods/results sections), the core assumption that conditional image synthesis preserves essential collective protest dynamics and labels for valid downstream analysis is presented without ablation studies on conditioning-signal construction or equivalence tests on real vs. synthetic data; this assumption is load-bearing for the claimed utility.
Minor comments (2)
- [Abstract] Clarify how labels are generated or transferred to the synthetic images to ensure they accurately reflect the generated content rather than inheriting from real images.
- The manuscript would benefit from explicit discussion of the specific generative model architecture and conditioning mechanisms used.
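The label-transfer concern above has a clean structural answer that the paper could make explicit: when labels are derived from the conditioning signal itself, they describe the generated content by construction. A toy sketch, where a density map stands in for the conditioning signal and the "generator" is just noise modulation (a real model would render an image):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conditioning signal: a coarse crowd-density map.
density_map = rng.random((16, 16)) * 0.5

def synthesize(density_map):
    """Stand-in for a conditional generator: noise modulated by the
    conditioning map. A real model would render a protest image."""
    return density_map + 0.1 * rng.standard_normal(density_map.shape)

def label_from_condition(density_map):
    """Derive the count label from the conditioning signal itself, so it
    reflects the generated content rather than inheriting from a real image."""
    return float(density_map.sum())

image = synthesize(density_map)
count_label = label_from_condition(density_map)
```

Whether the paper's pipeline actually derives labels this way is exactly what the minor comment asks the authors to clarify.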
Simulated Author's Rebuttal
Thank you for the detailed and constructive review of our manuscript. We appreciate the focus on empirical validation and will strengthen the paper accordingly. We address each major comment below.
Point-by-point responses
Referee: [Abstract] The claim that the approach 'demonstrate[s] ... balancing downstream analytical utility with reductions in privacy risk' is unsupported by quantitative metrics, error analysis, or validation experiments (e.g., no reported correlations for crowd-counting outputs, optical-flow similarity, graph-based interaction fidelity, or privacy-leakage measures between real and synthetic images on matched events).
Authors: We agree that the abstract claim would be better supported by explicit quantitative evidence. The current manuscript emphasizes the framework design, qualitative demonstrations of synthetic image realism and diversity, and initial assessments of privacy risk reduction and demographic fairness, but does not include direct equivalence metrics such as crowd-counting correlations, optical-flow similarity, or interaction graph fidelity on matched real-synthetic pairs. In the revised version, we will moderate the abstract language to indicate that the approach 'provides a framework for balancing' or 'preliminarily supports balancing' utility with privacy reductions. We will also add a dedicated validation section reporting quantitative comparisons on downstream collective dynamics tasks using available data. revision: partial
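A distributional realism metric of the kind the promised validation section would need can be sketched as a FID-style Fréchet distance between Gaussian fits of real and synthetic feature sets. This sketch uses diagonal covariances and random features to stay dependency-free; the full metric uses a matrix square root over full covariances of deep features.

```python
import numpy as np

def frechet_distance_diag(feats_real, feats_synth):
    """FID-style Frechet distance between Gaussian fits of two feature
    sets, simplified to diagonal covariances."""
    mu_r, mu_s = feats_real.mean(0), feats_synth.mean(0)
    var_r, var_s = feats_real.var(0), feats_synth.var(0)
    return float(((mu_r - mu_s) ** 2).sum()
                 + ((np.sqrt(var_r) - np.sqrt(var_s)) ** 2).sum())

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, size=(500, 8))    # hypothetical features
close = rng.normal(0.05, 1.0, size=(500, 8))  # near-matching synthetics
far = rng.normal(2.0, 1.0, size=(500, 8))     # poorly matching synthetics
```

A small distance for the synthetic set would quantify the "realistic and diverse" claim that the referee notes is currently unsupported.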
Referee: [Framework description] In the framework description (and any methods/results sections), the core assumption that conditional image synthesis preserves essential collective protest dynamics and labels for valid downstream analysis is presented without ablation studies on conditioning-signal construction or equivalence tests on real vs. synthetic data; this assumption is load-bearing for the claimed utility.
Authors: This is a fair observation; the preservation of collective dynamics is indeed central to the framework's value. The manuscript describes the conditional synthesis process and illustrates label transfer for elements such as spatial arrangements and interaction patterns through examples, but lacks systematic ablations on conditioning signals (e.g., varying semantic maps or pose conditions) and formal equivalence tests against real data. We will incorporate these in the revision by adding ablation studies on conditioning components and quantitative equivalence evaluations (including optical flow and graph-based interaction metrics) on real versus synthetic images from comparable events. revision: yes
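Alongside the promised equivalence tests, a basic memorization check would address the privacy side of the same assumption: if any synthetic sample lies unusually close to a real training sample, the generator may be reproducing (and leaking) it. A hedged sketch on hypothetical feature vectors, not the paper's actual audit:

```python
import numpy as np

def min_nn_distances(synth, real):
    """For each synthetic sample, the Euclidean distance to its nearest
    real sample. Very small minima suggest the generator memorized
    specific training images."""
    diff = synth[:, None, :] - real[None, :, :]   # (n_synth, n_real, dim)
    return np.sqrt((diff ** 2).sum(-1)).min(axis=1)

rng = np.random.default_rng(2)
real = rng.random((200, 16))                      # hypothetical real features
synth = rng.random((50, 16))                      # well-behaved synthetics
memorized = real[:5] + 1e-4 * rng.standard_normal((5, 16))  # near-copies
```

Flagging synthetics whose nearest-neighbor distance falls below a calibrated threshold would give the "privacy-leakage measures" the referee asks for.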
Circularity Check
No circularity: framework proposal uses external synthesis techniques without self-referential derivations
Full rationale
The paper proposes a responsible computing framework that replaces real protest images with synthetic ones via conditional image synthesis. No equations, fitted parameters, or derivation steps appear in the abstract or described content. Claims about preserving collective dynamics and balancing utility/privacy rely on external generative models and downstream analysis tools rather than any self-definitional or fitted-input reduction. No self-citations are invoked as load-bearing uniqueness theorems. This is a standard non-circular methodological proposal.
Axiom & Free-Parameter Ledger
Axioms (2)
- Domain assumption: Conditional image synthesis can generate realistic, diverse, and well-labeled images that retain collective protest dynamics for analysis.
- Domain assumption: Replacing real imagery with synthetics reduces privacy risks without invalidating downstream analytical utility or introducing unfair demographic effects.