pith. machine review for the scientific record. sign in

arxiv: 2604.11376 · v1 · submitted 2026-04-13 · 💻 cs.CV · cs.AI

Recognition: 2 theorem links

· Lean Theorem

From Redaction to Restoration: Deep Learning for Medical Image Anonymization and Reconstruction

Authors on Pith no claims yet

Pith reviewed 2026-05-10 15:57 UTC · model grok-4.3

classification 💻 cs.CV cs.AI
keywords medical image deidentificationPHI redactionlatent diffusion inpaintingdeep learning pipelineimage restorationanonymizationCRNNStable Diffusion
0
0 comments X

The pith

A deep learning pipeline redacts patient identifiers from medical images and restores the areas with plausible anatomy to keep them usable for analysis.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents an end-to-end framework that first uses a CRNN model to detect and redact regions likely containing protected health information such as burned-in text, then applies a latent-diffusion inpainting module based on Stable Diffusion 2 to fill those regions with anatomically and imaging-plausible content. This addresses the common problem where standard de-identification removes useful non-identifying details and harms performance on downstream tasks like image analysis or diagnosis. The authors evaluate the output with privacy metrics that measure residual PHI and redaction success, plus image-quality and task-based metrics that check fidelity for representative deep learning applications. If successful, the approach allows automated creation of de-identified yet analysis-ready datasets, easing data sharing and multi-institutional work without the usual privacy-utility trade-off.

Core claim

The central claim is that an automated pipeline combining CRNN-based redaction of PHI regions with latent-diffusion inpainting produces de-identified medical image volumes that remain visually coherent, maintain fidelity for downstream models, and substantially reduce patient re-identification risk.

What carries the argument

A lightweight hybrid architecture that pairs CRNN-based redaction for detecting and removing protected health information with latent-diffusion inpainting based on Stable Diffusion 2 to restore the redacted regions.

If this is right

  • Downstream deep learning models trained on the restored images achieve performance comparable to those trained on originals.
  • Privacy metrics confirm lower residual PHI and higher redaction success than traditional methods.
  • The single automated workflow enables sharing of large medical imaging collections while preserving analysis utility.
  • Task-based evaluations show the restored volumes remain effective for representative clinical deep learning applications.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The pipeline could extend to other modalities such as CT or MRI by retraining the inpainting model on domain-specific data.
  • Hospital systems might adopt it for on-site anonymization before any external data release or collaboration.
  • If the restored images avoid introducing new biases, they could improve model robustness in multi-site training scenarios.
  • The method lowers a practical barrier to open-science datasets, potentially accelerating development of privacy-compliant medical AI.

Load-bearing premise

The generative inpainting produces anatomically plausible content free of artifacts or biases that would degrade downstream clinical tasks, and the redaction step catches all PHI instances without false negatives.

What would settle it

A test where a re-identification attack succeeds on the restored images at rates close to the originals, or where a representative downstream task shows clear accuracy loss compared with models trained on the original images.

Figures

Figures reproduced from arXiv: 2604.11376 by Abhijit Gaonkar, Adrienne Kline, Chris Kuehn, Daniel Pittman, Nils Forkert.

Figure 1
Figure 1. Figure 1: Typical example of an echocardiogram with a synthetic PHI overlay [PITH_FULL_IMAGE:figures/full_fig_p002_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Pipeline and architecture of detecting private health information and subsequent inpainting. The redaction [PITH_FULL_IMAGE:figures/full_fig_p009_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: Example data entry of synthetic PHI overlay (left) with ground truth redaction mask (middle) and predicted [PITH_FULL_IMAGE:figures/full_fig_p010_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Masking and inpainting of 3 different techniques and their difference with the original image [PITH_FULL_IMAGE:figures/full_fig_p011_4.png] view at source ↗
read the original abstract

Removing patient-specific information from medical images is crucial to enable sharing and open science without compromising patient identities. However, many methods currently used for deidentification have negative effects on downstream image analysis tasks because of removal of relevant but non-identifiable information. This work presents an end-to-end deep learning framework for transforming raw clinical image volumes into de-identified, analysis-ready datasets without compromising downstream utility. The methodology developed and tested in this work first detects and redacts regions likely to contain protected health information (PHI), such as burned-in text and metadata, and then uses a generative deep learning model to inpaint the redacted areas with anatomically and imaging plausible content. The proposed pipeline leverages a lightweight hybrid architecture, combining CRNN-based redaction with a latent-diffusion inpainting restoration module (Stable Diffusion 2). We evaluate the approach using both privacy-oriented metrics, which quantify residual PHI and success of redaction, and image-quality and task-based metrics, which assess the fidelity of restored volumes for representative deep learning applications. Our results suggest that the proposed method yields de-identified medical images that are visually coherent, maintaining fidelity for downstream models, while substantially reducing the risk of patient re-identification. By automating anonymization and image reconstruction within a single workflow, and dissemination of large-scale medical imaging collections, thereby lowering a key barrier to data sharing and multi-institutional collaboration in medical imaging AI.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes an end-to-end deep learning framework for medical image anonymization consisting of CRNN-based detection and redaction of PHI regions followed by restoration via latent-diffusion inpainting with Stable Diffusion 2. It claims that the resulting volumes are visually coherent, substantially reduce re-identification risk, and preserve fidelity for downstream deep-learning tasks as measured by privacy, image-quality, and task-based metrics.

Significance. If the quantitative claims hold under rigorous validation, the work could meaningfully lower barriers to sharing large-scale medical imaging datasets for AI research by automating a privacy-preserving pipeline that avoids the utility loss typical of conventional redaction. The hybrid CRNN-plus-diffusion design is a practical contribution, but the reliance on a natural-image-pretrained diffusion model for medical restoration introduces a non-trivial risk to anatomical fidelity that must be demonstrated rather than asserted.

major comments (2)
  1. [Evaluation] Evaluation section: the abstract states that task-based metrics (segmentation, detection, classification) were used to assess fidelity, yet no numerical results, baselines, error bars, statistical tests, or details on data splits/exclusions are reported. This absence directly undermines the central claim that downstream utility is maintained.
  2. [Methodology] Methodology, inpainting module: Stable Diffusion 2 is pretrained exclusively on natural images; the manuscript provides no analysis or ablation showing that the generated content remains anatomically plausible and does not alter intensity distributions or feature statistics used by clinical networks. This is load-bearing for the “maintaining fidelity” assertion.
minor comments (2)
  1. [Abstract] Abstract, final sentence: the phrasing is incomplete and grammatically awkward (“By automating anonymization and image reconstruction within a single workflow, and dissemination of large-scale medical imaging collections…”).
  2. [Methodology] Notation and architecture description: the hybrid CRNN-diffusion pipeline is described at a high level; explicit diagrams or pseudocode for the redaction mask propagation and latent-space conditioning would improve reproducibility.

Simulated Author's Rebuttal

2 responses · 0 unresolved

Thank you for the constructive review and for highlighting areas where the manuscript can be strengthened. We address each major comment below and will revise the manuscript to incorporate the requested details and analyses.

read point-by-point responses
  1. Referee: [Evaluation] Evaluation section: the abstract states that task-based metrics (segmentation, detection, classification) were used to assess fidelity, yet no numerical results, baselines, error bars, statistical tests, or details on data splits/exclusions are reported. This absence directly undermines the central claim that downstream utility is maintained.

    Authors: We agree that the current manuscript does not report the specific numerical results, baselines, error bars, statistical tests, or data-split details for the task-based metrics, even though the abstract and evaluation description reference their use. This is a clear presentation gap that weakens the central claim. In the revised manuscript we will expand the Evaluation section with a dedicated table and accompanying text that reports full numerical results for segmentation, detection, and classification tasks. The additions will include performance metrics on restored versus original and redacted-only volumes, appropriate baselines, error bars from repeated runs or cross-validation, statistical significance tests, and explicit information on data splits, patient exclusions, and dataset characteristics. These results exist from our experiments and will be presented in full. revision: yes

  2. Referee: [Methodology] Methodology, inpainting module: Stable Diffusion 2 is pretrained exclusively on natural images; the manuscript provides no analysis or ablation showing that the generated content remains anatomically plausible and does not alter intensity distributions or feature statistics used by clinical networks. This is load-bearing for the “maintaining fidelity” assertion.

    Authors: We acknowledge that the manuscript does not include targeted ablations or analyses demonstrating anatomical plausibility, intensity-distribution preservation, or invariance of clinical feature statistics for the Stable Diffusion 2 inpainting module. While overall fidelity is assessed via image-quality and task-based metrics, this specific validation is missing. In the revision we will add a new subsection (under Methodology or Evaluation) that provides the requested analysis. It will contain comparisons of intensity histograms and statistics before and after inpainting, feature-embedding similarity measures obtained from representative clinical networks, and quantitative or expert visual checks for anatomical coherence. Any domain-specific adaptations or fine-tuning steps applied to the diffusion model will also be described. These additions will directly address the load-bearing concern. revision: yes

Circularity Check

0 steps flagged

No circularity: empirical pipeline with external metrics and no derivations

full rationale

The paper presents an applied ML pipeline (CRNN redaction + Stable Diffusion 2 inpainting) evaluated on privacy, image-quality, and task-based metrics. No equations, first-principles derivations, fitted parameters renamed as predictions, or self-citation chains appear in the provided text or abstract. Claims rest on empirical results rather than reducing to inputs by construction. This matches the reader's assessment of an empirical approach without circular reductions.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

Only the abstract is available, so the ledger is necessarily incomplete. The method implicitly relies on standard assumptions of deep learning models (e.g., that training data distributions match test distributions) and on the generative model's ability to produce medically plausible content without external validation.

axioms (2)
  • domain assumption The generative model (Stable Diffusion 2) can produce anatomically and imaging-plausible content in redacted regions.
    Invoked in the description of the inpainting restoration module.
  • domain assumption Redaction via CRNN fully removes all PHI without missing instances.
    Central to the privacy guarantee stated in the abstract.

pith-pipeline@v0.9.0 · 5560 in / 1376 out tokens · 46333 ms · 2026-05-10T15:57:28.670606+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

43 extracted references · 4 canonical work pages · 2 internal anchors

  1. [1]

    Human–machine partnership with artificial intelligence for chest radiograph diagnosis.NPJ digital medicine, 2(1):111, 2019

    Bhavik N Patel, Louis Rosenberg, Gregg Willcox, David Baltaxe, Mimi Lyons, Jeremy Irvin, Pranav Rajpurkar, Timothy Amrhein, Rajan Gupta, Safwan Halabi, et al. Human–machine partnership with artificial intelligence for chest radiograph diagnosis.NPJ digital medicine, 2(1):111, 2019

  2. [2]

    Optic-net: A novel convolutional neural network for diagnosis of retinal diseases from optical tomography images

    Sharif Amit Kamran, Sourajit Saha, Ali Shihab Sabbir, and Alireza Tavakkoli. Optic-net: A novel convolutional neural network for diagnosis of retinal diseases from optical tomography images. In2019 18th IEEE international conference on machine learning and applications (ICMLA), pages 964–971. IEEE, 2019

  3. [3]

    Fast and accurate view classification of echocardiograms using deep learning.NPJ digital medicine, 1(1):6, 2018

    Ali Madani, Ramy Arnaout, Mohammad Mofrad, and Rima Arnaout. Fast and accurate view classification of echocardiograms using deep learning.NPJ digital medicine, 1(1):6, 2018

  4. [4]

    Alzheimer’s disease neuroimaging initiative (adni)

    Alzheimer’s Disease Neuroimaging Initiative (ADNI). Alzheimer’s disease neuroimaging initiative (adni). Data resource, 2022. Alzheimer’s Disease Neuroimaging Initiative (ADNI)

  5. [5]

    The osteoarthritis initiative (oai)

    National Institutes of Health (NIH). The osteoarthritis initiative (oai). Data resource, 2022. NIH: The Osteoarthritis Initiative (OAI)

  6. [6]

    Alistair E. W. Johnson, Tom J. Pollard, Roger G. Mark, Seth J. Berkowitz, and Steven Horng. MIMIC-CXR (version 2.1.0). PhysioNet, 2024

  7. [7]

    State-of-the-art review on deep learning in medical imaging.Frontiers in Bioscience-Landmark, 24(3):380–406, 2019

    Mainak Biswas, Venkatanareshbabu Kuppili, Luca Saba, Damodar Reddy Edla, Harman S Suri, Elisa Cuadrado-Godia, John R Laird, Rui Tato Marinhoe, Joao M Sanches, Andrew Nicolaides, et al. State-of-the-art review on deep learning in medical imaging.Frontiers in Bioscience-Landmark, 24(3):380–406, 2019

  8. [8]

    Alessa Hering, Lasse Hansen, Tony CW Mok, Albert CS Chung, Hanna Siebert, Stephanie Häger, Annkristin Lange, Sven Kuckertz, Stefan Heldmann, Wei Shao, et al. Learn2reg: comprehensive multi-task medical image registration challenge, dataset and evaluation in the era of deep learning.IEEE Transactions on Medical Imaging, 42(3):697–712, 2022

  9. [9]

    Anonymization of dicom electronic medical records for radiation therapy.Computers in biology and medicine, 53:134–140, 2014

    Wayne Newhauser, Timothy Jones, Stuart Swerdloff, Warren Newhauser, Mark Cilia, Robert Carver, Andy Halloran, and Rui Zhang. Anonymization of dicom electronic medical records for radiation therapy.Computers in biology and medicine, 53:134–140, 2014

  10. [10]

    Baoguang Shi, Xiang Bai, and Cong Yao. An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition.IEEE transactions on pattern analysis and machine intelligence, 39(11):2298–2304, 2016

  11. [11]

    Deep anonymization of medical imaging.Multimedia Tools and Applications, 82(6):9533–9547, 2023

    Lobna Fezai, Thierry Urruty, Pascal Bourdon, Chrsitine Fernandez-Maloigne, and Alzheimer’s Disease Neuroimaging Initiative. Deep anonymization of medical imaging.Multimedia Tools and Applications, 82(6):9533–9547, 2023

  12. [12]

    A de-identification pipeline for ultrasound medical images in dicom format.Journal of medical systems, 41(5):89, 2017

    Eriksson Monteiro, Carlos Costa, and José Luís Oliveira. A de-identification pipeline for ultrasound medical images in dicom format.Journal of medical systems, 41(5):89, 2017

  13. [13]

    Privacy preservation and information security protection for patients’ portable electronic health records.Computers in Biology and Medicine, 39(9):743–750, 2009

    Lu-Chou Huang, Huei-Chung Chu, Chung-Yueh Lien, Chia-Hung Hsiao, and Tsair Kao. Privacy preservation and information security protection for patients’ portable electronic health records.Computers in Biology and Medicine, 39(9):743–750, 2009

  14. [14]

    Canadian association of radiologists white paper on de-identification of medical imaging: part 1, general principles.Canadian Association of Radiologists Journal, 72(1):13–24, 2021

    William Parker, Jacob L Jaremko, Mark Cicero, Marleine Azar, Khaled El-Emam, Bruce G Gray, Casey Hurrell, Flavie Lavoie-Cardinal, Benoit Desjardins, Andrea Lum, et al. Canadian association of radiologists white paper on de-identification of medical imaging: part 1, general principles.Canadian Association of Radiologists Journal, 72(1):13–24, 2021

  15. [15]

    A review on visual privacy preservation techniques for active and assisted living.Multimedia Tools and Applications, 83(5):14715–14755, 2024

    Siddharth Ravi, Pau Climent-Pérez, and Francisco Florez-Revuelta. A review on visual privacy preservation techniques for active and assisted living.Multimedia Tools and Applications, 83(5):14715–14755, 2024

  16. [16]

    Evaluating the impact of different deface algorithms on deep learning segmentation software performance.Frontiers in Oncology, 15:1603593, 2025

    Ali Ammar, Libing Zhu, Shep Bryan IV , Nathan Y Yu, Carlos Vargas, Yi Rong, and Quan Chen. Evaluating the impact of different deface algorithms on deep learning segmentation software performance.Frontiers in Oncology, 15:1603593, 2025

  17. [17]

    The role of deep learning in medical image inpainting: A systematic review.ACM Transactions on Computing for Healthcare, 6(3):1–24, 2025

    Joana Cristo Santos, Hugo Tomás Pereira Alexandre, Miriam Seoane Santos, and Pedro Henriques Abreu. The role of deep learning in medical image inpainting: A systematic review.ACM Transactions on Computing for Healthcare, 6(3):1–24, 2025. 13 Deep Learning for Medical Image Anonymization and Reconstruction

  18. [18]

    A comparative evaluation of transformer models for de-identification of clinical text data.arXiv preprint arXiv:2204.07056, 2022

    Christopher Meaney, Wali Hakimpour, Sumeet Kalia, and Rahim Moineddin. A comparative evaluation of transformer models for de-identification of clinical text data.arXiv preprint arXiv:2204.07056, 2022

  19. [19]

    DICOM de-identification at scale in Visual NLP (1/3).John Snow Labs Blog, September 2023

    Mykola Melnyk. DICOM de-identification at scale in Visual NLP (1/3).John Snow Labs Blog, September 2023. Accessed: 2025-12-09

  20. [20]

    De-identification of medical imaging data: a comprehensive tool for ensuring patient privacy.European radiology, pages 1–10, 2025

    Moritz Rempe, Lukas Heine, Constantin Seibold, Fabian Hörst, and Jens Kleesiek. De-identification of medical imaging data: a comprehensive tool for ensuring patient privacy.European radiology, pages 1–10, 2025

  21. [21]

    A two-stage de-identification process for privacy-preserving medical image analysis

    Arsalan Shahid, Mehran H Bazargani, Paul Banahan, Brian Mac Namee, Tahar Kechadi, Ceara Treacy, Gilbert Regan, and Peter MacMahon. A two-stage de-identification process for privacy-preserving medical image analysis. InHealthcare, volume 10, page 755. MDPI, 2022

  22. [22]

    Documenting the de-identification process of clinical and imaging data for ai for health imaging projects.Insights into Imaging, 15(1):130, 2024

    Haridimos Kondylakis, Rocio Catalan, Sara Martinez Alabart, Caroline Barelle, Paschalis Bizopoulos, Maciej Bobowicz, Jonathan Bona, Dimitrios I Fotiadis, Teresa Garcia, Ignacio Gomez, et al. Documenting the de-identification process of clinical and imaging data for ai for health imaging projects.Insights into Imaging, 15(1):130, 2024

  23. [23]

    Privacy preserving federated learning in medical imaging with uncertainty estimation.arXiv preprint arXiv:2406.12815, 2024

    Nikolas Koutsoubis, Yasin Yilmaz, Ravi P Ramachandran, Matthew Schabath, and Ghulam Rasool. Privacy preserving federated learning in medical imaging with uncertainty estimation.arXiv preprint arXiv:2406.12815, 2024

  24. [24]

    Medical image synthesis for data augmentation and anonymization using generative adversarial networks

    Hoo-Chang Shin, Neil A Tenenholtz, Jameson K Rogers, Christopher G Schwarz, Matthew L Senjem, Jeffrey L Gunter, Katherine P Andriole, and Mark Michalski. Medical image synthesis for data augmentation and anonymization using generative adversarial networks. InInternational workshop on simulation and synthesis in medical imaging, pages 1–11. Springer, 2018

  25. [25]

    Deep learning-based patient re-identification is able to exploit the biometric nature of medical chest x-ray data.Scientific Reports, 12(1):14851, 2022

    Kai Packhäuser, Sebastian Gündel, Nicolas Münster, Christopher Syben, Vincent Christlein, and Andreas Maier. Deep learning-based patient re-identification is able to exploit the biometric nature of medical chest x-ray data.Scientific Reports, 12(1):14851, 2022

  26. [26]

    De-identify medical images with the help of amazon comprehend medical and amazon rekognition.AWS Machine Learning Blog

    James Wiggins. De-identify medical images with the help of amazon comprehend medical and amazon rekognition.AWS Machine Learning Blog. AWS ML Blog, March 2019. Accessed: 2025-12-02

  27. [27]

    Federated learning and differential privacy for medical image analysis.Scientific reports, 12(1):1953, 2022

    Mohammed Adnan, Shivam Kalra, Jesse C Cresswell, Graham W Taylor, and Hamid R Tizhoosh. Federated learning and differential privacy for medical image analysis.Scientific reports, 12(1):1953, 2022

  28. [28]

    Department of Health and Human Services, Office for Civil Rights

    U.S. Department of Health and Human Services, Office for Civil Rights. Guidance regarding methods for de-identification of protected health information in accordance with the health insurance portability and accountability act (hipaa) privacy rule. HHS.gov, February 2025. Content last reviewed February 3, 2025. Accessed: 2025-12-23

  29. [29]

    Icdar 2013 robust reading competition

    Dimosthenis Karatzas, Faisal Shafait, Seiichi Uchida, Masakazu Iwamura, Lluis Gomez i Bigorda, Sergi Robles Mestre, Joan Mas, David Fernandez Mota, Jon Almazan Almazan, and Lluis Pere De Las Heras. Icdar 2013 robust reading competition. In 2013 12th international conference on document analysis and recognition, pages 1484–1493. IEEE, 2013

  30. [30]

    Scene text recognition using higher order language priors

    Anand Mishra, Karteek Alahari, and CV Jawahar. Scene text recognition using higher order language priors. InBMVC-British machine vision conference. BMV A, 2012

  31. [31]

    End-to-end scene text recognition

    Kai Wang, Boris Babenko, and Serge Belongie. End-to-end scene text recognition. In2011 International conference on computer vision, pages 1457–1464. IEEE, 2011

  32. [32]

    Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks

    Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. InProceedings of the 23rd international conference on Machine learning, pages 369–376, 2006

  33. [33]

    Real-time scene text detection with differentiable binarization and adaptive scale fusion.IEEE transactions on pattern analysis and machine intelligence, 45(1):919–931, 2022

    Minghui Liao, Zhisheng Zou, Zhaoyi Wan, Cong Yao, and Xiang Bai. Real-time scene text detection with differentiable binarization and adaptive scale fusion.IEEE transactions on pattern analysis and machine intelligence, 45(1):919–931, 2022

  34. [34]

    An image inpainting technique based on the fast marching method.Journal of graphics tools, 9(1):23–34, 2004

    Alexandru Telea. An image inpainting technique based on the fast marching method.Journal of graphics tools, 9(1):23–34, 2004

  35. [35]

    Stable diffusion 2 inpainting.Hugging Face model card, 2022

    sd2-community. Stable diffusion 2 inpainting.Hugging Face model card, 2022. Accessed: 2025-12-09

  36. [36]

    High-resolution image synthesis with latent diffusion models

    Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695, 2022

  37. [37]

    Progressive Distillation for Fast Sampling of Diffusion Models

    Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models.arXiv preprint arXiv:2202.00512, 2022

  38. [38]

    Classifier-Free Diffusion Guidance

    Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance.arXiv preprint arXiv:2207.12598, 2022

  39. [39]

    Stable diffusion pipelines.Hugging Face Diffusers Documentation, 2025

    Hugging Face. Stable diffusion pipelines.Hugging Face Diffusers Documentation, 2025. Accessed: 2025-12-09

  40. [40]

    Creativeml openrail-m license.Stable Diffusion License Text, 2022

    CompVis, Stability AI, and Runway. Creativeml openrail-m license.Stable Diffusion License Text, 2022. Accessed: 2025-12-09

  41. [41]

    Open source tools for standardized privacy protection of medical images

    Chung-Yueh Lien, Michael Onken, Marco Eichelberg, Tsair Kao, and Andreas Hein. Open source tools for standardized privacy protection of medical images. InMedical Imaging 2011: Advanced PACS-based Imaging Informatics and Therapeutic Applications, volume 7967, pages 177–183. SPIE, 2011

  42. [42]

    An open source toolkit for medical imaging de-identification.European radiology, 20(8):1896–1904, 2010

    David Rodríguez González, Trevor Carpenter, Jano I van Hemert, and Joanna Wardlaw. An open source toolkit for medical imaging de-identification.European radiology, 20(8):1896–1904, 2010

  43. [43]

    Multimodal machine learning in precision health: A scoping review.NPJ digital medicine, 5(1):171, 2022

    Adrienne Kline, Hanyin Wang, Yikuan Li, Saya Dennis, Meghan Hutch, Zhenxing Xu, Fei Wang, Feixiong Cheng, and Yuan Luo. Multimodal machine learning in precision health: A scoping review.NPJ digital medicine, 5(1):171, 2022. 14