Recognition: unknown
Cross Modality Image Translation In Medical Imaging Using Generative Frameworks
Pith reviewed 2026-05-14 20:14 UTC · model grok-4.3
The pith
GANs outperform latent models in standardized 3D medical image translation across 11 oncology datasets.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Under identical preprocessing, splitting, training, and evaluation conditions, generative adversarial networks consistently exceed the performance of latent generative models in cross-modality 3D image synthesis for oncology, with SRGAN achieving statistically significant superiority. Lesion-level breakdowns indicate reliable shape preservation but weaker handling of small structures and absolute uptake intensities in CT-to-PET tasks, and a visual Turing test with physicians yields near-chance classification accuracy of 56.7 percent.
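As a rough illustration of what such a lesion-level breakdown involves, the sketch below computes a per-lesion shape score (Dice overlap of thresholded uptake) and an intensity score (mean absolute SUV error) for a synthetic PET volume. This is not the paper's code; the uptake threshold, dilation margin, and function names are illustrative assumptions.

```python
# A minimal sketch, not the authors' code: per-lesion "shape" vs "intensity"
# scores for CT-to-PET synthesis. Threshold, dilation margin, and names are
# illustrative assumptions.
import numpy as np
from scipy import ndimage

def lesion_level_report(real_pet, synth_pet, lesion_labels, suv_threshold=2.5):
    """Per-lesion shape (Dice of thresholded uptake) and intensity (MAE of SUV).

    real_pet, synth_pet : 3D arrays of SUV values on the same grid.
    lesion_labels       : 3D int array, 0 = background, 1..N = lesion IDs.
    """
    report = []
    for lesion_id in range(1, int(lesion_labels.max()) + 1):
        roi = lesion_labels == lesion_id
        if not roi.any():
            continue
        # Small dilation so a slightly mis-localised synthetic lesion still counts.
        context = ndimage.binary_dilation(roi, iterations=3)

        real_fg = (real_pet > suv_threshold) & context
        synth_fg = (synth_pet > suv_threshold) & context
        dice = 2.0 * np.logical_and(real_fg, synth_fg).sum() / max(
            real_fg.sum() + synth_fg.sum(), 1)

        report.append({
            "lesion": lesion_id,
            "voxels": int(roi.sum()),  # size proxy; small lesions can be binned on this
            "shape_dice": float(dice),
            "uptake_mae": float(np.abs(real_pet[roi] - synth_pet[roi]).mean()),
        })
    return report
```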
What carries the argument
The standardized comparative evaluation framework that enforces uniform preprocessing, data splits, inference rules, and multi-level metrics including lesion analysis and visual Turing tests across 77 experiments on 11 datasets.
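A minimal sketch of how such a uniform protocol could be encoded is shown below: one frozen configuration object shared by all seven models and eleven datasets, so that every run differs only in the architecture and the data. The dataclass fields, dataset names, and training hooks are illustrative placeholders, not the authors' implementation.

```python
# A minimal sketch of encoding one shared protocol for all 7 models x 11
# datasets (77 runs). Field values, dataset names, and the train/evaluate
# hooks are illustrative placeholders, not the paper's implementation.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Protocol:
    patch_size: tuple = (128, 128, 128)
    intensity_norm: str = "percentile_clip_minmax"
    split_seed: int = 42
    train_fraction: float = 0.8
    epochs: int = 200
    metrics: tuple = ("MAE", "PSNR", "SSIM", "lesion_shape", "lesion_intensity")

MODELS = ["Pix2Pix", "CycleGAN", "SRGAN",
          "LDM", "LDM+ControlNet", "BrownianBridge", "FlowMatching"]
DATASETS = [f"dataset_{i:02d}" for i in range(1, 12)]  # 11 oncology datasets

def run_benchmark(train_fn, evaluate_fn):
    """Run every model/dataset pair under the same frozen Protocol.

    train_fn(model, dataset, protocol)     -> trained model handle
    evaluate_fn(handle, dataset, protocol) -> dict of metric name -> value
    """
    protocol = Protocol()
    results = {}
    for model, dataset in product(MODELS, DATASETS):
        handle = train_fn(model, dataset, protocol)
        results[(model, dataset)] = evaluate_fn(handle, dataset, protocol)
    return results  # 77 entries, one per experiment
```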
If this is right
- SRGAN becomes the default starting point for virtual scanning pipelines in head/neck, lung, and pelvis oncology.
- All synthesis methods require targeted improvements for small-lesion fidelity and PET uptake accuracy.
- Standardized 3D benchmarks replace isolated 2D task evaluations to enable fair model comparisons.
- Clinical workflows can incorporate synthetic volumes once perceptual tests confirm indistinguishability from real acquisitions.
Where Pith is reading between the lines
- Reducing the need for multiple physical scans could lower patient radiation dose and scan time in routine oncology follow-up.
- The gap between quantitative metrics and physician preference points to a need for perceptual loss terms or clinician-in-the-loop training.
- Hybrid architectures that combine adversarial training with diffusion-style stability may close the remaining performance differences on small structures.
Load-bearing premise
Uniform preprocessing, splitting, and inference rules applied to heterogeneous datasets and modalities do not inadvertently favor GAN architectures over latent models.
What would settle it
Retraining the latent models on the same eleven datasets with hyperparameters and augmentation choices tuned specifically for them, then re-running the full lesion-level and physician evaluation, would show whether they can match or exceed GAN scores.
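The sketch below illustrates one way such a follow-up could be organized: each latent model gets its own small hyperparameter grid, the best configuration is selected on a validation split, and only then is the shared test evaluation re-run. The search spaces and function signatures are assumptions for illustration, not the paper's protocol.

```python
# A minimal sketch, under assumed names, of the follow-up experiment: give
# each latent model its own hyperparameter grid, select on a validation
# split, then re-run the shared test evaluation. Grids are illustrative.
from itertools import product

SEARCH_SPACE = {
    "LDM":            {"diffusion_steps": [250, 1000], "lr": [1e-4, 5e-5]},
    "FlowMatching":   {"ode_steps": [50, 100],         "lr": [1e-4, 5e-5]},
    "BrownianBridge": {"bridge_steps": [500, 1000],    "lr": [1e-4, 5e-5]},
}

def tune_then_evaluate(model, dataset, train_fn, evaluate_fn, val_metric="SSIM"):
    """Pick the best config on validation data, then report held-out test metrics."""
    grid = SEARCH_SPACE[model]
    keys = list(grid)
    best_cfg, best_val = None, float("-inf")
    for values in product(*(grid[k] for k in keys)):
        cfg = dict(zip(keys, values))
        handle = train_fn(model, dataset, cfg, split="train")
        score = evaluate_fn(handle, dataset, split="val")[val_metric]
        if score > best_val:
            best_cfg, best_val = cfg, score
    final = train_fn(model, dataset, best_cfg, split="train+val")
    return best_cfg, evaluate_fn(final, dataset, split="test")
```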
original abstract
Medical image-to-image (I2I) translation enables virtual scanning, i.e. the synthesis of a target imaging modality from a source one without additional acquisitions. Despite growing interest, most proposed methods operate on 2D slices, are evaluated on isolated tasks with different experimental set-ups and lack clinical validation. The primary contribution of this work is a reproducible, standardized comparative evaluation of 3D I2I translation methods in oncological imaging, designed to standardize preprocessing, splitting, inference, and multi-level evaluation across heterogeneous clinical tasks. Within this framework, we compare seven generative models, three Generative Adversarial Networks (GANs: Pix2Pix, CycleGAN, SRGAN) and four latent generative models (Latent Diffusion Model, Latent Diffusion Model+ControlNet, Brownian Bridge, Flow Matching), across eleven datasets spanning three anatomical regions (head/neck, lung, pelvis) and four translation directions (cone-beam CT to CT, MRI to CT, CT to PET, MRI T2-weighted to T2-FLAIR), for a total of 77 experiments under uniform training, inference, and evaluation conditions. The results show that GANs outperform latent generative models across all tasks, with SRGAN achieving statistically significant superiority. Our lesion-level analysis reveals that all models struggle with small lesions and that, in CT to PET synthesis, models reproduce lesion shape more reliably than absolute uptake-related intensity. We also performed a Visual Turing test administered to 17 physicians, including 15 radiologists, which shows near-chance classification accuracy (56.7%), confirming that synthetic volumes are largely indistinguishable from real acquisitions, while exposing a dissociation between quantitative metrics and clinical preference.
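Whether 56.7 percent is statistically distinguishable from the 50 percent chance level depends on how many individual judgements the 17 physicians made, which the abstract does not state. The sketch below shows the corresponding check with a two-sided binomial test; the rating count is a placeholder assumption.

```python
# A minimal sketch: is 56.7% distinguishable from the 50% chance level?
# The total number of ratings below is a placeholder assumption; the real
# trial count is not stated in this summary.
from scipy.stats import binomtest

n_ratings = 510            # hypothetical, e.g. 17 physicians x 30 volumes each
observed_accuracy = 0.567  # reported pooled accuracy
n_correct = round(observed_accuracy * n_ratings)

result = binomtest(n_correct, n_ratings, p=0.5, alternative="two-sided")
print(f"accuracy = {n_correct / n_ratings:.3f}, p vs. chance = {result.pvalue:.4f}")
```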
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript presents a standardized comparative evaluation of seven 3D generative models for cross-modality image-to-image translation in oncological imaging. It compares three GANs (Pix2Pix, CycleGAN, SRGAN) and four latent models (LDM, LDM+ControlNet, Brownian Bridge, Flow Matching) across eleven datasets spanning head/neck, lung, and pelvis regions and four translation directions, for a total of 77 experiments under uniform preprocessing, splitting, training, and inference protocols. Results indicate GANs outperform latent models with SRGAN achieving statistically significant superiority; lesion-level analysis shows struggles with small lesions and better shape than intensity reproduction in CT-to-PET; a visual Turing test with 17 physicians yields 56.7% accuracy, indicating synthetic volumes are largely indistinguishable from real acquisitions.
Significance. If the results hold, this work delivers a reproducible benchmark for 3D medical I2I translation by enforcing consistent experimental conditions across heterogeneous tasks and modalities. The scale (77 experiments), inclusion of statistical tests, lesion-specific breakdowns, and physician visual Turing test provide concrete empirical grounding and clinical relevance that could inform model selection and highlight persistent challenges such as small-lesion fidelity and PET uptake accuracy.
major comments (1)
- [Experimental Setup] The central claim that GANs (particularly SRGAN) outperform latent generative models rests on a single shared preprocessing, splitting, and training recipe applied uniformly to all models. While this protocol enables direct comparability, it may systematically favor GAN architectures, which often converge reliably under standard medical intensity normalization and short schedules, whereas latent diffusion and flow models frequently require longer training, modality-specific noise schedules, or augmentations. The manuscript should explicitly discuss whether per-model hyperparameter optimization was considered and, if not, justify why the uniform protocol is the appropriate basis for ranking intrinsic capabilities rather than protocol compatibility.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback and positive assessment of our work. We address the single major comment below and will revise the manuscript accordingly to strengthen the discussion of our experimental design.
point-by-point responses
- Referee: [Experimental Setup] The central claim that GANs (particularly SRGAN) outperform latent generative models rests on a single shared preprocessing, splitting, and training recipe applied uniformly to all models. While this protocol enables direct comparability, it may systematically favor GAN architectures, which often converge reliably under standard medical intensity normalization and short schedules, whereas latent diffusion and flow models frequently require longer training, modality-specific noise schedules, or augmentations. The manuscript should explicitly discuss whether per-model hyperparameter optimization was considered and, if not, justify why the uniform protocol is the appropriate basis for ranking intrinsic capabilities rather than protocol compatibility.
Authors: We appreciate the referee's observation on this key design choice. The uniform protocol was intentionally selected as the core of our contribution: to deliver a reproducible benchmark that enables direct, apples-to-apples comparison of the seven models under identical preprocessing, splitting, training schedules, and inference conditions across 77 experiments. Per-model hyperparameter optimization was deliberately not performed, because doing so would have broken the standardization that allows us to attribute performance differences to the architectures themselves rather than to unequal tuning effort. This setup mirrors a realistic clinical or research scenario in which practitioners apply a single, practical recipe across heterogeneous models. We fully acknowledge that the reported rankings reflect performance under this shared protocol and may not represent the absolute best achievable results for each model with extensive, architecture-specific tuning (e.g., longer diffusion schedules or modality-specific augmentations). We will revise the manuscript to add an explicit paragraph in the Experimental Setup and a dedicated limitations subsection that states this caveat and justifies the uniform protocol as the appropriate basis for ranking relative capabilities under consistent, reproducible conditions.
Revision: yes
Circularity Check
No circularity: direct empirical comparison under fixed protocols
full rationale
The paper conducts a standardized empirical evaluation of seven existing generative models (Pix2Pix, CycleGAN, SRGAN, LDM, LDM+ControlNet, Brownian Bridge, Flow Matching) across 77 experiments on eleven datasets. No derivations, equations, or predictions are claimed that reduce reported metrics to fitted parameters or self-defined quantities by construction. Performance numbers arise from direct inference on held-out splits using uniform preprocessing and evaluation rules; statistical significance is computed from these independent runs. Any self-citations refer only to the original model papers and do not load-bear the comparative claims. The work is self-contained against external benchmarks and exhibits no self-definitional, fitted-input, or uniqueness-imported circularity.
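As an illustration of how "statistically significant superiority" can be assessed from per-dataset scores, the sketch below applies a paired Wilcoxon signed-rank test across the eleven datasets. The paper's exact statistical procedure is not specified in this summary, so this is an assumed, generic approach rather than the authors' implementation.

```python
# A minimal sketch (an assumed, generic test, not necessarily the paper's):
# paired Wilcoxon signed-rank comparison of two models' per-dataset scores.
import numpy as np
from scipy.stats import wilcoxon

def compare_models(scores_a, scores_b, higher_is_better=True):
    """scores_a, scores_b: paired per-dataset metric values (length 11 here)."""
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    alternative = "greater" if higher_is_better else "less"
    stat, p = wilcoxon(a, b, alternative=alternative)
    return {"median_diff": float(np.median(a - b)), "p_value": float(p)}

# Usage (per_dataset_ssim would hold one score per dataset for each model):
#   compare_models(per_dataset_ssim["SRGAN"], per_dataset_ssim["LDM"])
```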
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: Standard assumptions in supervised and unsupervised training of generative models hold under the uniform protocol.
discussion (0)