A Survey on Diffusion Models for Inverse Problems
Pith reviewed 2026-05-17 04:27 UTC · model grok-4.3
The pith
Pre-trained diffusion models serve as unsupervised priors to solve inverse problems such as image restoration and reconstruction without any additional training.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
This survey provides a comprehensive overview of methods that utilize pre-trained diffusion models to solve inverse problems without requiring further training. We introduce taxonomies to categorize these methods based on both the problems they address and the techniques they employ. We analyze the connections between different approaches, offering insights into their practical implementation and highlighting important considerations. We further discuss specific challenges and potential solutions associated with using latent diffusion models for inverse problems.
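The zero-shot setting the survey covers admits a compact standard formulation (stated here for orientation; the notation is not quoted from the paper):

```latex
% Linear inverse problem: recover x from a degraded measurement y.
y = A x + n, \qquad n \sim \mathcal{N}(0, \sigma_y^2 I)
% Zero-shot solvers sample from the posterior using a frozen diffusion prior:
p(x \mid y) \;\propto\; p(y \mid x)\, p_\theta(x)
% Only the likelihood p(y | x) depends on the task (A, sigma_y); the
% pre-trained prior p_\theta is reused unchanged across restoration problems.
```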
What carries the argument
Two taxonomies—one organized by inverse-problem type and one organized by incorporation technique—that together classify and relate the surveyed methods.
If this is right
- Practitioners can select an existing method by matching the target inverse problem to the appropriate taxonomy branch.
- Implementation choices become clearer once the connections between sampling, guidance, and conditioning strategies are laid out.
- Latent-space diffusion models require separate handling of the encoder-decoder mapping when used for reconstruction tasks.
- The same taxonomies can flag combinations of problem type and technique that remain unexplored.
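One branch of the technique taxonomy (guidance during sampling, in the spirit of diffusion posterior sampling) can be sketched with a toy example. Everything below is illustrative: the analytic `denoise` stands in for a pre-trained score network, and the operator `A`, the noise schedule, and the guidance weight `zeta` are made-up parameters, not values from any surveyed method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: y = A x* + noise (here: 2x downsampling by averaging).
d = 8
x_true = np.sin(np.linspace(0, np.pi, d))
A = np.kron(np.eye(d // 2), np.full((1, 2), 0.5))   # (4, 8) averaging operator
sigma_y = 0.01
y = A @ x_true + sigma_y * rng.normal(size=d // 2)

# Stand-in for a pre-trained denoiser: under a Gaussian prior x ~ N(0, I),
# the posterior mean E[x0 | xt] has the closed form xt / (1 + sigma_t^2).
def denoise(x_t, sigma_t):
    return x_t / (1.0 + sigma_t**2)

sigmas = np.geomspace(1.0, 0.01, 50)  # annealed noise levels, largest first
x = rng.normal(size=d)                # start from pure noise
zeta = 1.0                            # guidance strength (hyperparameter)

for i, s in enumerate(sigmas):
    x0_hat = denoise(x, s)            # prior step: predict the clean signal
    resid = y - A @ x0_hat            # measurement residual
    x0_hat = x0_hat + zeta * (A.T @ resid)  # guidance: pull toward data consistency
    s_next = sigmas[i + 1] if i + 1 < len(sigmas) else 0.0
    noise = rng.normal(size=d) if s_next > 0 else 0.0
    x = x0_hat + s_next * noise       # re-noise to the next level

print(float(np.linalg.norm(A @ x - y)))  # small: the sample fits the measurement
```

No network weights or training are involved: the same sampler would apply to any linear `A`, which is the sense in which these methods are "zero-shot".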
Where Pith is reading between the lines
- The problem-based taxonomy could be extended to new domains such as audio or 3-D geometry by adding branches for those data types.
- Future work might test whether a single unified algorithm can replace several of the technique branches while preserving performance.
- The survey's emphasis on zero-shot use suggests that similar taxonomies could be built for other generative priors such as flow-based models.
Load-bearing premise
The chosen papers and the two taxonomies together give a complete and unbiased picture of the field at the time of writing.
What would settle it
A new or previously overlooked method that uses a pre-trained diffusion model for an inverse problem yet fits neither taxonomy, or a sizable set of relevant papers absent from the survey.
Original abstract
Diffusion models have become increasingly popular for generative modeling due to their ability to generate high-quality samples. This has unlocked exciting new possibilities for solving inverse problems, especially in image restoration and reconstruction, by treating diffusion models as unsupervised priors. This survey provides a comprehensive overview of methods that utilize pre-trained diffusion models to solve inverse problems without requiring further training. We introduce taxonomies to categorize these methods based on both the problems they address and the techniques they employ. We analyze the connections between different approaches, offering insights into their practical implementation and highlighting important considerations. We further discuss specific challenges and potential solutions associated with using latent diffusion models for inverse problems. This work aims to be a valuable resource for those interested in learning about the intersection of diffusion models and inverse problems.
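The latent-diffusion challenge the abstract flags can be illustrated with a toy round-trip. The orthogonal map below is a hypothetical stand-in for a VAE decoder; a real decoder is nonlinear and non-invertible, which is exactly what makes enforcing pixel-space measurements on a latent iterate hard.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6

# Stand-in "decoder": a fixed orthogonal map from latent to pixel space.
# Real VAE decoders are nonlinear and lossy; this toy version is exactly
# invertible so the projection step below works perfectly.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
decode = lambda z: Q @ z
encode = lambda x: Q.T @ x   # exact inverse only in this toy setting

# Observation: mask out half the pixels of an unknown image.
mask = np.array([1, 1, 1, 0, 0, 0], dtype=float)
x_true = rng.normal(size=d)
y = mask * x_true

# One "hard data consistency" projection of the kind latent solvers use:
# decode the latent iterate, overwrite observed pixels, re-encode.
z = rng.normal(size=d)          # current latent sample from the diffusion prior
x = decode(z)
x = mask * y + (1 - mask) * x   # enforce the measurement on observed pixels
z = encode(x)                   # map the corrected image back to latent space

print(np.allclose(mask * decode(z), y))  # → True
```

With a learned decoder the re-encoded latent only approximately reproduces the corrected image, so methods must iterate this projection or regularize it, which is the challenge the survey discusses.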
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript is a survey on the use of pre-trained diffusion models as unsupervised priors for solving inverse problems, primarily in image restoration and reconstruction. It claims to deliver a comprehensive overview of training-free methods, introduces dual taxonomies (one by problem type and one by technique), analyzes inter-method connections and implementation considerations, and discusses specific challenges arising when latent diffusion models are applied to inverse problems.
Significance. A well-executed survey that accurately organizes the growing body of work on diffusion priors for inverse problems would be a useful reference for the community, particularly if the taxonomies reveal previously under-appreciated connections and if the discussion of latent-model challenges is technically precise. The absence of any stated literature-selection protocol, however, makes it impossible to evaluate whether the claimed comprehensiveness is achieved.
Major comments (1)
- Abstract and introduction: the central claim that the survey supplies a 'comprehensive overview' of pre-trained diffusion methods for inverse problems rests on an undocumented literature-selection process. No search protocol, keyword list, database scope, time window, or inclusion/exclusion criteria are provided, which directly undermines the reliability of the introduced taxonomies and the asserted connections among methods.
Simulated Author's Rebuttal
We thank the referee for their constructive feedback on our survey. The primary concern regarding the documentation of the literature selection process is addressed in the point-by-point response below. We outline planned revisions to improve transparency while preserving the manuscript's analytical focus.
Point-by-point responses
Referee: Abstract and introduction: the central claim that the survey supplies a 'comprehensive overview' of pre-trained diffusion methods for inverse problems rests on an undocumented literature-selection process. No search protocol, keyword list, database scope, time window, or inclusion/exclusion criteria are provided, which directly undermines the reliability of the introduced taxonomies and the asserted connections among methods.
Authors: We acknowledge that the manuscript does not explicitly document the literature curation process. In the revised version, we will add a new subsection in the Introduction titled 'Literature Scope and Selection' that specifies the time window (primarily works from 2022 onward, aligning with the emergence of diffusion models for inverse problems), key search terms (e.g., 'diffusion model inverse problems', 'pre-trained diffusion prior', 'score-based generative models for restoration'), primary sources (arXiv, CVPR, ICCV, NeurIPS, ICLR, and related journals), and inclusion criteria (methods using pre-trained diffusion models in a training-free manner to solve inverse problems in imaging). Exclusion criteria will cover works requiring model retraining or fine-tuning and those outside the core scope of unsupervised priors. This addition will clarify the basis for the taxonomies, which organize representative methods by problem type and technique to reveal conceptual connections rather than enumerate every publication. We note that while a formal PRISMA-style protocol is uncommon in fast-moving machine learning surveys, the added documentation will strengthen the reliability of the overview.
Revision: yes
Circularity Check
Survey structure introduces no derivation chain or self-referential reductions
Full rationale
This is a literature survey that organizes and reviews existing methods for using pre-trained diffusion models on inverse problems. It introduces taxonomies for categorization and discusses connections and challenges, but presents no new mathematical derivations, predictions, or first-principles results. No equations, fitted parameters, or claims reduce by construction to the paper's own inputs or self-citations. References to prior work serve as external support rather than load-bearing self-referential justification. The paper is self-contained as a review and exhibits no circularity under the defined patterns.
Forward citations
Cited by 17 Pith papers
- Diffusion-Based Posterior Sampling: A Feynman-Kac Analysis of Bias and Stability. Diffusion posterior samplers produce biased outputs that can be expressed as an Ornstein-Uhlenbeck path expectation via a surrogate Gaussian path and Feynman-Kac representation, with STSL flattening the spatially vary...
- Image Restoration via Diffusion Models with Dynamic Resolution. Dynamic resolution priors enable faster diffusion-based image restoration by operating in lower-dimensional subspaces, with adapted methods outperforming prior DM approaches on most tasks.
- Proximal-Based Generative Modeling for Bayesian Inverse Problems. PGM replaces the intractable likelihood score in diffusion models with a closed-form Moreau score computed via proximal operators, enabling non-asymptotic sampling for inverse problems trained only on prior data.
- FlowADMM: Plug-and-play ADMM with Flow-based Renoise-Denoise Priors. FlowADMM replaces stochastic renoise-denoise steps in flow-based plug-and-play methods with a deterministic expectation operator inside ADMM, yielding convergence guarantees under weak Lipschitz conditions and state-o...
- Bayesian Rain Field Reconstruction using Commercial Microwave Links and Diffusion Model Priors. Diffusion model priors enable training-free Bayesian sampling for more accurate rain field reconstruction from path-integrated commercial microwave link measurements than Gaussian process baselines.
- How to Guide Your Flow: Few-Step Alignment via Flow Map Reward Guidance. FMRG is a training-free, single-trajectory guidance method for flow models derived from optimal control that achieves strong reward alignment with only 3 NFEs.
- Diffusion Inpainting MIMO-OFDM Channels with Limited Noisy Observations. A Conditional Diffusion Transformer recovers full MIMO-OFDM channels from sparse noisy pilots, delivering over 5 dB gain versus baselines even at 1/32 pilot density and completing inference in 10 steps.
- Efficient Zero-Shot Inpainting with Decoupled Diffusion Guidance. A new decoupled diffusion guidance method enables efficient zero-shot inpainting by avoiding backpropagation through the denoiser while maintaining observation consistency and quality.
- DVD: Discrete Voxel Diffusion for 3D Generation and Editing. DVD treats voxel occupancy as a discrete variable in a diffusion framework to generate, assess, and edit sparse 3D voxels without continuous thresholding.
- Learning a Delighting Prior for Facial Appearance Capture in the Wild. A delighting network trained via Dataset Latent Modulation on heterogeneous OLAT and Light Stage data enables high-quality in-the-wild facial reflectance capture from video and produces the NeRSemble-Scan dataset.
- Local Intrinsic Dimension Unveils Hallucinations in Diffusion Models. Hallucinations in diffusion models are driven by local intrinsic dimension instabilities on the manifold, which Intrinsic Quenching corrects by deflating it.
- Uncertainty-Aware Spatiotemporal Super-Resolution Data Assimilation with Diffusion Models. DiffSRDA uses denoising diffusion models to perform uncertainty-aware spatiotemporal super-resolution data assimilation, achieving EnKF-like quality from low-resolution forecasts on an ocean jet testbed.
- Stochastic Generative Plug-and-Play Priors. Noise injection into plug-and-play algorithms using pretrained score-based diffusion denoisers optimizes a Gaussian-smoothed objective and yields better reconstructions for severely ill-posed imaging tasks.
- Conditional flow matching for physics-constrained inverse problems with finite training data. Conditional flow matching learns a velocity field to sample from measurement-conditioned posteriors in physics inverse problems, with early stopping to prevent variance collapse and selective memorization under finite...
- Fast and Robust Diffusion Posterior Sampling for MR Image Reconstruction Using the Preconditioned Unadjusted Langevin Algorithm. Preconditioned ULA with exact likelihood enables faster, higher-quality posterior sampling for Cartesian and non-Cartesian MRI reconstructions than annealed sampling or DPS.
- Principled Design of Diffusion-based Optimizers for Inverse Problems. Reparameterizations create invariances in diffusion inverse-problem solvers, enabling hyperparameter reuse and accelerated inference via the OptDiff optimization framework.
- A Stability Benchmark of Generative Regularizers for Inverse Problems. Numerical benchmarks indicate generative regularizers deliver strong reconstructions in some imaging inverse problem settings but can be unstable or problematic under imperfect conditions compared to variational methods.