pith. machine review for the scientific record.

arxiv: 2410.00083 · v1 · submitted 2024-09-30 · 💻 cs.LG · cs.AI · cs.CV

Recognition: 2 theorem links

A Survey on Diffusion Models for Inverse Problems

Authors on Pith: no claims yet

Pith reviewed 2026-05-17 04:27 UTC · model grok-4.3

classification 💻 cs.LG · cs.AI · cs.CV
keywords diffusion models · inverse problems · image restoration · survey · taxonomies · latent diffusion · generative priors

The pith

Pre-trained diffusion models serve as unsupervised priors to solve inverse problems such as image restoration and reconstruction without any additional training.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The survey compiles and organizes existing methods that apply pre-trained diffusion models to inverse problems. It creates two taxonomies: one that groups methods by the type of inverse problem they target and another that groups them by the specific technique used to incorporate the diffusion model. The work traces connections across these approaches, notes practical details for implementation, and examines the distinct issues that arise when the diffusion model operates in a latent space. This structure gives readers a map for choosing or adapting a method to a new inverse problem. The survey positions itself as a starting point for anyone working at the overlap of generative modeling and reconstruction tasks.
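The setting the survey covers can be stated compactly. The following is the conventional formulation from the diffusion-for-inverse-problems literature (e.g., the posterior-score decomposition behind guidance-based solvers), not notation the summary above introduces:

```latex
% Inverse problem: recover a signal x from degraded measurements y
y = \mathcal{A}(x) + n, \qquad n \sim \mathcal{N}(0, \sigma_y^2 I).

% A pre-trained diffusion model supplies the prior score
% \nabla_{x_t} \log p_t(x_t); Bayes' rule splits the posterior score
% that guided samplers follow into a prior term and a likelihood term:
\nabla_{x_t} \log p_t(x_t \mid y)
  = \nabla_{x_t} \log p_t(x_t) + \nabla_{x_t} \log p_t(y \mid x_t).
```

The surveyed techniques differ chiefly in how they approximate the intractable likelihood term $\nabla_{x_t} \log p_t(y \mid x_t)$ during sampling.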

Core claim

This survey provides a comprehensive overview of methods that utilize pre-trained diffusion models to solve inverse problems without requiring further training. We introduce taxonomies to categorize these methods based on both the problems they address and the techniques they employ. We analyze the connections between different approaches, offering insights into their practical implementation and highlighting important considerations. We further discuss specific challenges and potential solutions associated with using latent diffusion models for inverse problems.

What carries the argument

Two taxonomies—one organized by inverse-problem type and one organized by incorporation technique—that together classify and relate the surveyed methods.

If this is right

  • Practitioners can select an existing method by matching the target inverse problem to the appropriate taxonomy branch.
  • Implementation choices become clearer once the connections between sampling, guidance, and conditioning strategies are laid out.
  • Latent-space diffusion models require separate handling of the encoder-decoder mapping when used for reconstruction tasks.
  • The same taxonomies can flag combinations of problem type and technique that remain unexplored.
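The selection workflow in the first and fourth bullets amounts to a lookup over the two taxonomies. A minimal sketch, where the branch names and method labels are illustrative placeholders rather than the survey's actual categories:

```python
# Hypothetical miniature of the survey's two taxonomies: each key pairs a
# problem-type branch with an incorporation-technique branch; values list
# example methods. All names here are illustrative, not the survey's labels.
SURVEYED = {
    ("deblurring", "posterior-guidance"): ["DPS"],
    ("inpainting", "projection"): ["DDRM", "DDNM"],
    ("mri-reconstruction", "posterior-guidance"): ["score-MRI"],
}

PROBLEMS = sorted({p for p, _ in SURVEYED})
TECHNIQUES = sorted({t for _, t in SURVEYED})

def methods_for(problem):
    """Select candidate methods by matching the target problem branch."""
    return {t: m for (p, t), m in SURVEYED.items() if p == problem}

def unexplored():
    """Flag (problem, technique) combinations with no surveyed method."""
    return [(p, t) for p in PROBLEMS for t in TECHNIQUES
            if (p, t) not in SURVEYED]

print(methods_for("inpainting"))  # {'projection': ['DDRM', 'DDNM']}
print(unexplored())
```

The cross-product in `unexplored` is what makes a two-axis taxonomy more than a catalog: empty cells are research suggestions.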

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The problem-based taxonomy could be extended to new domains such as audio or 3-D geometry by adding branches for those data types.
  • Future work might test whether a single unified algorithm can replace several of the technique branches while preserving performance.
  • The survey's emphasis on zero-shot use suggests that similar taxonomies could be built for other generative priors such as flow-based models.

Load-bearing premise

The chosen papers and the two taxonomies together give a complete and unbiased picture of the field at the time of writing.

What would settle it

A new or previously overlooked method that uses a pre-trained diffusion model for an inverse problem yet fits neither taxonomy, or a sizable set of relevant papers absent from the survey.

read the original abstract

Diffusion models have become increasingly popular for generative modeling due to their ability to generate high-quality samples. This has unlocked exciting new possibilities for solving inverse problems, especially in image restoration and reconstruction, by treating diffusion models as unsupervised priors. This survey provides a comprehensive overview of methods that utilize pre-trained diffusion models to solve inverse problems without requiring further training. We introduce taxonomies to categorize these methods based on both the problems they address and the techniques they employ. We analyze the connections between different approaches, offering insights into their practical implementation and highlighting important considerations. We further discuss specific challenges and potential solutions associated with using latent diffusion models for inverse problems. This work aims to be a valuable resource for those interested in learning about the intersection of diffusion models and inverse problems.
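The abstract's core idea, treating a diffusion model as an unsupervised prior, can be illustrated with a toy guided sampler. This is a schematic of the general recipe (prior score plus measurement-likelihood score), not any specific surveyed algorithm; a 1-D Gaussian stands in for the learned prior so the answer can be checked in closed form:

```python
import numpy as np

# Toy stand-in for a pre-trained diffusion prior: x ~ N(0, 1), whose score
# is -x in closed form. In practice this gradient comes from a trained
# denoiser network, not a formula.
def prior_score(x):
    return -x

# Linear measurement y = a*x + noise; score of the likelihood p(y | x).
def likelihood_score(x, y, a, sigma_y):
    return a * (y - a * x) / sigma_y**2

def guided_langevin(y, a=2.0, sigma_y=0.5, steps=5000, lr=1e-3, seed=0):
    """Langevin sampling from p(x | y) by summing the prior and likelihood
    scores -- the core recipe behind guidance-based diffusion solvers,
    stripped of the noise schedule."""
    rng = np.random.default_rng(seed)
    x = 0.0
    samples = []
    for _ in range(steps):
        score = prior_score(x) + likelihood_score(x, y, a, sigma_y)
        x = x + lr * score + np.sqrt(2 * lr) * rng.standard_normal()
        samples.append(x)
    return np.array(samples[steps // 2:])  # discard burn-in

# With prior N(0,1) and y = a*x + N(0, sigma_y^2), the posterior mean is
# a*y / (a**2 + sigma_y**2), here 2/4.25 ≈ 0.47 -- a check the sampler
# should approximate.
s = guided_langevin(y=1.0)
print(s.mean())
```

No retraining happens anywhere: the prior is fixed and the measurement enters only through its gradient, which is exactly the "without requiring further training" property the abstract emphasizes.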

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 0 minor

Summary. The manuscript is a survey on the use of pre-trained diffusion models as unsupervised priors for solving inverse problems, primarily in image restoration and reconstruction. It claims to deliver a comprehensive overview of training-free methods, introduces dual taxonomies (one by problem type and one by technique), analyzes inter-method connections and implementation considerations, and discusses specific challenges arising when latent diffusion models are applied to inverse problems.

Significance. A well-executed survey that accurately organizes the growing body of work on diffusion priors for inverse problems would be a useful reference for the community, particularly if the taxonomies reveal previously under-appreciated connections and if the discussion of latent-model challenges is technically precise. The absence of any stated literature-selection protocol, however, makes it impossible to evaluate whether the claimed comprehensiveness is achieved.

major comments (1)
  1. Abstract and introduction: the central claim that the survey supplies a 'comprehensive overview' of pre-trained diffusion methods for inverse problems rests on an undocumented literature-selection process. No search protocol, keyword list, database scope, time window, or inclusion/exclusion criteria are provided, which directly undermines the reliability of the introduced taxonomies and the asserted connections among methods.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their constructive feedback on our survey. The primary concern regarding the documentation of the literature selection process is addressed in the point-by-point response below. We outline planned revisions to improve transparency while preserving the manuscript's analytical focus.

read point-by-point responses
  1. Referee: Abstract and introduction: the central claim that the survey supplies a 'comprehensive overview' of pre-trained diffusion methods for inverse problems rests on an undocumented literature-selection process. No search protocol, keyword list, database scope, time window, or inclusion/exclusion criteria are provided, which directly undermines the reliability of the introduced taxonomies and the asserted connections among methods.

    Authors: We acknowledge that the manuscript does not explicitly document the literature curation process. In the revised version, we will add a new subsection in the Introduction titled 'Literature Scope and Selection' that specifies the time window (primarily works from 2022 onward, aligning with the emergence of diffusion models for inverse problems), key search terms and keywords (e.g., 'diffusion model inverse problems', 'pre-trained diffusion prior', 'score-based generative models for restoration'), primary sources (arXiv, CVPR, ICCV, NeurIPS, ICLR, and related journals), and inclusion criteria (methods using pre-trained diffusion models in a training-free manner to solve inverse problems in imaging). Exclusion criteria will cover works requiring model retraining or fine-tuning and those outside the core scope of unsupervised priors. This addition will clarify the basis for the taxonomies, which organize representative methods by problem type and technique to reveal conceptual connections rather than enumerate every publication. We note that while a formal PRISMA-style protocol is uncommon in fast-moving machine learning surveys, the added documentation will strengthen the reliability of the overview. revision: yes
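The inclusion/exclusion criteria promised in the rebuttal could be applied mechanically during screening. A hypothetical filter over paper metadata, using the criteria as stated above; the field names and example records are made up for illustration:

```python
# Screening pass implementing the rebuttal's stated scope: works from 2022
# onward, matching diffusion/inverse-problem keywords, and training-free.
KEYWORDS = ("diffusion", "score-based", "inverse problem", "restoration")

def include(paper):
    """Return True if a paper record passes the stated inclusion criteria."""
    if paper["year"] < 2022:
        return False
    text = (paper["title"] + " " + paper["abstract"]).lower()
    if not any(k in text for k in KEYWORDS):
        return False
    return not paper["requires_training"]  # exclude retraining/fine-tuning

papers = [
    {"title": "Diffusion posterior sampling", "abstract": "inverse problems",
     "year": 2023, "requires_training": False},
    {"title": "Fine-tuned diffusion restoration", "abstract": "restoration",
     "year": 2023, "requires_training": True},
    {"title": "GAN priors for imaging", "abstract": "generative priors",
     "year": 2021, "requires_training": False},
]

selected = [p["title"] for p in papers if include(p)]
print(selected)  # ['Diffusion posterior sampling']
```

Even without a full PRISMA protocol, publishing a rule like this makes the survey's scope reproducible, which is the referee's underlying request.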

Circularity Check

0 steps flagged

Survey structure introduces no derivation chain or self-referential reductions

full rationale

This is a literature survey that organizes and reviews existing methods for using pre-trained diffusion models on inverse problems. It introduces taxonomies for categorization and discusses connections and challenges, but presents no new mathematical derivations, predictions, or first-principles results. No equations, fitted parameters, or claims reduce by construction to the paper's own inputs or self-citations. References to prior work serve as external support rather than load-bearing self-referential justification. The paper is self-contained as a review and exhibits no circularity under the defined patterns.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

As a survey, the paper does not introduce new free parameters, axioms, or invented entities; it draws entirely from the existing literature on diffusion models and inverse problems.

pith-pipeline@v0.9.0 · 5452 in / 1058 out tokens · 29881 ms · 2026-05-17T04:27:39.886408+00:00 · methodology

discussion (0)


Forward citations

Cited by 17 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Diffusion-Based Posterior Sampling: A Feynman-Kac Analysis of Bias and Stability

    cs.LG 2026-05 unverdicted novelty 8.0

    Diffusion posterior samplers produce biased outputs that can be expressed as an Ornstein-Uhlenbeck path expectation via a surrogate Gaussian path and Feynman-Kac representation, with STSL flattening the spatially vary...

  2. Image Restoration via Diffusion Models with Dynamic Resolution

    cs.CV 2026-05 conditional novelty 7.0

    Dynamic resolution priors enable faster diffusion-based image restoration by operating in lower-dimensional subspaces, with adapted methods outperforming prior DM approaches on most tasks.

  3. Proximal-Based Generative Modeling for Bayesian Inverse Problems

    math.OC 2026-05 unverdicted novelty 7.0

    PGM replaces the intractable likelihood score in diffusion models with a closed-form Moreau score computed via proximal operators, enabling non-asymptotic sampling for inverse problems trained only on prior data.

  4. FlowADMM: Plug-and-play ADMM with Flow-based Renoise-Denoise Priors

    cs.CV 2026-05 unverdicted novelty 7.0

    FlowADMM replaces stochastic renoise-denoise steps in flow-based plug-and-play methods with a deterministic expectation operator inside ADMM, yielding convergence guarantees under weak Lipschitz conditions and state-o...

  5. Bayesian Rain Field Reconstruction using Commercial Microwave Links and Diffusion Model Priors

    cs.LG 2026-05 unverdicted novelty 7.0

    Diffusion model priors enable training-free Bayesian sampling for more accurate rain field reconstruction from path-integrated commercial microwave link measurements than Gaussian process baselines.

  6. How to Guide Your Flow: Few-Step Alignment via Flow Map Reward Guidance

    cs.LG 2026-04 unverdicted novelty 7.0

    FMRG is a training-free, single-trajectory guidance method for flow models derived from optimal control that achieves strong reward alignment with only 3 NFEs.

  7. Diffusion Inpainting MIMO-OFDM Channels with Limited Noisy Observations

    eess.SP 2026-04 unverdicted novelty 7.0

    A Conditional Diffusion Transformer recovers full MIMO-OFDM channels from sparse noisy pilots, delivering over 5 dB gain versus baselines even at 1/32 pilot density and completing inference in 10 steps.

  8. Efficient Zero-Shot Inpainting with Decoupled Diffusion Guidance

    cs.CV 2025-12 conditional novelty 7.0

    A new decoupled diffusion guidance method enables efficient zero-shot inpainting by avoiding backpropagation through the denoiser while maintaining observation consistency and quality.

  9. DVD: Discrete Voxel Diffusion for 3D Generation and Editing

    cs.CV 2026-05 unverdicted novelty 6.0

    DVD treats voxel occupancy as a discrete variable in a diffusion framework to generate, assess, and edit sparse 3D voxels without continuous thresholding.

  10. Learning a Delighting Prior for Facial Appearance Capture in the Wild

    cs.CV 2026-05 unverdicted novelty 6.0

    A delighting network trained via Dataset Latent Modulation on heterogeneous OLAT and Light Stage data enables high-quality in-the-wild facial reflectance capture from video and produces the NeRSemble-Scan dataset.

  11. Local Intrinsic Dimension Unveils Hallucinations in Diffusion Models

    cs.CV 2026-05 unverdicted novelty 6.0

    Hallucinations in diffusion models are driven by local intrinsic dimension instabilities on the manifold, which Intrinsic Quenching corrects by deflating it.

  12. Uncertainty-Aware Spatiotemporal Super-Resolution Data Assimilation with Diffusion Models

    physics.flu-dyn 2026-04 unverdicted novelty 6.0

    DiffSRDA uses denoising diffusion models to perform uncertainty-aware spatiotemporal super-resolution data assimilation, achieving EnKF-like quality from low-resolution forecasts on an ocean jet testbed.

  13. Stochastic Generative Plug-and-Play Priors

    cs.CV 2026-04 conditional novelty 6.0

    Noise injection into plug-and-play algorithms using pretrained score-based diffusion denoisers optimizes a Gaussian-smoothed objective and yields better reconstructions for severely ill-posed imaging tasks.

  14. Conditional flow matching for physics-constrained inverse problems with finite training data

    stat.ML 2026-03 unverdicted novelty 6.0

    Conditional flow matching learns a velocity field to sample from measurement-conditioned posteriors in physics inverse problems, with early stopping to prevent variance collapse and selective memorization under finite...

  15. Fast and Robust Diffusion Posterior Sampling for MR Image Reconstruction Using the Preconditioned Unadjusted Langevin Algorithm

    physics.med-ph 2025-12 conditional novelty 6.0

    Preconditioned ULA with exact likelihood enables faster, higher-quality posterior sampling for Cartesian and non-Cartesian MRI reconstructions than annealed sampling or DPS.

  16. Principled Design of Diffusion-based Optimizers for Inverse Problems

    cs.CV 2026-05 unverdicted novelty 5.0

    Reparameterizations create invariances in diffusion inverse-problem solvers, enabling hyperparameter reuse and accelerated inference via the OptDiff optimization framework.

  17. A Stability Benchmark of Generative Regularizers for Inverse Problems

    eess.IV 2026-05 unverdicted novelty 5.0

    Numerical benchmarks indicate generative regularizers deliver strong reconstructions in some imaging inverse problem settings but can be unstable or problematic under imperfect conditions compared to variational methods.

Reference graph

Works this paper leans on

165 extracted references · 165 canonical work pages · cited by 17 Pith papers · 6 internal anchors

  1. [1] A. Jalal, M. Arvinte, G. Daras, E. Price, A. G. Dimakis, and J. Tamir, “Robust compressed sensing MRI with deep generative priors,” Advances in Neural Information Processing Systems, vol. 34, pp. 14938–14954, 2021.

  2. [2] Y. Song, J. Sohl-Dickstein, D. P. Kingma, A. Kumar, S. Ermon, and B. Poole, “Score-based generative modeling through stochastic differential equations,” arXiv preprint arXiv:2011.13456, 2020.

  3. [3] J. Choi, S. Kim, Y. Jeong, Y. Gwon, and S. Yoon, “ILVR: Conditioning method for denoising diffusion probabilistic models,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 14367–14376.

  4. [4] H. Chung, J. Kim, M. T. Mccann, M. L. Klasky, and J. C. Ye, “Diffusion posterior sampling for general noisy inverse problems,” in The Eleventh International Conference on Learning Representations, 2023. [Online]. Available: https://openreview.net/forum?id=OnD9zGAGT0k

  5. [5] J. Song, A. Vahdat, M. Mardani, and J. Kautz, “Pseudoinverse-guided diffusion models for inverse problems,” in International Conference on Learning Representations, 2022.

  6. [6] F. Rozet, G. Andry, F. Lanusse, and G. Louppe, “Learning diffusion priors from observations by expectation maximization,” arXiv preprint arXiv:2405.13712, 2024.

  7. [7] H. Chung, J. Kim, S. Kim, and J. C. Ye, “Parallel diffusion models of operator and image for blind inverse problems,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 6059–6069.

  8. [8] B. Kawar, G. Vaksman, and M. Elad, “SNIPS: Solving noisy inverse problems stochastically,” Advances in Neural Information Processing Systems, vol. 34, pp. 21757–21769, 2021.

  9. [9] B. Kawar, M. Elad, S. Ermon, and J. Song, “Denoising diffusion restoration models,” in Advances in Neural Information Processing Systems, 2022.

  10. [10] N. Murata, K. Saito, C.-H. Lai, Y. Takida, T. Uesaka, Y. Mitsufuji, and S. Ermon, “GibbsDDRM: A partially collapsed Gibbs sampler for solving blind inverse problems with denoising diffusion restoration,” in International Conference on Machine Learning. PMLR, 2023, pp. 25501–25522.

  11. [11] Y. Wang, J. Yu, and J. Zhang, “Zero-shot image restoration using denoising diffusion null-space model,” arXiv preprint arXiv:2212.00490, 2022.

  12. [12] H. Chung, S. Lee, and J. C. Ye, “Decomposed diffusion sampler for accelerating large-scale inverse problems,” arXiv preprint arXiv:2303.05754, 2023.

  13. [13] Y. Zhu, K. Zhang, J. Liang, J. Cao, B. Wen, R. Timofte, and L. Van Gool, “Denoising diffusion models for plug-and-play image restoration,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 1219–1229.

  14. [14] L. Rout, N. Raoof, G. Daras, C. Caramanis, A. Dimakis, and S. Shakkottai, “Solving linear inverse problems provably via posterior sampling with latent diffusion models,” Advances in Neural Information Processing Systems, vol. 36, 2024.

  15. [15] L. Rout, Y. Chen, A. Kumar, C. Caramanis, S. Shakkottai, and W.-S. Chu, “Beyond first-order Tweedie: Solving inverse problems using latent diffusion,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 9472–9481.

  16. [16] M. Mardani, J. Song, J. Kautz, and A. Vahdat, “A variational perspective on solving inverse problems with diffusion models,” in The Twelfth International Conference on Learning Representations, 2024.

  17. [17] C. Alkan, J. Oscanoa, D. Abraham, M. Gao, A. Nurdinova, K. Setsompop, J. M. Pauly, M. Mardani, and S. Vasanawala, “Variational diffusion models for blind MRI inverse problems,” in NeurIPS 2023 Workshop on Deep Learning and Inverse Problems, 2023.

  18. [18] B. T. Feng, J. Smith, M. Rubinstein, H. Chang, K. L. Bouman, and W. T. Freeman, “Score-based diffusion models as principled priors for inverse imaging,” arXiv preprint arXiv:2304.11751, 2023.

  19. [19] B. T. Feng and K. L. Bouman, “Efficient Bayesian computational imaging with a surrogate score-based prior,” arXiv preprint arXiv:2309.01949, 2023.

  20. [20] H. Wang, X. Zhang, T. Li, Y. Wan, T. Chen, and J. Sun, “DMPlug: A plug-in method for solving inverse problems with diffusion models,” arXiv preprint arXiv:2405.16749, 2024.

  21. [21] H. Chihaoui, A. Lemkhenter, and P. Favaro, “Zero-shot image restoration via diffusion inversion,” 2024. [Online]. Available: https://openreview.net/forum?id=ZnmofqLWMQ

  22. [22] T. Xu, Z. Zhu, J. Li, D. He, Y. Wang, M. Sun, L. Li, H. Qin, Y. Wang, J. Liu, and Y.-Q. Zhang, “Consistency model is an effective posterior sample approximation for diffusion inverse solvers,” 2024.

  23. [23] G. Daras, Y. Dagan, A. Dimakis, and C. Daskalakis, “Score-guided intermediate level optimization: Fast Langevin mixing for inverse problems,” in Proceedings of the 39th International Conference on Machine Learning, ser. Proceedings of Machine Learning Research, vol. 162. PMLR, 2022.

  24. [24] Z. Wu, Y. Sun, Y. Chen, B. Zhang, Y. Yue, and K. L. Bouman, “Principled probabilistic imaging using diffusion models as plug-and-play priors,” 2024.

  25. [25] Z. Dou and Y. Song, “Diffusion posterior sampling for linear inverse problem solving: A filtering perspective,” in The Twelfth International Conference on Learning Representations, 2023.

  26. [26] Y. Sun, Z. Wu, Y. Chen, B. T. Feng, and K. L. Bouman, “Provable probabilistic imaging using score-based generative priors,” IEEE Transactions on Computational Imaging, 2024.

  27. [27] B. L. Trippe, J. Yim, D. Tischer, D. Baker, T. Broderick, R. Barzilay, and T. S. Jaakkola, “Diffusion probabilistic modeling of protein backbones in 3D for the motif-scaffolding problem,” in The Eleventh International Conference on Learning Representations, 2023. [Online]. Available: https://openreview.net/forum?id=6TxBxqNME1Y

  28. [28] G. Cardoso, S. Le Corff, E. Moulines et al., “Monte Carlo guided denoising diffusion models for Bayesian linear inverse problems,” in The Twelfth International Conference on Learning Representations, 2023.

  29. [29] L. Wu, B. L. Trippe, C. A. Naesseth, J. P. Cunningham, and D. Blei, “Practical and asymptotically exact conditional sampling in diffusion models,” in Thirty-seventh Conference on Neural Information Processing Systems, 2023. [Online]. Available: https://openreview.net/forum?id=eWKqr1zcRv

  30. [30] Z. Kadkhodaie and E. P. Simoncelli, “Solving linear inverse problems using the prior implicit in a denoiser,” arXiv preprint arXiv:2007.13640, 2020.

  31. [31] H. Chung, B. Sim, D. Ryu, and J. C. Ye, “Improving diffusion models for inverse problems using manifold constraints,” Advances in Neural Information Processing Systems, vol. 35, pp. 25683–25696, 2022.

  32. [32] B. Song, S. M. Kwon, Z. Zhang, X. Hu, Q. Qu, and L. Shen, “Solving inverse problems with latent diffusion models via hard data consistency,” in The Twelfth International Conference on Learning Representations, 2024. [Online]. Available: https://openreview.net/forum?id=j8hdRqOUhN

  33. [33] Y. He, N. Murata, C.-H. Lai, Y. Takida, T. Uesaka, D. Kim, W.-H. Liao, Y. Mitsufuji, J. Z. Kolter, R. Salakhutdinov et al., “Manifold preserving guided diffusion,” arXiv preprint arXiv:2311.16424, 2023.

  34. [34] H. Chung, J. C. Ye, P. Milanfar, and M. Delbracio, “Prompt-tuning latent diffusion models for inverse problems,” in International Conference on Machine Learning. PMLR, 2024.

  35. [35] J. Kim, G. Y. Park, H. Chung, and J. C. Ye, “Regularization by texts for latent diffusion inverse solvers,” arXiv preprint arXiv:2311.15658, 2023.

  36. [36] J. Kim, G. Y. Park, and J. C. Ye, “DreamSampler: Unifying diffusion sampling and score distillation for image manipulation,” arXiv preprint arXiv:2403.11415, 2024.

  37. [37] P. Lailly and J. Bednar, “The seismic inverse problem as a sequence of before stack migrations,” in Conference on Inverse Scattering: Theory and Application. Philadelphia, PA, 1983, pp. 206–220.

  38. [38] J. Virieux and S. Operto, “An overview of full-waveform inversion in exploration geophysics,” Geophysics, vol. 74, no. 6, pp. WCC1–WCC26, 2009.

  39. [39] S. Huang, J. Xiang, H. Du, and X. Cao, “Inverse problems in atmospheric science and their application,” in Journal of Physics: Conference Series, vol. 12, no. 1. IOP Publishing, 2005, p. 45.

  40. [40] C. Wunsch, The Ocean Circulation Inverse Problem. Cambridge University Press, 1996.

  41. [41] J.-M. Lemercier, J. Richter, S. Welker, E. Moliner, V. Välimäki, and T. Gerkmann, “Diffusion models for audio restoration,” arXiv preprint arXiv:2402.09821, 2024.

  42. [42] K. Saito, N. Murata, T. Uesaka, C.-H. Lai, Y. Takida, T. Fukui, and Y. Mitsufuji, “Unsupervised vocal dereverberation with diffusion-based generative models,” in ICASSP 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023, pp. 1–5.

  43. [43] E. Moliner, F. Elvander, and V. Välimäki, “Blind audio bandwidth extension: A diffusion-based zero-shot approach,” arXiv preprint arXiv:2306.01433, 2023.

  44. [44] E. Moliner, J. Lehtinen, and V. Välimäki, “Solving audio inverse problems with a diffusion model,” in ICASSP 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023, pp. 1–5.

  45. [45] E. Moliner and V. Välimäki, “Diffusion-based audio inpainting,” arXiv preprint arXiv:2305.15266, 2023.

  46. [46] C. Hernandez-Olivan, K. Saito, N. Murata, C.-H. Lai, M. A. Martínez-Ramirez, W.-H. Liao, and Y. Mitsufuji, “VRDMG: Vocal restoration via diffusion posterior sampling with multiple guidance,” in ICASSP 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2024, pp. 596–600.

  47. [47] Y. Song, L. Shen, L. Xing, and S. Ermon, “Solving inverse problems in medical imaging with score-based generative models,” arXiv preprint arXiv:2111.08005, 2021.

  48. [48] H. Chung, D. Ryu, M. T. McCann, M. L. Klasky, and J. C. Ye, “Solving 3D inverse problems using pre-trained 2D diffusion models,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 22542–22551.

  49. [49] A. Aali, G. Daras, B. Levac, S. Kumar, A. G. Dimakis, and J. I. Tamir, “Ambient diffusion posterior sampling: Solving inverse problems with diffusion models trained on corrupted data,” arXiv preprint arXiv:2403.08728, 2024.

  50. [50] H. Chung and J. C. Ye, “Score-based diffusion models for accelerated MRI,” Medical Image Analysis, vol. 80, p. 102479, 2022.

  51. [51]

    Brief review of im age denoising techniques,

    L. Fan, F. Zhang, H. Fan, and C. Zhang, “Brief review of im age denoising techniques,” Visual Computing for Industry, Biomedicine, and Art , vol. 2, no. 1, p. 7, 2019

  52. [52]

    Deep learning-based image and video inpainting: A survey,

    W . Quan, J. Chen, Y . Liu, D.-M. Y an, and P . Wonka, “Deep learning-based image and video inpainting: A survey,” International Journal of Computer Vision , vol. 132, no. 7, pp. 2367– 2400, 2024

  53. [53]

    Inpaint anything: Segment anything meets image inpainting,

    T. Y u, R. Feng, R. Feng, J. Liu, X. Jin, W . Zeng, and Z. Chen , “Inpaint anything: Segment anything meets image inpainting,” arXiv preprint arXiv:2304.06790 , 2023

  54. [54]

    Predicting a protein’s stability under a million mutations,

    J. Ouyang-Zhang, D. J. Diaz, A. Klivans, and P . Krähenbü hl, “Predicting a protein’s stability under a million mutations,” NeurIPS, 2023

  55. [55]

    Stability oracle: a structure-based graph-transformer framework for identifying stabilizing mutations,

    D. J. Diaz, C. Gong, J. Ouyang-Zhang, J. M. Loy, J. Wells, D. Y ang, A. D. Ellington, A. G. Dimakis, and A. R. Klivans, “Stability oracle: a structure-based graph-transformer framework for identifying stabilizing mutations,” Nature Communications, vol. 15, no. 1, p. 6170, 2024

  56. [56]

    Machine-learning-guided directed evolution for protein engineering,

    K. K. Y ang, Z. Wu, and F. H. Arnold, “Machine-learning-guided directed evolution for protein engineering,” Nature methods, vol. 16, no. 8, pp. 687–694, 2019

  57. [57]

    Y. Xu, D. Verma, R. P. Sheridan, A. Liaw, J. Ma, N. M. Marshall, J. McIntosh, E. C. Sherer, V. Svetnik, and J. M. Johnston, “Deep dive into machine learning models for protein engineering,” Journal of Chemical Information and Modeling, vol. 60, no. 6, pp. 2773–2790, 2020

  58. [58]

    A. Aali, M. Arvinte, S. Kumar, and J. I. Tamir, “Solving inverse problems with score-based generative priors learned from noisy data,” arXiv preprint arXiv:2305.01166, 2023

  59. [59]

    J. Zbontar, F. Knoll, A. Sriram, T. Murrell, Z. Huang, M. J. Muckley, A. Defazio, R. Stern, P. Johnson, M. Bruno et al., “fastMRI: An open dataset and benchmarks for accelerated MRI,” arXiv preprint arXiv:1811.08839, 2018

  60. [60]

    A. D. Desai, A. M. Schmidt, E. B. Rubin, C. M. Sandino, M. S. Black, V. Mazzoli, K. J. Stevens, R. Boutin, C. Ré, G. E. Gold, B. A. Hargreaves, and A. S. Chaudhari, “SKM-TEA: A dataset for accelerated MRI reconstruction with dense image labels for quantitative clinical evaluation,” 2022

  61. [61]

    T. Zhang, J. Pauly, S. Vasanawala, and M. Lustig, “MRI Data: Undersampled Abdomens.” [Online]. Available: http://old.mridata.org/undersampled/abdomens

  62. [62]

    U. Tariq, P. Lai, M. Lustig, M. Alley, M. Zhang, G. Gold, and V. S. S, “MRI Data: Undersampled Knees.” [Online]. Available: http://old.mridata.org/undersampled/knees

  63. [63]

    M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, “Compressed sensing MRI,” IEEE Signal Processing Magazine, vol. 25, no. 2, pp. 72–82, 2008

  64. [64]

    X. Pan, E. Y. Sidky, and M. Vannier, “Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?” Inverse Problems, vol. 25, no. 12, p. 123009, 2009

  65. [65]

    M. Genzel, I. Gühring, J. Macdonald, and M. März, “Near-exact recovery for tomographic inverse problems via deep learning,” in International Conference on Machine Learning. PMLR, 2022, pp. 7368–7381

  66. [66]

    G. Beylkin, “The inversion problem and applications of the generalized Radon transform,” Communications on Pure and Applied Mathematics, vol. 37, no. 5, pp. 579–599, 1984

  67. [67]

    A. C. Kak and M. Slaney, Principles of Computerized Tomographic Imaging. SIAM, 2001

  68. [68]

    M. Dietz, L. Liljeryd, K. Kjorling, and O. Kunz, “Spectral band replication, a novel approach in audio coding,” in Audio Engineering Society Convention 112. Audio Engineering Society, 2002

  69. [69]

    J. Dubochet, M. Adrian, J.-J. Chang, J.-C. Homo, J. Lepault, A. W. McDowall, and P. Schultz, “Cryo-electron microscopy of vitrified specimens,” Quarterly Reviews of Biophysics, vol. 21, no. 2, pp. 129–228, 1988

  70. [70]

    S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Processing Magazine, vol. 20, no. 3, pp. 21–36, 2003

  71. [71]

    C. Saharia, J. Ho, W. Chan, T. Salimans, D. J. Fleet, and M. Norouzi, “Image super-resolution via iterative refinement,” arXiv preprint arXiv:2104.07636, 2021

  72. [72]

    T. Nakatani, T. Yoshioka, K. Kinoshita, M. Miyoshi, and B.-H. Juang, “Speech dereverberation based on variance-normalized delayed linear prediction,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, no. 7, pp. 1717–1731, 2010

  73. [73]

    J. R. Fienup, “Phase retrieval algorithms: a comparison,” Applied Optics, vol. 21, no. 15, pp. 2758–2769, 1982

  74. [74]

    K. Akiyama, A. Alberdi, W. Alef, K. Asada, R. Azulay, A.-K. Baczko, D. Ball, M. Baloković, J. Barrett, D. Bintley et al., “First M87 Event Horizon Telescope results. IV. Imaging the central supermassive black hole,” The Astrophysical Journal Letters, vol. 875, no. 1, p. L4, 2019

  75. [75]

    A. Tarantola, Inverse Problem Theory and Methods for Model Parameter Estimation. SIAM, 2005

  76. [76]

    J. Scarlett, R. Heckel, M. R. D. Rodrigues, P. Hand, and Y. C. Eldar, “Theoretical perspectives on deep learning methods in inverse problems,” IEEE Journal on Selected Areas in Information Theory, vol. 3, no. 3, pp. 433–453, Sep. 2022. [Online]. Available: http://dx.doi.org/10.1109/JSAIT.2023.3241123

  77. [77]

    R. Bassett and J. Deride, “Maximum a posteriori estimators as a limit of Bayes estimators,” Mathematical Programming, vol. 174, pp. 129–144, 2019

  78. [78]

    M. Pereyra, “Revisiting maximum-a-posteriori estimation in log-concave models,” SIAM Journal on Imaging Sciences, vol. 12, no. 1, pp. 650–670, 2019

  79. [79]

    G. A. Young and R. L. Smith, Essentials of Statistical Inference. Cambridge University Press, 2005, vol. 16

  80. [80]

    K. P. Murphy, Machine Learning: A Probabilistic Perspective. MIT Press, 2012

Showing first 80 references.