pith. machine review for the scientific record.

arxiv: 2605.13620 · v1 · submitted 2026-05-13 · 🧮 math.NA · cs.NA

Recognition: 2 theorem links · Lean Theorem

A Majorization-Minimization with Monte Carlo Approach for Hyperparameter Estimation

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 18:09 UTC · model grok-4.3

classification 🧮 math.NA cs.NA
keywords hyperparameter estimation · majorization-minimization · Monte Carlo · inverse problems · empirical Bayes · log-determinant · Gaussian priors

The pith

M³C iterates converge with high probability to a critical point of the hyperparameter cost function, obtained by majorizing the log-determinant term and approximating the majorant with Monte Carlo sampling.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Inverse problems with linear forward models and Gaussian priors often involve unknown hyperparameters arising from the model, the noise, or the specification of the prior. An empirical Bayes framework handles these by maximizing the marginalized posterior, but that optimization requires repeated, costly log-determinant evaluations. The paper introduces M³C, a method that builds a majorizing surrogate for the log-determinant term and approximates the surrogate with Monte Carlo sampling, turning one hard problem into a sequence of easier ones. Under assumptions on majorant validity and Monte Carlo accuracy, the iterates converge with high probability to a critical point of the original objective. The approach is tested on seismic tomography, super-resolution imaging, and contaminant source identification.
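To fix notation for what follows (an illustrative rendering, not the paper's exact equations): for a linear model b = Ax + ε with x ~ N(μ_x, Q_x(θ)) and ε ~ N(0, R(θ)), the marginalized objective and the standard concavity-based majorant for its log-determinant term take the form

```latex
% Psi(theta) = A Q_x(theta) A^T + R(theta), residual r = b - A mu_x
F(\theta) = \tfrac{1}{2}\log\det \Psi(\theta)
          + \tfrac{1}{2}\, r^\top \Psi(\theta)^{-1} r
          + (\text{prior terms in } \theta),
% and concavity of log det on the positive-definite cone gives, at the iterate theta_t,
\log\det \Psi(\theta) \;\le\; \log\det \Psi(\theta_t)
          + \operatorname{tr}\!\big(\Psi(\theta_t)^{-1}(\Psi(\theta) - \Psi(\theta_t))\big),
% with equality at theta = theta_t: an upper bound tangent at the anchor,
% exactly the two properties a majorant needs.
```

The trace term is what a Monte Carlo estimator then targets, since it needs only matrix-vector products rather than a determinant.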

Core claim

We replace the challenging optimization problem for hyperparameter estimation with a sequence of simpler ones by utilizing a majorization function for the log-determinant term, combined with a Monte Carlo estimator to approximate the majorant. We provide theoretical results showing that under certain assumptions, the M³C iterates converge with high probability to a critical point of the original cost function.

What carries the argument

The M³C algorithm, which constructs a majorizing function for the log-determinant term in the empirical Bayes objective and approximates it via Monte Carlo sampling to produce tractable surrogate optimization problems.
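A minimal sketch of that Monte Carlo ingredient, assuming a Hutchinson-type probe estimator for the trace term of the majorant (the paper's exact estimator and interfaces may differ; `apply_Psi`, `solve_Psi_t`, and the Rademacher probes are illustrative assumptions):

```python
import numpy as np

def mc_trace_term(apply_Psi, solve_Psi_t, dim, n_samples=50, rng=None):
    """Hutchinson estimate of tr(Psi_t^{-1} Psi(theta)) with Rademacher probes.

    apply_Psi   : callable, v -> Psi(theta) @ v        (matvec with the candidate covariance)
    solve_Psi_t : callable, v -> Psi(theta_t)^{-1} @ v (solve against the anchor covariance)
    """
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=dim)   # Rademacher probe: E[z z^T] = I
        total += z @ solve_Psi_t(apply_Psi(z))  # one draw of z^T Psi_t^{-1} Psi(theta) z
    return total / n_samples
```

The point of the construction: one factorization of Ψ(θ_t) can be cached and reused for every probe and every candidate θ inside the surrogate minimization, so no log-determinant is recomputed until the next outer iteration.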

If this is right

  • Log-determinant evaluations no longer need to be computed exactly at every iteration.
  • Hyperparameter MAP estimates become feasible for larger forward models where direct optimization was previously intractable.
  • The method applies directly to non-Gaussian posteriors arising from hierarchical models with unknown noise or prior parameters.
  • Convergence guarantees hold across the tested applications in tomography, imaging, and source identification.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The surrogate construction may extend to other non-convex or expensive terms that appear in marginal likelihoods beyond log-determinants.
  • Adaptive choice of Monte Carlo sample sizes during iteration could further reduce total cost while preserving the convergence property.
  • The approach opens a route to online or sequential hyperparameter updates when new data arrive incrementally.

Load-bearing premise

The chosen majorization function is a valid upper bound for the log-determinant and the Monte Carlo estimator is accurate enough for the sequence to converge.
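One standard way such a premise is formalized (illustrative; the paper's precise assumptions may differ): writing Ĝ(θ | θ_t) for the Monte Carlo approximation of the majorant G(θ | θ_t), require that with probability at least 1 − δ_t,

```latex
\sup_{\theta}\ \big|\,\widehat{G}(\theta \mid \theta_t) - G(\theta \mid \theta_t)\,\big| \;\le\; \varepsilon_t,
\qquad \sum_{t} \varepsilon_t < \infty, \qquad \sum_{t} \delta_t < \infty,
```

so that a union bound over iterations preserves approximate descent along the whole trajectory with high probability.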

What would settle it

Numerical runs on a small-scale inverse problem with a known critical point, showing that M³C converges when the Monte Carlo sample size meets the accuracy threshold required by the theory and diverges when the sample size is reduced below it.

Figures

Figures reproduced from arXiv:2605.13620 by Arvind K. Saibaba, Elle Buser, Hugo Díaz, Julianne Chung.

Figure 1. Seismic inversion example. The ground truth is provided, along with… [PITH_FULL_IMAGE:figures/full_fig_p018_1.png]

Recovered text accompanying Figure 2 (the super-resolution setup): the separable nonlinear inverse problem is then given by (1.2), where

$$
b = \begin{bmatrix} b_r \\ b^{(1)} \\ \vdots \\ b^{(8)} \end{bmatrix},
\qquad
\theta = \begin{bmatrix} \theta^{(1)} \\ \vdots \\ \theta^{(8)} \end{bmatrix},
\qquad
A(\theta) = \begin{bmatrix} D \\ D S(\theta^{(1)}) \\ \vdots \\ D S(\theta^{(8)}) \end{bmatrix},
$$

$x \in \mathbb{R}^{16{,}384}$ is the unknown $128 \times 128$ high-resolution image, $\mu_x = 0$, $Q_x = 25^2 I$, and $R = (0.37)^2 I$. The reference image, denoted by $b_r$, does not undergo a translation or rotation, meaning $Dx = b_r$.

Figure 2. Super-resolution example. The true high-resolution image is provided, along… [PITH_FULL_IMAGE:figures/full_fig_p020_2.png]

Figure 3. Super-resolution example. Top row: the true and initial parameters… [PITH_FULL_IMAGE:figures/full_fig_p020_3.png]

Figure 4. Advection–diffusion example. The top row provides the prior mean… [PITH_FULL_IMAGE:figures/full_fig_p022_4.png]

Figure 5. Advection–diffusion example. In the case… [PITH_FULL_IMAGE:figures/full_fig_p023_5.png]
read the original abstract

We consider inverse problems with linear forward models and Gaussian priors, but with unknown hyperparameters that may arise from the model, the noise, or the specification of the prior. We model this using a hierarchical Bayes framework resulting in a posterior distribution that is non-Gaussian, in general, and challenging to sample from. Consequently, we use an empirical Bayes framework for estimating the maximum a posteriori estimate of the hyperpameters by considering the marginalized posterior distribution. However, the optimization problem is also computationally challenging due to the need for repeated evaluation of log determinants. To address this issue, we propose a Majorization-Minimization with Monte Carlo approach, which we call M$^{3}$C, for hyperparameter estimation. Specifically, we replace the challenging optimization problem with a sequence of simpler ones by utilizing a majorization function (or majorant) for the log-determinant term, combined with a Monte Carlo estimator to approximate the majorant. We provide theoretical results, showing that under certain assumptions, the M$^{3}$C iterates converge with high probability to a critical point of the original cost function. A variety of numerical examples are provided from seismic tomography, super-resolution imaging, and contaminant source identification.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes a Majorization-Minimization with Monte Carlo (M³C) algorithm for estimating hyperparameters in hierarchical Bayesian models for inverse problems with linear forward operators and Gaussian priors. The method approximates the log-determinant term in the marginalized posterior using a majorant combined with Monte Carlo sampling, and claims that under suitable assumptions the iterates converge with high probability to a critical point of the original objective. Numerical results are presented for applications in seismic tomography, super-resolution imaging, and contaminant source identification.

Significance. If the convergence result can be strengthened with explicit quantitative bounds, the method would provide a practical route to hyperparameter optimization that avoids repeated full-matrix factorizations in large-scale linear inverse problems.

major comments (2)
  1. [Theoretical results] Theoretical results section: The high-probability convergence claim requires that the Monte Carlo error in the majorant for the log-determinant be smaller than the majorization gap at every iteration, yet no sample-complexity bound is supplied that relates the number of Monte Carlo samples, the matrix dimension, and the failure probability. Without this, the guarantee remains conditional on an assumption whose validity is not verified analytically or numerically for the high-dimensional covariances arising in the examples.
  2. [Convergence theorem] Convergence theorem: The statement that the M³C iterates converge to a critical point of the original cost function assumes the majorization function is valid and the Monte Carlo estimator is sufficiently accurate, but the proof sketch does not quantify how the Monte Carlo variance propagates through the majorization-minimization steps or how many samples are needed to maintain the descent property with high probability.
minor comments (2)
  1. [Abstract] Abstract: 'hyperpameters' is a typographical error and should read 'hyperparameters'.
  2. [Algorithm description] Notation: The distinction between the exact majorant and its Monte Carlo approximation is not always made explicit in the algorithmic description, which can obscure the source of the stochasticity in the convergence analysis.
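For scale on major comment 2: the variance of the classical Hutchinson estimator is known in closed form, which is the kind of ingredient an explicit sample-complexity bound would pair with a concentration inequality (a sketch of standard facts, not results from the paper):

```latex
% For symmetric M and N independent Gaussian probes z_i ~ N(0, I):
\widehat{\operatorname{tr}}_N(M) = \frac{1}{N}\sum_{i=1}^{N} z_i^\top M z_i,
\qquad
\operatorname{Var}\big[\widehat{\operatorname{tr}}_N(M)\big] = \frac{2\,\|M\|_F^2}{N}.
```

Chebyshev already yields accuracy ε with probability 1 − δ once N ≥ 2‖M‖²_F/(ε²δ), and Avron–Toledo-style tail bounds sharpen the 1/δ dependence to log(1/δ); here M would stand in for the symmetrized Ψ(θ_t)^{-1/2} Ψ(θ) Ψ(θ_t)^{-1/2}.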

Simulated Authors' Rebuttal

2 responses · 1 unresolved

We thank the referee for the detailed and constructive comments on our manuscript. We address each of the major comments below and outline the revisions we plan to make.

read point-by-point responses
  1. Referee: [Theoretical results] Theoretical results section: The high-probability convergence claim requires that the Monte Carlo error in the majorant for the log-determinant be smaller than the majorization gap at every iteration, yet no sample-complexity bound is supplied that relates the number of Monte Carlo samples, the matrix dimension, and the failure probability. Without this, the guarantee remains conditional on an assumption whose validity is not verified analytically or numerically for the high-dimensional covariances arising in the examples.

    Authors: We acknowledge that the high-probability convergence is conditional on the Monte Carlo error being controlled relative to the majorization gap at each iteration. The manuscript presents the result under this assumption without providing an explicit sample-complexity bound, as such a bound would depend on specific properties of the covariance matrices that vary across applications and are difficult to characterize uniformly. In the numerical examples, we have used a fixed number of Monte Carlo samples and observed consistent convergence behavior, which indirectly verifies the assumption in practice. In the revised manuscript, we will add a discussion on how the number of samples can be chosen adaptively by estimating the variance of the Monte Carlo estimator and include additional plots showing the estimated error in the log-determinant approximation for the high-dimensional cases. revision: partial

  2. Referee: [Convergence theorem] Convergence theorem: The statement that the M³C iterates converge to a critical point of the original cost function assumes the majorization function is valid and the Monte Carlo estimator is sufficiently accurate, but the proof sketch does not quantify how the Monte Carlo variance propagates through the majorization-minimization steps or how many samples are needed to maintain the descent property with high probability.

    Authors: The convergence proof follows the majorization-minimization paradigm, ensuring descent in the objective when the approximation is accurate enough, combined with high-probability bounds from concentration results for the Monte Carlo sampling. We agree that a more quantitative analysis of variance propagation would be valuable. We will revise the theoretical section to include a more detailed sketch that bounds the propagation of the Monte Carlo error through the iterations using standard matrix concentration inequalities, and provide guidance on the sample size required to achieve a desired failure probability. revision: yes

standing simulated objections not resolved
  • Deriving a general explicit sample-complexity bound applicable to arbitrary high-dimensional covariance matrices without additional assumptions.
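The adaptive sample-size rule the authors gesture at in response 1 (estimate the estimator's variance on the fly, then grow the probe count until the standard error clears a tolerance) could look like the following sketch; everything here is a hypothetical illustration, not code from the paper:

```python
import numpy as np

def adaptive_trace_estimate(quad_sample, tol, n0=20, n_max=5000, rng=None):
    """Grow the probe count until the estimated standard error of the
    Monte Carlo trace estimate falls below `tol`.

    quad_sample : callable, rng -> one draw of z^T M z for a fresh probe z
    tol         : target standard error for the averaged estimate
    """
    rng = np.random.default_rng(rng)
    draws = [quad_sample(rng) for _ in range(n0)]
    while len(draws) < n_max:
        stderr = np.std(draws, ddof=1) / np.sqrt(len(draws))  # standard error of the mean
        if stderr <= tol:
            break
        draws += [quad_sample(rng) for _ in range(len(draws))]  # double the budget
    return float(np.mean(draws)), len(draws)
```

Doubling keeps the total work within a constant factor of the final sample count, so adaptivity costs little beyond the samples the tolerance demands anyway.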

Circularity Check

0 steps flagged

No circularity: standard MM-MC framework with explicit assumptions on majorant validity and MC accuracy

full rationale

The paper applies the established majorization-minimization framework to a hierarchical Gaussian inverse problem, replacing the log-determinant term with a majorant that is then approximated by Monte Carlo. Convergence to a critical point of the original objective is stated to hold under explicit assumptions on the majorant (upper bound with equality at the current point) and on the Monte Carlo error. These assumptions are listed as external conditions rather than derived from the result itself; no equation reduces the claimed convergence to a fitted parameter, a self-referential definition, or a load-bearing self-citation. The numerical examples serve only as illustration, not as the justification for the theoretical statement. The derivation therefore remains self-contained against external benchmarks.
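The non-circular core the rationale points to is the three-step MM descent chain, which with exact majorants reads

```latex
F(\theta_{t+1}) \;\le\; G(\theta_{t+1} \mid \theta_t) \;\le\; G(\theta_t \mid \theta_t) \;=\; F(\theta_t);
```

M³C's entire burden is that the middle inequality survives only up to Monte Carlo error once G is replaced by its estimate, which is precisely where the accuracy assumption enters.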

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The method rests on the existence of a valid majorizing function for the log-determinant and sufficient accuracy of the Monte Carlo estimator to preserve convergence properties.

axioms (2)
  • domain assumption A majorization function exists that upper-bounds the log-determinant term
    Central to replacing the original optimization with simpler surrogate problems.
  • domain assumption Monte Carlo samples provide a sufficiently accurate approximation to the majorant
    Required for the high-probability convergence guarantee.

pith-pipeline@v0.9.0 · 5522 in / 1210 out tokens · 64362 ms · 2026-05-14T18:09:16.123101+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

  • IndisputableMonolith/Cost/FunctionalEquation.lean washburn_uniqueness_aczel (tag: unclear)

    Relation between the paper passage and the cited Recognition theorem:

    "We replace the challenging optimization problem with a sequence of simpler ones by utilizing a majorization function (or majorant) for the log-determinant term, combined with a Monte Carlo estimator to approximate the majorant... M³C iterates converge with high probability to a critical point of the original cost function."

  • IndisputableMonolith/Foundation/BranchSelection.lean branch_selection (tag: unclear)

    Relation between the paper passage and the cited Recognition theorem:

    "the MM principle majorizes the objective function F(θ) by a surrogate function G(θ | θ_t)... G(θ_t | θ_t) = F(θ_t) and G(θ | θ_t) ≥ F(θ)"
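Neither canon link is judged a match, but the two MM conditions quoted above are themselves mechanically checkable; a minimal Lean sketch of the descent step under exactly those conditions (a hypothetical illustration, not a theorem from the Pith canon) would be:

```lean
import Mathlib

/-- MM descent: if `G · θt` majorizes `F` and is tangent at `θt`, then any point
that does not increase the surrogate does not increase `F` either. -/
theorem mm_descent {α : Type*} (F : α → ℝ) (G : α → α → ℝ) (θt θnext : α)
    (hmaj : ∀ θ, F θ ≤ G θ θt)       -- G(θ | θt) ≥ F(θ)
    (htan : G θt θt = F θt)          -- G(θt | θt) = F(θt)
    (hmin : G θnext θt ≤ G θt θt) :  -- θnext does not increase the surrogate
    F θnext ≤ F θt :=
  calc F θnext ≤ G θnext θt := hmaj θnext
    _ ≤ G θt θt := hmin
    _ = F θt := htan
```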

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

189 extracted references · 15 canonical work pages · 1 internal anchor
