Pith · machine review for the scientific record

arXiv: 2605.13146 · v1 · submitted 2026-05-13 · 📊 stat.ML · cs.CV · cs.LG

Recognition: 2 theorem links


On Hallucinations in Inverse Problems: Fundamental Limits and Provable Assessment Methods

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 18:10 UTC · model grok-4.3

classification 📊 stat.ML · cs.CV · cs.LG
keywords hallucinations · inverse problems · deep learning · imaging · forward model · faithfulness assessment · ill-posed problems · reconstruction bounds

The pith

Hallucinations in AI image reconstructions arise necessarily from the ill-posed inverse problem, with magnitude bounds set only by the forward model.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that hallucinations—realistic but incorrect details produced by deep networks in inverse imaging problems—are not merely model failures but can be forced by the fundamental difficulty of recovering a signal from incomplete measurements. It derives the precise conditions under which any reconstruction method must introduce hallucinations and supplies computable bounds on their size that use only the known forward model. Algorithms are then given to estimate the smallest hallucination magnitude possible for any input and to check how faithful the details in a given reconstruction are. A sympathetic reader would care because these tools allow reliability assessment in medical or scientific imaging without access to ground-truth data.

Core claim

We develop a theoretical framework showing that such hallucinations are not merely artifacts of particular models, but can arise from the ill-posed nature of the inverse problem itself. We derive necessary and sufficient conditions for hallucinations, together with computable bounds on their magnitude that depend only on the forward model. Building on this theory, we introduce algorithms to estimate the minimum hallucination magnitude achievable by any reconstruction model for a given input and to assess the faithfulness of reconstructed details by a given reconstruction model.

What carries the argument

Necessary and sufficient conditions for the occurrence of hallucinations, together with bounds on their magnitude that are computed solely from the forward model.
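The paper's own algorithms aren't reproduced here, but the obstruction the bounds rest on is easy to sketch for a linear forward model: two signals that differ by a null-space element of A produce identical measurements, so any reconstruction must guess that component. A minimal NumPy illustration — the operator A and all sizes are invented for the demo, not taken from the paper:

```python
import numpy as np

# Hedged sketch (not the paper's algorithm): signals differing by a
# null-space element of A are indistinguishable from y = A x alone.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 10))   # toy underdetermined operator: 6 measurements, 10 unknowns

# Orthonormal basis of ker(A) from the SVD.
_, s, Vt = np.linalg.svd(A)
rank = int((s > 1e-10).sum())
null_basis = Vt[rank:].T           # shape (10, 10 - rank)

x_true = rng.standard_normal(10)
h = null_basis @ rng.standard_normal(null_basis.shape[1])  # an invisible "detail"
x_alt = x_true + h

assert np.allclose(A @ x_true, A @ x_alt)  # same data, different signals
print("invisible magnitude:", np.linalg.norm(h))
```

In this toy picture, a bound that "depends only on the forward model" is a statement about the geometry of ker(A) intersected with the admissible signal set.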

If this is right

  • Any reconstruction method, including modern generative models, is subject to the same hallucination limits fixed by the forward model.
  • The minimum hallucination magnitude achievable for any given input can be estimated by algorithm.
  • Faithfulness of individual details in a reconstruction can be assessed without ground-truth data.
  • The framework applies across distinct imaging tasks and supplies a principled way to quantify hallucinations.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Reconstruction algorithms could be trained or regularized to approach the theoretical minimum hallucination bound derived from the forward model.
  • The same necessary-and-sufficient conditions may extend to other ill-posed inverse problems outside imaging, such as limited-angle tomography or sparse signal recovery.
  • Practitioners could compare the estimated minimum bound against the output of any chosen model to decide whether further measurements or a different method are required.

Load-bearing premise

The forward model is known exactly and the function spaces chosen for the signals permit derivation of the necessary and sufficient conditions without further data-dependent assumptions.
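One way to probe this premise numerically: for a linear forward model, "some hallucination is unavoidable" reduces (on the simulated rebuttal's reading) to non-injectivity of A, and that condition cannot change under an invertible reparametrisation of the signal space, since ker(A T) = T⁻¹ ker(A) has the same dimension. A toy check with matrices invented for this note:

```python
import numpy as np

# Toy canonicity check (not from the paper): non-injectivity of A is
# invariant under an invertible change of variables T on the signal space.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 7))
T, _ = np.linalg.qr(rng.standard_normal((7, 7)))  # orthogonal, hence invertible

def null_dim(M, tol=1e-10):
    s = np.linalg.svd(M, compute_uv=False)
    return M.shape[1] - int((s > tol).sum())

assert null_dim(A) == null_dim(A @ T) == 3  # same kernel dimension either way
```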

What would settle it

A reconstruction method that produces zero hallucinations on an input for which the derived conditions require positive hallucination, or that achieves a hallucination magnitude strictly below the bound computed from the forward model.

Figures

Figures reproduced from arXiv: 2605.13146 by Anders C. Hansen, David Iagaru, Josselin Garnier, Nina M. Gottschling.

Figure 1: Figure 5 from [25] shows …
Figure 2: Methods to anticipate and detect hallucinations in this paper.
Figure 3: Manual detail pasting method for linear forward models with additive noise, illustrated for inverse problems …
Figure 4: The number of hallucinations increases with the worst-case kernel size: super-resolution results for MNIST …
Figure 5: Illustration of Definition 2.5 and Theorem 3.2. The reconstructions are generated by a decoder trained …
Figure 6: Qualitative hallucination examples in MRI acceleration. The first three columns show the base image …
Figure 7: Sharpness of the lower bound in Theorem 3.6. Both the errors and the diameters are measured in terms of …
Figure 8: Examples of significant detail drawing following the method described in Figure 3. A red arrow points at …
Figure 9: Quantities of Definition 2.5 and Theorem 3.2 for super-resolution of Sentinel-2 data. All the reconstruc…
Figure 10: Patch-wise application of the decoder-agnostic method of Figure 2. Dark areas of the image correspond …
Figure 11: Sharpness of the lower bound in Theorem 3.6. Both the errors and the diameters are normalised by the …
read the original abstract

Artificial intelligence (AI) has transformed imaging inverse problems, from medical diagnostics to Earth observation. Yet deep neural networks can produce hallucinations, realistic-looking but incorrect details, undermining their reliability, especially when ground truth data is unavailable. We develop a theoretical framework showing that such hallucinations are not merely artifacts of particular models, but can arise from the ill-posed nature of the inverse problem itself. We derive necessary and sufficient conditions for hallucinations, together with computable bounds on their magnitude that depend only on the forward model. Building on this theory, we introduce algorithms to: (1) estimate the minimum hallucination magnitude achievable by any reconstruction model for a given input; (2) assess the faithfulness of reconstructed details by a given reconstruction model. Experiments across three imaging tasks demonstrate that our approach applies broadly, including to modern generative models, and provides a principled way to quantify and evaluate AI hallucinations.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript develops a theoretical framework for hallucinations in AI-driven solutions to imaging inverse problems. It derives necessary and sufficient conditions for hallucinations and computable bounds on their magnitude that depend only on the forward model. Algorithms are introduced to estimate the minimum hallucination magnitude achievable by any reconstruction model and to assess faithfulness of reconstructed details for a given model. Experiments on three imaging tasks, including modern generative models, demonstrate applicability.

Significance. If the central derivations hold, the work supplies a principled, forward-model-only approach to quantifying hallucinations in ill-posed inverse problems. This would be significant for reliability assessment in applications such as medical imaging where ground truth is unavailable, moving beyond purely empirical checks.

major comments (2)
  1. [Abstract] The claim that bounds 'depend only on the forward model' is load-bearing for the central contribution, yet the definition of a hallucination as an 'incorrect detail' requires a precise signal space X (e.g., a Banach space with a topology or seminorm) that is external to the forward operator A. Without an explicit invariance argument under reasonable changes to X, the 'only on the forward model' statement does not hold.
  2. [Theory (conditions derivation)] The necessary-and-sufficient conditions (stated in the abstract) appear to presuppose a fixed admissible-signal set without data-dependent assumptions. If the proofs rely on any particular choice of X that is not shown to be canonical or invariant, the conditions risk being non-unique and the computable bounds may change with that choice.
minor comments (2)
  1. Clarify in the introduction whether the three imaging tasks are standard benchmarks or custom, and list the forward operators explicitly.
  2. Ensure all function-space notation (e.g., norms used to quantify hallucination magnitude) is defined at first use.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their thorough and constructive review. The comments raise important points about the precise dependence of our results on the forward operator alone. We address each major comment below and indicate the revisions we will make.

read point-by-point responses
  1. Referee: [Abstract] The claim that bounds 'depend only on the forward model' is load-bearing for the central contribution, yet the definition of a hallucination as an 'incorrect detail' requires a precise signal space X (e.g., a Banach space with a topology or seminorm) that is external to the forward operator A. Without an explicit invariance argument under reasonable changes to X, the 'only on the forward model' statement does not hold.

    Authors: We thank the referee for this observation. In the manuscript the signal space X is the domain on which the forward operator A is defined, and a hallucination is any nonzero component lying in ker(A). The necessary-and-sufficient condition is therefore simply that A is not injective, while the computable bounds are expressed via the operator norm of the pseudo-inverse on the orthogonal complement of the kernel (or, equivalently, distances in the quotient space X/ker(A)). These quantities are invariant under equivalent renormings of X. We will revise the abstract to state this invariance explicitly and add a short remark in Section 2 clarifying that the results hold for any norm on X that makes A continuous. revision: partial

  2. Referee: [Theory (conditions derivation)] The necessary-and-sufficient conditions (stated in the abstract) appear to presuppose a fixed admissible-signal set without data-dependent assumptions. If the proofs rely on any particular choice of X that is not shown to be canonical or invariant, the conditions risk being non-unique and the computable bounds may change with that choice.

    Authors: The admissible-signal set is exactly the fiber A^{-1}(y) determined by the forward model A and the observed measurement y; no external or data-independent set is assumed. The necessary-and-sufficient condition for hallucinations reduces to non-injectivity of A, and the magnitude bounds follow from the spectral properties of A alone. We will add a paragraph in the theory section demonstrating that the conditions and bounds remain unchanged under any continuous linear isomorphism of X that preserves the kernel and range of A, thereby establishing canonicity with respect to the forward operator. revision: partial
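The decomposition the simulated rebuttal describes — a hallucination as the component of a reconstruction lying in ker(A) — can be made concrete for a linear A via the SVD. A hedged sketch, not the paper's code; A and x_hat are arbitrary stand-ins:

```python
import numpy as np

# Hedged illustration of the rebuttal's decomposition: split a
# reconstruction x_hat into the part pinned down by the data (on the
# row space, ker(A)^perp) and the part the data cannot certify (ker(A)).
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 8))    # arbitrary stand-in forward operator

_, s, Vt = np.linalg.svd(A)
rank = int((s > 1e-10).sum())
row_basis = Vt[:rank].T            # orthonormal basis of ker(A)^perp
null_basis = Vt[rank:].T           # orthonormal basis of ker(A)

x_hat = rng.standard_normal(8)     # some reconstruction
x_data = row_basis @ (row_basis.T @ x_hat)   # measurement-determined component
x_ker = null_basis @ (null_basis.T @ x_hat)  # candidate "hallucination" component

assert np.allclose(x_data + x_ker, x_hat)    # orthogonal decomposition
assert np.allclose(A @ x_ker, 0, atol=1e-8)  # kernel part carries no signal
print("kernel-component norm:", np.linalg.norm(x_ker))
```

The norm of the kernel component is the kind of quantity the paper's bounds would control; how the paper handles nonlinear or noisy forward models is not captured by this sketch.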

Circularity Check

0 steps flagged

No significant circularity; bounds and conditions derived from forward model alone

full rationale

The paper's central claims derive necessary and sufficient conditions for hallucinations plus magnitude bounds that depend only on the forward model, without reducing to self-referential definitions, fitted parameters renamed as predictions, or load-bearing self-citations. The abstract and described framework present these as arising directly from the ill-posed structure of the inverse problem, with no equations or steps shown that collapse by construction to their inputs. This is consistent with a low circularity score: the derivation stands on its own rather than leaning on external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The framework rests on standard domain assumptions from inverse problems theory; no free parameters, new entities, or ad-hoc axioms are mentioned in the abstract.

axioms (1)
  • Domain assumption: the inverse problem is ill-posed. This is invoked as the source of inevitable hallucinations.

pith-pipeline@v0.9.0 · 5465 in / 1040 out tokens · 40569 ms · 2026-05-14T18:10:48.341846+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

65 extracted references · 6 canonical work pages · 1 internal anchor

  1. [1] J. Adler and O. Öktem. Solving ill-posed inverse problems using iterative deep neural networks. Inverse Problems, 33(12):124007, Nov. 2017.
  2. [2] U. Akhaury, P. Jablonka, J.-L. Starck, and F. Courbin. Ground-based image deconvolution with Swin transformer UNet. Astronomy and Astrophysics, 688:A6, July 2024.
  3. [3] V. Antun, F. Renna, C. Poon, B. Adcock, and A. C. Hansen. On instabilities of deep learning in image reconstruction and the potential costs of AI. Proc. Natl. Acad. Sci. USA, 117(48):30088–30095, 2020.
  4. [5] S. Arridge, P. Maass, O. Öktem, and C.-B. Schönlieb. Solving inverse problems using data-driven models. Acta Numer., 28:1–174, 2019.
  5. [6] C. Aybar, D. Montero, S. Donike, F. Kalaitzis, and L. Gómez-Chova. A comprehensive benchmark for optical remote sensing image super-resolution. IEEE Geoscience and Remote Sensing Letters, 21:1–5, 2024.
  6. [7] G. M. Barco, A. Adam, C. Stone, Y. Hezaveh, and L. Perreault-Levasseur. Tackling the problem of distributional shifts: Correcting misspecified, high-dimensional data-driven priors for inverse problems. The Astrophysical Journal, 980(1):108, Feb. 2025.
  7. [8] C. Belthangady and L. A. Royer. Applications, promises, and pitfalls of deep learning for fluorescence image reconstruction. Nature Methods, 16(12):1215–1225, 2019.
  8. [9] S. Bhadra, V. A. Kelkar, F. J. Brooks, and M. A. Anastasio. On hallucinations in tomographic image reconstruction. IEEE Transactions on Medical Imaging, 40(11):3249–3260, 2021.
  9. [10] S. Bhadra, U. Villa, and M. A. Anastasio. Mining the manifolds of deep generative models for multiple data-consistent solutions of ill-posed tomographic imaging problems. arXiv preprint arXiv:2202.05311, 2022.
  10. [11] C. M. Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108–116, 1995.
  11. [12] J. Bitterwolf, A. Meinke, and M. Hein. Certifiably adversarially robust detection of out-of-distribution data. Advances in Neural Information Processing Systems, 33:16085–16095, 2020.
  12. [13] Y. Blau and T. Michaeli. The perception-distortion tradeoff. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6228–6237. IEEE, 2018.
  13. [14] M. J. Colbrook, V. Antun, and A. C. Hansen. The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale's 18th problem. Proc. Natl. Acad. Sci. USA, 119(12):e2107151119, 2022.
  14. [15] M. Dashti and A. M. Stuart. The Bayesian approach to inverse problems. In Handbook of Uncertainty Quantification, pages 311–428. Springer, 2017.
  15. [16] J. de Vries. Advanced-OpenGL - Blending, 2024. Accessed: 2026-01-14.
  16. [17] L. Deng. The MNIST database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Processing Magazine, 29(6):141–142, 2012.
  17. [18] S. Donike, C. Aybar, L. Gómez-Chova, and F. Kalaitzis. Trustworthy super-resolution of multispectral Sentinel-2 imagery with latent diffusion. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 18:6940–6952, 2025.
  18. [19] Z. Fang, Y. Li, F. Liu, B. Han, and J. Lu. On the learnability of out-of-distribution detection. Journal of Machine Learning Research, 25(84):1–83, 2024.
  19. [20] M. Genzel, J. Macdonald, and M. März. Solving inverse problems with deep neural networks: robustness included? IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1):1119–1134, 2022.
  20. [21] N. M. Gottschling, V. Antun, A. C. Hansen, and B. Adcock. The troublesome kernel: On hallucinations, no free lunches, and the accuracy-stability tradeoff in inverse problems. SIAM Review, 67(1):73–104, 2025.
  21. [22] N. M. Gottschling, P. Campodonico, V. Antun, and A. C. Hansen. On the existence of optimal multi-valued decoders and their accuracy bounds for undersampled inverse problems. To appear in EJAM, 2023.
  22. [23] N. M. Gottschling, D. Iagaru, J. Gawlikowski, and I. Sgouralis. Average kernel sizes: computable sharp accuracy bounds for inverse problems. arXiv preprint arXiv:2510.10229, 2025.
  23. [24] A. Hakim, R. Rohner, A. Winklehner, J.-B. Rossel, C. Lehmann, R. Wiest, J. Gralla, and E. Piechowiak. Deep Resolve Boost in 2D MRI for neuroradiology: A comparative evaluation of diagnostic gains and potential risks. American Journal of Neuroradiology, 2025.
  24. [25] A. Hakim, R. Rohner, A. Winklehner, J.-B. Rossel, C. Lehmann, R. Wiest, J. Gralla, and E. Piechowiak. Deep Resolve Boost in 2D MRI for neuroradiology: A comparative evaluation of diagnostic gains and potential risks. American Journal of Neuroradiology, 2026.
  25. [26] D. Hendrycks, S. Basart, M. Mazeika, A. Zou, J. Kwon, M. Mostajabi, J. Steinhardt, and D. Song. Scaling out-of-distribution detection for real-world settings. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning…
  26. [27] D. P. Hoffman, I. Slavitt, and C. A. Fitzpatrick. The promise and peril of deep learning in microscopy. Nature Methods, 18(2):131–132, 2021.
  27. [28] L. Huang, Y. Li, N. Pillar, T. Keidar Haran, W. D. Wallace, and A. Ozcan. A robust and scalable framework for hallucination detection in virtual tissue staining and digital pathology. Nature Biomedical Engineering, 9:2196–2214, 2025.
  28. [29] J. Kaipio and E. Somersalo. Statistical and Computational Inverse Problems, volume 160 of Applied Mathematical Sciences. Springer, New York, 2005.
  29. [30] S. Kamyab, Z. Azimifar, R. Sabzi, and P. Fieguth. Deep learning methods for inverse problems. PeerJ Computer Science, 8:e951, 2022.
  30. [31] S. Kapoor and A. Narayanan. Leakage and the reproducibility crisis in machine-learning-based science. Patterns, 4(9):100804, 2023.
  31. [32] T. Karras, M. Aittala, T. Aila, and S. Laine. Elucidating the design space of diffusion-based generative models. In Advances in Neural Information Processing Systems 35 (NeurIPS 2022), pages 22008–22024. Neural Information Processing Systems Foundation, Inc., 2022.
  32. [33] J. Kim, J. K. Lee, and K. M. Lee. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1646–1654, 2016.
  33. [34] S. Kim, H. F. J. Tregidgo, M. Figini, C. Jin, S. Joshi, and D. C. Alexander. Tackling hallucination from conditional models for medical image reconstruction with DynamicDPS. In Proceedings of Medical Image Computing and Computer Assisted Intervention – MICCAI 2025, volume LNCS 15963. Springer Nature Switzerland, September 2025.
  34. [35] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  35. [36] R. F. Laine, I. Arganda-Carreras, R. Henriques, and G. Jacquemet. Avoiding a replication crisis in deep-learning-based bioimage analysis. Nature Methods, 18(10):1136–1144, 2021.
  36. [37] M.-H. Laves, M. Tölle, and T. Ortmaier. Uncertainty estimation in medical image denoising with Bayesian deep image prior. In International Workshop on Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, pages 81–96. Springer, 2020.
  37. [38] J. Li, I. Rosellon-Inclan, G. Kutyniok, and J.-L. Starck. CHEM: Estimating and understanding hallucinations in deep learning for image processing. arXiv preprint arXiv:2512.09806, 2025.
  38. [39] C. Liu, T. Arnon, C. Lazarus, C. Strong, C. Barrett, and M. J. Kochenderfer. Algorithms for verifying deep neural networks. Foundations and Trends® in Optimization, 4(3–4):244–404, 2021.
  39. [40] X. Liu, B. Glocker, M. M. McCradden, M. Ghassemi, A. K. Denniston, and L. Oakden-Rayner. The medical algorithmic audit. The Lancet Digital Health, 2022.
  40. [41] S. Mallat. Group invariant scattering. Communications on Pure and Applied Mathematics, 65(10):1331–1398, 2012.
  41. [42] J. N. Morshuis, S. Gatidis, M. Hein, and C. F. Baumgartner. Adversarial robustness of MR image reconstruction under realistic perturbations. In International Workshop on Machine Learning for Medical Image Reconstruction, pages 24–33. Springer, 2022.
  42. [43] M. J. Muckley, B. Riemenschneider, A. Radmanesh, S. Kim, G. Jeong, J. Ko, Y. Jun, H. Shin, D. Hwang, M. Mostapha, S. Arberet, D. Nickel, Z. Ramzi, P. Ciuciu, J.-L. Starck, J. Teuwen, D. Karkalousos, C. Zhang, A. Sriram, Z. Huang, N. Yakubova, Y. W. Lui, and F. Knoll. Results of the 2020 fastMRI challenge for machine learning MR image reconstruction. IEEE…
  43. [44] M. Murgia, D. Clark, and C. Murray. AI hallucinations haunt users more than job losses. https://www.ft.com/content/e074d3a9-7fd8-447d-ac0a-e0de756ac5c5?syn-25a6b1a6=1, 2026. Accessed: 2026-04-13.
  44. [45] F. Natterer. The Mathematics of Computerized Tomography. Classics in Applied Mathematics. SIAM, Philadelphia, 1986.
  45. [46] Y. Ovadia, E. Fertig, J. Ren, Z. Nado, D. Sculley, S. Nowozin, J. V. Dillon, B. Lakshminarayanan, and J. Snoek. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
  46. [47] A. Pal and Y. Rathi. A review and experimental evaluation of deep learning methods for MRI reconstruction. Machine Learning for Biomedical Imaging, 1, 2022.
  47. [48] F. Renard, S. Guedria, N. D. Palma, and N. Vuillerme. Variability and reproducibility in deep learning for medical image segmentation. Scientific Reports, 10(1):13724, Aug. 2020.
  48. [49] W. H. Richardson. Bayesian-based iterative method of image restoration. Journal of the Optical Society of America, 62:55–59, 1972.
  49. [50] S. Rifat, J. Ashdown, and F. Restuccia. DARDA: Domain-aware real-time dynamic neural network adaptation. In Proceedings of the 2025 Winter Conference on Applications of Computer Vision (WACV), pages 1–12. IEEE/CVF, 2025.
  50. [51] M. L. Sampson and P. Melchior. Spotting hallucinations in inverse problems with data-driven priors. arXiv preprint arXiv:2306.13272, 2023.
  51. [52] A. S. Sayyed, N. D. Bastian, and F. Restuccia. ENCORE: A neural collapse perspective on out-of-distribution detection in deep neural networks. In Proceedings of Winter Conference on Applications of Computer Vision (WACV), 2026.
  52. [53] H. Shevlin and W. Nichols. Cambridge Dictionary names 'hallucinate' word of the year 2023. https://www.cam.ac.uk/research/news/cambridge-dictionary-names-hallucinate-word-of-the-year-2023,
  53. [54] Accessed: 2025-01-21.
  54. [55] E. Shimron, J. I. Tamir, K. Wang, and M. Lustig. Implicit data crimes: Machine learning bias arising from misuse of public data. Proceedings of the National Academy of Sciences, 119(13):e2117203119, 2022.
  55. [56] J. Sietsma and R. J. F. Dow. Creating artificial neural networks that generalize. Neural Networks, 4(1):67–79, 1991.
  56. [57] A. M. Stuart. Inverse problems: A Bayesian perspective. Acta Numerica, 19:451–559, 2010.
  57. [58] M. Tivnan, S. Yoon, Z. Chen, X. Li, D. Wu, and Q. Li. Hallucination index: An image quality metric for generative reconstruction models. In Proceedings of Medical Image Computing and Computer Assisted Intervention – MICCAI 2024, volume LNCS 15010. Springer Nature Switzerland, October 2024.
  58. [59] M. Tölle, M.-H. Laves, and A. Schlaefer. A mean-field variational inference approach to deep image prior for inverse problems in medical imaging. In M. Heinrich, Q. Dou, M. de Bruijne, J. Lellmann, A. Schläfer, and F. Ernst, editors, Proceedings of the Fourth Conference on Medical Imaging with Deep Learning, volume 143 of Proceedings of Machine Learning…
  59. [60] G. Varoquaux and V. Cheplygina. Machine learning for medical imaging: Methodological failures and recommendations for the future. NPJ Digital Medicine, 5(1):1–8, 2022.
  60. [61] J. Virieux and S. Operto. An overview of full-waveform inversion in exploration geophysics. Geophysics, 74:WCC1–WCC26, Nov. 2009.
  61. [62] L. Wald, T. Ranchin, and M. Mangolini. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogrammetric Engineering and Remote Sensing, 63:691–699, Nov. 1997.
  62. [63] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
  63. [64] X. Yuan and S. Pang. Structured illumination temporal compressive microscopy. Biomedical Optics Express, 7(3):746–758, 2016.
  64. [65] J. Zbontar, F. Knoll, A. Sriram, T. Murrell, Z. Huang, M. J. Muckley, A. Defazio, R. Stern, P. Johnson, M. Bruno, et al. fastMRI: An open dataset and benchmarks for accelerated MRI. arXiv preprint arXiv:1811.08839, 2018.
  65. [66] R. Zhao, B. Yaman, Y. Zhang, R. Stewart, A. Dixon, F. Knoll, Z. Huang, Y. W. Lui, M. S. Hansen, and M. P. Lungren. fastMRI+: Clinical pathology annotations for knee and brain fully sampled magnetic resonance imaging data. Scientific Data, 9(1):152, Apr. 2022.