pith. machine review for the scientific record.

arxiv: 2604.26745 · v1 · submitted 2026-04-29 · 🧮 math.NA · cs.NA

Recognition: unknown

Robust Model-Based Iteration for Passive Gamma Emission Tomography

Authors on Pith: no claims yet

Pith reviewed 2026-05-07 12:35 UTC · model grok-4.3

classification 🧮 math.NA cs.NA
keywords: passive gamma emission tomography · inverse problems · Levenberg-Marquardt · deep learning · nuclear fuel verification · iterative reconstruction · trust-region methods

The pith

A safeguarded hybrid algorithm reaches LM-quality passive gamma tomography reconstructions in roughly one third of the standard iteration count.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops an accelerated solver for the nonlinear inverse problem of reconstructing emission and attenuation maps from PGET measurements of spent nuclear fuel. It integrates the Levenberg-Marquardt method with a Deep Gauss-Newton step that uses a learned operator to refine updates at each iteration. Three architectures are compared: convolutional neural networks, Fourier Neural Operators, and Wavelet Neural Operators, all trained on a small set of coarsely simulated 9x9 assemblies. A trust-region safeguard ensures the method does not perform worse than pure LM and maintains convergence to a critical point of the regularized objective. Tests on simulated and real data from Finnish nuclear power plants demonstrate that LM-quality results are achieved in roughly one third the iterations, with architecture-dependent trade-offs in robustness to out-of-distribution inputs.

Core claim

The proposed robust model-based iteration combines the Levenberg-Marquardt algorithm with a learned Deep Gauss-Newton step under a trust-region safeguard, allowing the solver to reach the accuracy of standard LM in approximately one third as many iterations for PGET reconstructions while preserving convergence to a critical point.
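In generic notation (a reconstruction from the abstract's description; the symbols $F$, $R$, $\alpha$, $\lambda_k$, and $\mathcal{N}_\Theta$ are chosen here for illustration and are not taken from the paper), the safeguarded step amounts to:

```latex
% Generic sketch, not the paper's exact notation.
% Regularized objective for reconstruction x from data y:
\Phi(x) = \tfrac{1}{2}\,\|F(x) - y\|^2 + \alpha\, R(x)

% Levenberg-Marquardt update with damping \lambda_k, J_k = F'(x_k):
x_{k+1}^{\mathrm{LM}} = x_k - \bigl(J_k^\top J_k + \lambda_k I\bigr)^{-1}\,\nabla \Phi(x_k)

% Deep Gauss-Newton proposal from a learned operator \mathcal{N}_\Theta,
% accepted only if it does at least as well as the LM step:
x_{k+1}^{\Theta} = \mathcal{N}_\Theta\bigl(x_k,\, x_{k+1}^{\mathrm{LM}}\bigr),
\qquad
x_{k+1} =
\begin{cases}
x_{k+1}^{\Theta} & \text{if } \Phi\bigl(x_{k+1}^{\Theta}\bigr) \le \Phi\bigl(x_{k+1}^{\mathrm{LM}}\bigr),\\[2pt]
x_{k+1}^{\mathrm{LM}} & \text{otherwise,}
\end{cases}
```

so whatever the learned operator does, the accepted iterate never increases the objective relative to the plain LM update, which is what preserves convergence to a critical point.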

What carries the argument

The hybrid iteration scheme where a learned operator (CNN, FNO or WNO) proposes a refined update that is accepted only if it satisfies the trust-region model condition based on the regularized objective.
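The safeguarded hybrid loop can be sketched in Python. This is a toy sketch under stated assumptions: `learned_op`, the fixed damping `lam` (the paper presumably adapts it), the Tikhonov-style regularizer, and the quadratic test problem are all illustrative, not the paper's implementation.

```python
import numpy as np

def objective(F, x, reg):
    """Regularized least-squares objective 0.5*||F(x)||^2 + 0.5*reg*||x||^2 (illustrative regularizer)."""
    r = F(x)
    return 0.5 * r @ r + 0.5 * reg * x @ x

def lm_step(F, J, x, lam, reg):
    """One Levenberg-Marquardt update for the regularized objective."""
    Jx = J(x)
    g = Jx.T @ F(x) + reg * x                      # gradient of the objective
    H = Jx.T @ Jx + (reg + lam) * np.eye(x.size)   # damped Gauss-Newton Hessian
    return x - np.linalg.solve(H, g)

def safeguarded_iteration(F, J, x0, learned_op, lam=1e-2, reg=1e-3, n_iter=10):
    """Hybrid iteration: the learned operator proposes a refined update,
    accepted only if it decreases the objective at least as much as the LM step;
    otherwise the iterate falls back to plain LM (the safeguard)."""
    x, accepted = x0, 0
    for _ in range(n_iter):
        x_lm = lm_step(F, J, x, lam, reg)
        x_dl = learned_op(x, x_lm)                 # learned refinement of the LM update
        if objective(F, x_dl, reg) <= objective(F, x_lm, reg):
            x, accepted = x_dl, accepted + 1       # learned step passes the safeguard
        else:
            x = x_lm                               # rejected: keep the LM step
    return x, accepted

# Hypothetical usage on a linear toy problem F(x) = A x - b, with a dummy
# "learned" operator that simply extrapolates the LM step.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
F = lambda x: A @ x - b
Jf = lambda x: A
op = lambda x, x_lm: x_lm + 0.5 * (x_lm - x)       # stand-in for the trained network
x_final, n_accepted = safeguarded_iteration(F, Jf, np.zeros(2), op, n_iter=20)
```

The key property the sketch exhibits is the one the review leans on: a bad proposal from `learned_op` can never make the iterate worse than LM, because rejection falls back to the deterministic update.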

Load-bearing premise

The learned operator generalizes sufficiently from the small set of coarsely simulated 9x9 assemblies to real and out-of-distribution measurements.

What would settle it

If experiments on new real PGET data show that the hybrid method requires as many or more iterations than LM or fails to converge for some cases, the acceleration claim would not hold.

Figures

Figures reproduced from arXiv: 2604.26745 by Riina Rimppi, Sara Heikkinen, Tapio Helin, Tommi Heikkilä.

Figure 1: Dummy fuel assembly and the PGET measure…
Figure 2: Visualization of the convolutional neural net…
Figure 3: Illustration of the FNO architecture with initial…
Figure 4: Relative errors of the emission and attenuation…
Figure 5: CNN iterates on one sample from 'Hard' and…
Figure 8: Reconstructions using real data and 5 iterations.

(Captions truncated in the source; full figures viewable at the arXiv original.)
Original abstract

Passive Gamma Emission Tomography (PGET) is an IAEA-approved technique for verifying spent nuclear fuel assemblies prior to geological disposal. Reconstructing the emission and attenuation maps from PGET measurements is a nonlinear ill-posed inverse problem, currently solved with a Levenberg-Marquardt (LM) scheme that requires 10-20 iterations to achieve sufficient accuracy. We propose an accelerated iterative solver that combines the LM algorithm with a Deep Gauss-Newton step, in which a learned operator refines the update proposed by the deterministic algorithm at each iteration. A safeguard condition based on the trust-region model ensures that the accelerated iterates perform no worse than LM and retain convergence to a critical point of the regularized objective. Within this framework we compare three architectures for the learned component: an encoder-decoder-style convolutional neural network, Fourier Neural Operators, and Wavelet Neural Operators. Each is trained on a small set of coarsely simulated 9x9 assemblies. Experiments on simulated and real measurements from Finnish nuclear power plants show that the proposed scheme reaches LM-quality reconstructions in roughly one third of the iterations, while revealing architecture-dependent trade-offs in robustness against out-of-distribution inputs.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims to introduce a safeguarded hybrid Levenberg-Marquardt and Deep Gauss-Newton iteration for PGET reconstruction. A learned operator (implemented via CNN, FNO or WNO) accelerates the LM updates, with a trust-region safeguard ensuring the hybrid method performs no worse than standard LM and converges to a critical point of the objective. Trained on small sets of simulated 9x9 assemblies, the method is tested on simulated and real data from Finnish nuclear plants, reportedly achieving equivalent reconstruction quality in about one-third the iterations, with architecture-specific robustness properties.

Significance. If validated, the approach could substantially accelerate PGET-based verification of spent nuclear fuel, a key IAEA safeguard technique, by reducing iteration counts while maintaining reliability through the safeguard. The explicit comparison of convolutional, Fourier, and wavelet neural operators highlights practical trade-offs in applying learned methods to this inverse problem. Strengths include the use of real measurement data and the focus on robustness via the model-based safeguard. This contributes to the growing field of hybrid physics-informed and data-driven solvers for ill-posed problems. The result, if the generalization holds, has potential for broader application in similar tomography settings.

major comments (2)
  1. [Results and Experiments] The abstract and results claim that the proposed scheme reaches LM-quality reconstructions in roughly one third of the iterations on real data, but no quantitative metrics (e.g., error norms, exact iteration numbers, or statistical measures) are supplied to support this. Additionally, details on the safeguard activation rate or convergence criteria are missing, undermining the ability to verify the speedup and robustness claims. This directly impacts the central experimental contribution.
  2. [§3 (Method and Safeguard)] The convergence guarantee for the safeguarded iteration relies on the learned operator generalizing from the small training set of coarsely simulated assemblies to real PGET measurements. No specific analysis, bounds, or empirical statistics on out-of-distribution performance (such as step acceptance rates on real data) are provided to support that the safeguard prevents degradation or preserves convergence properties when generalization is imperfect. This is a load-bearing assumption for the robustness claim.
minor comments (2)
  1. [Abstract] The phrase 'roughly one third' is vague; referencing specific figures or tables with precise ratios would improve clarity.
  2. [Notation] The description of the Deep Gauss-Newton step would benefit from an explicit equation defining the learned operator's input and output.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive review and positive assessment of the significance of our work. We agree that the experimental validation can be strengthened with additional quantitative details and will revise the manuscript to address both major comments.

Point-by-point responses
  1. Referee: [Results and Experiments] The abstract and results claim that the proposed scheme reaches LM-quality reconstructions in roughly one third of the iterations on real data, but no quantitative metrics (e.g., error norms, exact iteration numbers, or statistical measures) are supplied to support this. Additionally, details on the safeguard activation rate or convergence criteria are missing, undermining the ability to verify the speedup and robustness claims. This directly impacts the central experimental contribution.

    Authors: We agree that explicit quantitative metrics would allow direct verification of the speedup claim. Although the manuscript presents comparative results via figures for both simulated and real data, it does not include a summary table of exact iteration counts to convergence, error norms, or safeguard statistics. In the revision we will add a table for the real-data experiments reporting average iterations required to reach the convergence tolerance for standard LM and each hybrid variant, final objective values, and the observed frequency of safeguard activations. We will also state the precise convergence criterion employed (relative change in the objective below a fixed threshold). This will substantiate the one-third iteration claim with verifiable numbers. revision: yes

  2. Referee: [§3 (Method and Safeguard)] The convergence guarantee for the safeguarded iteration relies on the learned operator generalizing from the small training set of coarsely simulated assemblies to real PGET measurements. No specific analysis, bounds, or empirical statistics on out-of-distribution performance (such as step acceptance rates on real data) are provided to support that the safeguard prevents degradation or preserves convergence properties when generalization is imperfect. This is a load-bearing assumption for the robustness claim.

    Authors: The convergence argument is model-based and holds because the trust-region safeguard only accepts a learned step when it produces at least as much objective decrease as the corresponding LM step; rejected steps fall back to the standard LM update, preserving the descent property irrespective of generalization quality. We acknowledge that the current manuscript provides no explicit statistics on acceptance rates for the real (out-of-distribution) measurements. In the revision we will add these empirical statistics, reporting for each architecture the fraction of iterations on the Finnish real-data sets in which the learned step was accepted versus rejected. This will quantify the safeguard's practical role when generalization is imperfect. We do not claim theoretical generalization bounds, but the added acceptance-rate data will directly support the robustness claim. revision: yes
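The bookkeeping the authors promise for the revision (safeguard acceptance rate plus a relative-change stopping rule) is simple to pin down. The helper below is a hypothetical illustration, not code from the paper: it replays the safeguard test on per-iteration objective values for the LM step and the learned step, and reports the acceptance fraction and the first iteration at which the relative objective change falls below a tolerance.

```python
def safeguard_stats(objs_lm, objs_learned, tol=1e-4):
    """Replay the safeguard test on per-iteration objective values.

    objs_lm[k]      -- objective after the plain LM step at iteration k
    objs_learned[k] -- objective after the learned (Deep Gauss-Newton) step
    Returns (acceptance_rate, iterations_to_convergence), where convergence
    means the relative change in the accepted objective drops below tol.
    """
    accepted = 0
    prev = None
    iters_to_conv = None
    history = []
    for k, (f_lm, f_dl) in enumerate(zip(objs_lm, objs_learned)):
        take = f_dl <= f_lm              # safeguard: learned step must do at least as well
        f = f_dl if take else f_lm       # accepted objective value this iteration
        accepted += take
        if prev is not None and iters_to_conv is None:
            if abs(prev - f) / max(abs(prev), 1e-12) < tol:
                iters_to_conv = k + 1    # relative change fell below tol
        prev = f
        history.append(f)
    return accepted / len(history), iters_to_conv

# Hypothetical per-iteration objective values for illustration only.
rate, iters = safeguard_stats([10.0, 5.0, 3.0, 3.0001],
                              [9.0, 6.0, 2.5, 2.50001])
```

On this made-up trace, the learned step is accepted in three of four iterations and the relative-change criterion fires at iteration four; on real data, the same tallies per architecture would be exactly the acceptance-rate statistics the rebuttal commits to reporting.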

Circularity Check

0 steps flagged

No circularity; hybrid scheme rests on independent training and standard trust-region safeguards

full rationale

The paper introduces a hybrid LM + learned Deep Gauss-Newton iteration with a trust-region safeguard to guarantee that accelerated steps are never worse than pure LM and retain convergence to a critical point. The learned operators (CNN, FNO, WNO) are trained separately on a small set of coarsely simulated assemblies and then applied to both simulated and real PGET data. No step in the claimed chain reduces by definition or self-citation to the target reconstruction result; the convergence claim invokes standard LM properties rather than a fitted parameter renamed as prediction. Experimental speed-up claims are empirical outcomes, not tautological. This is the normal case of a self-contained algorithmic proposal.

Axiom & Free-Parameter Ledger

1 free parameter · 2 axioms · 1 invented entity

The claim depends on standard convergence of Levenberg-Marquardt, the validity of the trust-region safeguard for the hybrid step, and generalization of networks trained on limited simulated data.

free parameters (1)
  • neural network weights and biases
    Learned parameters of the CNN, FNO, or WNO that define the Deep Gauss-Newton update; fitted during training on simulated assemblies.
axioms (2)
  • (standard math) Levenberg-Marquardt iteration converges to a critical point of the regularized objective
    Invoked as the baseline whose convergence the hybrid method must preserve.
  • (domain assumption) Trust-region model provides a reliable safeguard for the learned update
    Assumed to guarantee that accelerated iterates never perform worse than plain LM.
invented entities (1)
  • Deep Gauss-Newton step realized by a learned operator (no independent evidence)
    purpose: To refine the deterministic LM update at each iteration
    New component introduced by the paper; no independent evidence outside the training and safeguard.

pith-pipeline@v0.9.0 · 5508 in / 1460 out tokens · 46998 ms · 2026-05-07T12:35:45.031974+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

63 extracted references · 13 canonical work pages · 2 internal anchors

  1. [1] Marcin Andrychowicz et al. "Learning to learn by gradient descent by gradient descent". In: Advances in Neural Information Processing Systems 29 (2016).
  2. [2] Jason Ansel et al. "PyTorch 2: Faster Machine Learning Through Dynamic Python Bytecode Transformation and Graph Compilation". In: Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2 (ASPLOS '24). La Jolla, CA, USA: ACM, 2024, pp. 929–9…
  3. [3] Rasmus Backholm et al. "Simultaneous reconstruction of emission and attenuation in passive gamma emission tomography of spent nuclear fuel". In: Inverse Problems & Imaging 14.2 (2020).
  4. [4] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation". In: IEEE Transactions on Pattern Analysis and Machine Intelligence 39.12 (2017), pp. 2481–2495. doi: 10.1109/TPAMI.2016.2644615.
  5. [5] Camille Bélanger-Champagne et al. "Effect of gamma-ray energy on image quality in passive gamma emission tomography of spent nuclear fuel". In: IEEE Transactions on Nuclear Science 66.1 (2018), pp. 487–496.
  6. [6] Silvia Bonettini, Riccardo Zanella, and Luca Zanni. "A scaled gradient projection method for constrained image deblurring". In: Inverse Problems 25.1 (2008), p. 015002.
  7. [7] Kristian Bredies, Jonathan Chirinos-Rodriguez, and Emanuele Naldi. "Learning firmly nonexpansive operators". In: arXiv preprint arXiv:2407.14156 (2024).
  8. [8] Nicola Cavallini et al. "Vanquishing the computational cost of passive gamma emission tomography simulations leveraging physics-aware reduced order modeling". In: Scientific Reports 13.1 (2023), p. 15034.
  9. [9] Tianlong Chen et al. "Learning to optimize: A primer and a benchmark". In: Journal of Machine Learning Research 23.189 (2022), pp. 1–59.
  10. [10] Haocheng Dai et al. "Neural operator learning for ultrasound tomography inversion". In: arXiv preprint arXiv:2304.03297 (2023).
  11. [11] Volker Dicken. "A new approach towards simultaneous activity and attenuation reconstruction in emission tomography". In: Inverse Problems 15.4 (1999), pp. 931–960.
  12. [12] Heinz Werner Engl, Martin Hanke, and A. Neubauer. Regularization of Inverse Problems. 1st ed. Mathematics and Its Applications. Dordrecht: Kluwer Academic Publishers, 1996. isbn: 978-0-7923-4157-4.
  13. [13] Miller Erin et al. "Assessing instrument performance for passive gamma emission tomography of spent fuel". In: INMM 59th Annual Meeting Paper, Advanced Nondestructive Assay Techniques for Fuel Assemblies, 2018.
  14. [14] Patrick Fahy, Mohammad Golbabaee, and Matthias J. Ehrhardt. "Greedy Learning to Optimize with Convergence Guarantees". In: arXiv preprint arXiv:2406.00260 (2024).
  15. [15] Jean-Baptiste Fest et al. "On a fixed-point continuation method for a convex optimization problem". In: INdAM Workshop: Advanced Techniques in Optimization for Machine Learning and Imaging. Springer, 2022, pp. 15–30.
  16. [16] Donald Goldfarb and Shiqian Ma. "Convergence of Fixed-Point Continuation Algorithms for Matrix Rank Minimization". In: Foundations of Computational Mathematics 11.2 (2011), pp. 183–210. doi: 10.1007/s10208-011-9084-6.
  17. [17] Rafael C. Gonzalez and Richard E. Woods. Digital Image Processing. 4th ed. Pearson Education, 2018. isbn: 978-0-13-335672-4.
  18. [18] Elaine Hale, Wotao Yin, and Yin Zhang. "Fixed-Point Continuation for ℓ1-Minimization: Methodology and Convergence". In: SIAM Journal on Optimization 19.3 (2008), pp. 1107–1130. doi: 10.1137/070698920.
  19. [19] Martin Hanke. "The regularizing Levenberg-Marquardt scheme is of optimal order". In: The Journal of Integral Equations and Applications (2010), pp. 259–283.
  20. [20] Andreas Hauptmann et al. "Model-based learning for accelerated, limited-view 3-D photoacoustic tomography". In: IEEE Transactions on Medical Imaging 37.6 (2018), pp. 1382–1393.
  21. [21] Howard Heaton et al. "Safeguarded learned convex optimization". In: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37.6, 2023, pp. 7848–7855.
  22. [22] Sara Heikkinen. "Model-based imaging of spent nuclear fuel with passive gamma emission tomography". Master's thesis, LUT University, 2024. https://urn.fi/URN:NBN:fi-fe2024112797098.
  23. [23] Dan Hendrycks and Kevin Gimpel. "Gaussian Error Linear Units (GELUs)". In: arXiv preprint arXiv:1606.08415 (2016).
  24. [24] William Herzberg et al. "Graph convolutional networks for model-based learning in nonlinear inverse problems". In: IEEE Transactions on Computational Imaging 7 (2021), pp. 1341–1353.
  25. [25] Tapani Honkamaa et al. "A prototype for passive gamma emission tomography". In: Proceedings of Symposium on International Safeguards, 2014.
  26. [26] Samira Kabri et al. "Resolution-invariant image classification based on Fourier neural operators". In: International Conference on Scale Space and Variational Methods in Computer Vision. Springer, 2023, pp. 236–249.
  27. [27] Ulugbek S. Kamilov et al. "Plug-and-play methods for integrating physical and learned models in computational imaging: Theory, algorithms, and applications". In: IEEE Signal Processing Magazine 40.1 (2023), pp. 85–97.
  28. [28] Carl T. Kelley. Iterative Methods for Optimization. SIAM, 1999.
  29. [29] Nikhil Ketkar and Jojo Moolayil. Deep Learning with Python: Learn Best Practices of Deep Learning Models with PyTorch. 2nd ed. Apress Media, 2021. isbn: 978-1-4842-5364-9.
  30. [30] Kirsten Koolstra and Rob Remis. "Learning a preconditioner to accelerate compressed sensing reconstructions in MRI". In: Magnetic Resonance in Medicine 87.4 (2022), pp. 2063–2073.
  31. [31] Jean Kossaifi et al. A Library for Learning Neural Operators. 2024.
  32. [32] Nikola Kovachki et al. "Neural operator: learning maps between function spaces with applications to PDEs". In: Journal of Machine Learning Research 24.1 (2023). issn: 1532-4435.
  33. [33] Samuel Lanthaler, Zongyi Li, and Andrew M. Stuart. "Nonlocality and nonlinearity implies universality in operator learning". In: Constructive Approximation (2025), pp. 1–43.
  34. [34] Y. LeCun et al. "Backpropagation Applied to Handwritten Zip Code Recognition". In: Neural Computation 1.4 (1989), pp. 541–551. issn: 0899-7667.
  35. [35] Yichen Li et al. "Learning preconditioners for conjugate gradient PDE solvers". In: International Conference on Machine Learning. PMLR, 2023, pp. 19425–19439.
  36. [36] Zongyi Li et al. "Fourier Neural Operator for Parametric Partial Differential Equations". In: arXiv preprint arXiv:2010.08895 (2020). Published as a conference paper at ICLR 2021.
  37. [37] Isaac Liao et al. "Learning to optimize quasi-Newton methods". In: arXiv preprint arXiv:2210.06171 (2022).
  38. [38] Chaoyu Liu et al. "Enhancing Fourier neural operators with local spatial features". In: arXiv preprint arXiv:2503.17797 (2025).
  39. [39] Miguel Liu-Schiaffini et al. "Neural operators with localized integral and differential kernels". In: arXiv preprint arXiv:2402.16845 (2024).
  40. [40] Stephane Mallat. A Wavelet Tour of Signal Processing. Academic Press, 1999.
  41. [41] Vishal Monga, Yuelong Li, and Yonina C. Eldar. "Algorithm unrolling: interpretable, efficient deep learning for signal and image processing". In: arXiv preprint arXiv:1912.10557 (2019).
  42. [42] Meghdoot Mozumder et al. "A model-based iterative learning approach for diffuse optical tomography". In: IEEE Transactions on Medical Imaging 41.5 (2021), pp. 1289–1299.
  43. [43] Frank Natterer and Frank Wübbeling. Mathematical Methods in Image Reconstruction. SIAM, 2001.
  44. [44] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. 2nd ed. Springer, 2006. isbn: 978-0387-30303-1.
  45. [45] Jean-Christophe Pesquet et al. "Learning maximally monotone operators for image recovery". In: SIAM Journal on Imaging Sciences 14.3 (2021), pp. 1206–1237.
  46. [46] Posiva Oy. YJH-2024: Olkiluodon ja Loviisan ydinlaitosten ydinjätehuollon ohjelma vuosille 2025– (in Finnish).
  47. [47] Posiva public reports and publications: https://www.posiva.fi/material/sites/posivaraportit/20240210-1210-H4n8OESC1/yvbmsegfm/YJH-2024-ohjelma_web.pdf (in Finnish), referenced 5.9.2025. 2024.
  48. [48] Isabeau Prémont-Schwarz, Jaroslav Vítků, and Jan Feyereisl. "A simple guard for learned optimizers". In: arXiv preprint arXiv:2201.12426 (2022).
  49. [49] Bogdan Raonic et al. "Convolutional neural operators for robust and accurate learning of PDEs". In: Advances in Neural Information Processing Systems 36 (2023), pp. 77187–77200.
  50. [50] Ernest Ryu et al. "Plug-and-play methods provably converge with properly trained denoisers". In: International Conference on Machine Learning. PMLR, 2019, pp. 5546–5557.
  51. [51] Plamen Stefanov. "The identification problem for the attenuated X-ray transform". In: American Journal of Mathematics 136.5 (2014), pp. 1215–1247.
  52. [52] Harold S. Stone. "Convolution theorems for linear transforms". In: IEEE Transactions on Signal Processing 46.10 (1998), pp. 2819–2821.
  53. [53] Makoto Takamoto et al. "PDEBench: An extensive benchmark for scientific machine learning". In: Advances in Neural Information Processing Systems 35 (2022), pp. 1596–1611.
  54. [54] Tapas Tripura and Souvik Chakraborty. "Wavelet neural operator for solving parametric partial differential equations in computational mechanics problems". In: Computer Methods in Applied Mechanics and Engineering 404 (2023), p. 115783.
  55. [55] Tapas Tripura and Souvik Chakraborty. Wavelet-Neural-Operator (WNO), v2.0.0. GitHub repository: https://github.com/TapasTripura/WNO. Referenced 22.10.2025.
  56. [56] Singanallur V. Venkatakrishnan, Charles A. Bouman, and Brendt Wohlberg. "Plug-and-play priors for model based reconstruction". In: 2013 IEEE Global Conference on Signal and Information Processing. IEEE, 2013, pp. 945–948.
  57. [57] Riina Virta. "Gamma tomography of spent nuclear fuel for geological repository safeguards". PhD thesis, University of Helsinki, 2024, p. 60. http://hdl.handle.net/10138/575149.
  58. [58] Riina Virta et al. "Fuel rod classification from passive gamma emission tomography (PGET) of spent nuclear fuel assemblies". In: ESARDA Bulletin 61 (2020), pp. 10–21.
  59. [59] Riina Virta et al. "Improved Passive Gamma Emission Tomography image quality in the central region of spent nuclear fuel". In: Scientific Reports 12.1 (2022), p. 12473.
  60. [60] Timothy White et al. "Application of passive gamma emission tomography (PGET) for the verification of spent nuclear fuel". In: INMM 59th Annual Meeting, Baltimore, Maryland, USA, 2018.
  61. [61] Timothy White et al. "Verification of spent nuclear fuel using passive gamma emission tomography (PGET)". In: IAEA Symposium on International Safeguards, Book of Abstracts, IAEA-CN-267, 2019, p. 198.
  62. [62] Aston Zhang et al. Dive into Deep Learning. Cambridge University Press, 2023. https://D2L.ai.
  63. [63] Xia Zhao et al. "A review of convolutional neural networks in computer vision". In: Artificial Intelligence Review 57.4 (2024), p. 99.