Robust Model-Based Iteration for Passive Gamma Emission Tomography
Pith reviewed 2026-05-07 12:35 UTC · model grok-4.3
The pith
A safeguarded hybrid algorithm reaches standard-quality passive gamma tomography reconstructions in roughly one third of the usual iteration count.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The proposed robust model-based iteration combines the Levenberg-Marquardt algorithm with a learned Deep Gauss-Newton step under a trust-region safeguard, allowing the solver to reach the accuracy of standard LM in approximately one third as many iterations for PGET reconstructions while preserving convergence to a critical point.
What carries the argument
The hybrid iteration scheme, in which a learned operator (a CNN, FNO, or WNO) proposes a refined update that is accepted only if it satisfies a trust-region model condition on the regularized objective.
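The acceptance logic described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `objective`, `lm_step`, and `learned_step` are hypothetical stand-ins for the regularized objective, the deterministic Levenberg-Marquardt update, and the learned refinement.

```python
def safeguarded_step(x, objective, lm_step, learned_step):
    """One hybrid iteration: try the learned update, fall back to LM.

    The learned step is accepted only if it decreases the regularized
    objective at least as much as the deterministic LM step would, so
    the hybrid iterate is never worse than plain LM.
    """
    d_lm = lm_step(x)              # deterministic LM update proposal
    d_net = learned_step(x, d_lm)  # learned operator refines the proposal
    f = objective(x)
    if objective(x + d_net) <= objective(x + d_lm) <= f:
        return x + d_net, "accepted"   # learned step passes the safeguard
    return x + d_lm, "rejected"        # safeguard falls back to the LM step
```

Because every rejected proposal reverts to the plain LM update, the descent property of the underlying iteration is preserved regardless of how well the learned operator generalizes.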
Load-bearing premise
The learned operator generalizes sufficiently from the small set of coarsely simulated 9x9 assemblies to real and out-of-distribution measurements.
What would settle it
If experiments on new real PGET data show that the hybrid method requires as many or more iterations than LM or fails to converge for some cases, the acceleration claim would not hold.
read the original abstract
Passive Gamma Emission Tomography (PGET) is an IAEA-approved technique for verifying spent nuclear fuel assemblies prior to geological disposal. Reconstructing the emission and attenuation maps from PGET measurements is a nonlinear ill-posed inverse problem, currently solved with a Levenberg-Marquardt (LM) scheme that requires 10-20 iterations to achieve sufficient accuracy. We propose an accelerated iterative solver that combines the LM algorithm with a Deep Gauss-Newton step, in which a learned operator refines the update proposed by the deterministic algorithm at each iteration. A safeguard condition based on the trust-region model ensures that the accelerated iterates perform no worse than LM and retain convergence to a critical point of the regularized objective. Within this framework we compare three architectures for the learned component: an encoder-decoder-style convolutional neural network, Fourier Neural Operators, and Wavelet Neural Operators. Each is trained on a small set of coarsely simulated 9x9 assemblies. Experiments on simulated and real measurements from Finnish nuclear power plants show that the proposed scheme reaches LM-quality reconstructions in roughly one third of the iterations, while revealing architecture-dependent trade-offs in robustness against out-of-distribution inputs.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper claims to introduce a safeguarded hybrid Levenberg-Marquardt and Deep Gauss-Newton iteration for PGET reconstruction. A learned operator (implemented via CNN, FNO or WNO) accelerates the LM updates, with a trust-region safeguard ensuring the hybrid method performs no worse than standard LM and converges to a critical point of the objective. Trained on small sets of simulated 9x9 assemblies, the method is tested on simulated and real data from Finnish nuclear plants, reportedly achieving equivalent reconstruction quality in about one-third the iterations, with architecture-specific robustness properties.
Significance. If validated, the approach could substantially accelerate PGET-based verification of spent nuclear fuel, a key IAEA safeguard technique, by reducing iteration counts while maintaining reliability through the safeguard. The explicit comparison of convolutional, Fourier, and wavelet neural operators highlights practical trade-offs in applying learned methods to this inverse problem. Strengths include the use of real measurement data and the focus on robustness via the model-based safeguard. This contributes to the growing field of hybrid physics-informed and data-driven solvers for ill-posed problems. The result, if the generalization holds, has potential for broader application in similar tomography settings.
major comments (2)
- [Results and Experiments] The abstract and results claim that the proposed scheme reaches LM-quality reconstructions in roughly one third of the iterations on real data, but no quantitative metrics (e.g., error norms, exact iteration numbers, or statistical measures) are supplied to support this. Additionally, details on the safeguard activation rate or convergence criteria are missing, undermining the ability to verify the speedup and robustness claims. This directly impacts the central experimental contribution.
- [§3 (Method and Safeguard)] The convergence guarantee for the safeguarded iteration relies on the learned operator generalizing from the small training set of coarsely simulated assemblies to real PGET measurements. No specific analysis, bounds, or empirical statistics on out-of-distribution performance (such as step acceptance rates on real data) are provided to support that the safeguard prevents degradation or preserves convergence properties when generalization is imperfect. This is a load-bearing assumption for the robustness claim.
minor comments (2)
- [Abstract] The phrase 'roughly one third' is vague; referencing specific figures or tables with precise ratios would improve clarity.
- [Notation] The description of the Deep Gauss-Newton step would benefit from an explicit equation defining the learned operator's input and output.
Simulated Author's Rebuttal
We thank the referee for their constructive review and positive assessment of the significance of our work. We agree that the experimental validation can be strengthened with additional quantitative details and will revise the manuscript to address both major comments.
read point-by-point responses
-
Referee: [Results and Experiments] The abstract and results claim that the proposed scheme reaches LM-quality reconstructions in roughly one third of the iterations on real data, but no quantitative metrics (e.g., error norms, exact iteration numbers, or statistical measures) are supplied to support this. Additionally, details on the safeguard activation rate or convergence criteria are missing, undermining the ability to verify the speedup and robustness claims. This directly impacts the central experimental contribution.
Authors: We agree that explicit quantitative metrics would allow direct verification of the speedup claim. Although the manuscript presents comparative results via figures for both simulated and real data, it does not include a summary table of exact iteration counts to convergence, error norms, or safeguard statistics. In the revision we will add a table for the real-data experiments reporting average iterations required to reach the convergence tolerance for standard LM and each hybrid variant, final objective values, and the observed frequency of safeguard activations. We will also state the precise convergence criterion employed (relative change in the objective below a fixed threshold). This will substantiate the one-third iteration claim with verifiable numbers. revision: yes
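A stopping rule of the kind the authors promise to state (relative change in the objective below a fixed threshold) can be written down concretely; the tolerance value here is an arbitrary placeholder, not the paper's.

```python
def converged(f_prev, f_curr, tol=1e-4):
    """Stop when the relative change in the objective falls below tol.

    The max(..., 1.0) guard keeps the criterion meaningful when the
    objective value is close to zero.
    """
    return abs(f_prev - f_curr) <= tol * max(abs(f_prev), 1.0)
```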
-
Referee: [§3 (Method and Safeguard)] The convergence guarantee for the safeguarded iteration relies on the learned operator generalizing from the small training set of coarsely simulated assemblies to real PGET measurements. No specific analysis, bounds, or empirical statistics on out-of-distribution performance (such as step acceptance rates on real data) are provided to support that the safeguard prevents degradation or preserves convergence properties when generalization is imperfect. This is a load-bearing assumption for the robustness claim.
Authors: The convergence argument is model-based and holds because the trust-region safeguard only accepts a learned step when it produces at least as much objective decrease as the corresponding LM step; rejected steps fall back to the standard LM update, preserving the descent property irrespective of generalization quality. We acknowledge that the current manuscript provides no explicit statistics on acceptance rates for the real (out-of-distribution) measurements. In the revision we will add these empirical statistics, reporting for each architecture the fraction of iterations on the Finnish real-data sets in which the learned step was accepted versus rejected. This will quantify the safeguard's practical role when generalization is imperfect. We do not claim theoretical generalization bounds, but the added acceptance-rate data will directly support the robustness claim. revision: yes
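The acceptance-rate statistic promised in the revision is straightforward to log. A minimal sketch, assuming a per-iteration list of safeguard decisions is recorded during reconstruction:

```python
def acceptance_rate(decisions):
    """Fraction of iterations in which the learned step passed the safeguard.

    `decisions` is a sequence of 'accepted'/'rejected' labels, one per
    hybrid iteration; an empty run yields 0.0 by convention.
    """
    if not decisions:
        return 0.0
    return sum(d == "accepted" for d in decisions) / len(decisions)
```

Reporting this per architecture on the real-data sets would directly quantify how often the safeguard had to fall back to the plain LM update.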
Circularity Check
No circularity; hybrid scheme rests on independent training and standard trust-region safeguards
full rationale
The paper introduces a hybrid LM + learned Deep Gauss-Newton iteration with a trust-region safeguard to guarantee that accelerated steps are never worse than pure LM and retain convergence to a critical point. The learned operators (CNN, FNO, WNO) are trained separately on a small set of coarsely simulated assemblies and then applied to both simulated and real PGET data. No step in the claimed chain reduces by definition or self-citation to the target reconstruction result; the convergence claim invokes standard LM properties rather than a fitted parameter renamed as prediction. Experimental speed-up claims are empirical outcomes, not tautological. This is the normal case of a self-contained algorithmic proposal.
Axiom & Free-Parameter Ledger
free parameters (1)
- neural network weights and biases
axioms (2)
- standard math Levenberg-Marquardt iteration converges to a critical point of the regularized objective
- domain assumption Trust-region model provides a reliable safeguard for the learned update
invented entities (1)
- Deep Gauss-Newton step realized by a learned operator (no independent evidence)
Reference graph
Works this paper leans on
- [1] Marcin Andrychowicz et al. "Learning to learn by gradient descent by gradient descent". In: Advances in Neural Information Processing Systems 29 (2016).
- [2] Jason Ansel et al. "PyTorch 2: Faster Machine Learning Through Dynamic Python Bytecode Transformation and Graph Compilation". In: Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2. ASPLOS '24. La Jolla, CA, USA: Association for Computing Machinery, 2024, pp. 929–9...
- [3] Rasmus Backholm et al. "Simultaneous reconstruction of emission and attenuation in passive gamma emission tomography of spent nuclear fuel". In: Inverse Problems & Imaging 14.2 (2020).
- [4] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation". In: IEEE Transactions on Pattern Analysis and Machine Intelligence 39.12 (2017), pp. 2481–2495. doi: 10.1109/TPAMI.2016.2644615.
- [5] Camille Bélanger-Champagne et al. "Effect of gamma-ray energy on image quality in passive gamma emission tomography of spent nuclear fuel". In: IEEE Transactions on Nuclear Science 66.1 (2018), pp. 487–496.
- [6] Silvia Bonettini, Riccardo Zanella, and Luca Zanni. "A scaled gradient projection method for constrained image deblurring". In: Inverse Problems 25.1 (2008), p. 015002.
- [7] Kristian Bredies, Jonathan Chirinos-Rodriguez, and Emanuele Naldi. "Learning firmly nonexpansive operators". In: arXiv preprint arXiv:2407.14156 (2024).
- [8] Nicola Cavallini et al. "Vanquishing the computational cost of passive gamma emission tomography simulations leveraging physics-aware reduced order modeling". In: Scientific Reports 13.1 (2023), p. 15034.
- [9] Tianlong Chen et al. "Learning to optimize: A primer and a benchmark". In: Journal of Machine Learning Research 23.189 (2022), pp. 1–59.
- [10] Haocheng Dai et al. "Neural operator learning for ultrasound tomography inversion". In: arXiv preprint arXiv:2304.03297 (2023).
- [11] Volker Dicken. "A new approach towards simultaneous activity and attenuation reconstruction in emission tomography". In: Inverse Problems 15.4 (1999), pp. 931–960.
- [12] Heinz Werner Engl, Martin Hanke, and A. Neubauer. Regularization of Inverse Problems. 1st ed. Mathematics and Its Applications. Dordrecht, The Netherlands: Kluwer Academic Publishers, 1996. isbn: 978-0-7923-4157-4.
- [13] Erin Miller et al. "Assessing instrument performance for passive gamma emission tomography of spent fuel". In: INMM 59th Annual Meeting, Advanced Nondestructive Assay Techniques for Fuel Assemblies. 2018.
- [14] Patrick Fahy, Mohammad Golbabaee, and Matthias J. Ehrhardt. "Greedy Learning to Optimize with Convergence Guarantees". In: arXiv preprint arXiv:2406.00260 (2024).
- [15] Jean-Baptiste Fest et al. "On a fixed-point continuation method for a convex optimization problem". In: INdAM Workshop: Advanced Techniques in Optimization for Machine Learning and Imaging. Springer, 2022, pp. 15–30.
- [16] Donald Goldfarb and Shiqian Ma. "Convergence of Fixed-Point Continuation Algorithms for Matrix Rank Minimization". In: Foundations of Computational Mathematics 11.2 (Feb. 2011), pp. 183–210. doi: 10.1007/s10208-011-9084-6.
- [17] Rafael C. Gonzalez and Richard E. Woods. Digital Image Processing. 4th ed. Pearson Education, 2018. isbn: 978-0-13-335672-4.
- [18] Elaine Hale, Wotao Yin, and Yin Zhang. "Fixed-Point Continuation for ℓ1-Minimization: Methodology and Convergence". In: SIAM Journal on Optimization 19.3 (Jan. 2008), pp. 1107–1130. doi: 10.1137/070698920.
- [19] Martin Hanke. "The regularizing Levenberg-Marquardt scheme is of optimal order". In: The Journal of Integral Equations and Applications (2010), pp. 259–283.
- [20] Andreas Hauptmann et al. "Model-based learning for accelerated, limited-view 3-D photoacoustic tomography". In: IEEE Transactions on Medical Imaging 37.6 (2018), pp. 1382–1393.
- [21] Howard Heaton et al. "Safeguarded learned convex optimization". In: Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. 6. 2023, pp. 7848–7855.
- [22] Sara Heikkinen. "Model-based imaging of spent nuclear fuel with passive gamma emission tomography". Master's thesis. LUT University, 2024. https://urn.fi/URN:NBN:fi-fe2024112797098.
- [23] Dan Hendrycks and Kevin Gimpel. "Gaussian Error Linear Units (GELUs)". In: arXiv preprint arXiv:1606.08415 (2016).
- [24] William Herzberg et al. "Graph convolutional networks for model-based learning in nonlinear inverse problems". In: IEEE Transactions on Computational Imaging 7 (2021), pp. 1341–1353.
- [25] Tapani Honkamaa et al. "A prototype for passive gamma emission tomography". In: Proceedings of Symposium on International Safeguards. 2014.
- [26] Samira Kabri et al. "Resolution-invariant image classification based on Fourier neural operators". In: International Conference on Scale Space and Variational Methods in Computer Vision. Springer, 2023, pp. 236–249.
- [27] Ulugbek S. Kamilov et al. "Plug-and-play methods for integrating physical and learned models in computational imaging: Theory, algorithms, and applications". In: IEEE Signal Processing Magazine 40.1 (2023), pp. 85–97.
- [28] Carl T. Kelley. Iterative Methods for Optimization. Philadelphia, PA, USA: SIAM, 1999.
- [29] Nikhil Ketkar and Jojo Moolayil. Deep Learning with Python: Learn Best Practices of Deep Learning Models with PyTorch. 2nd ed. Apress Media, 2021. isbn: 978-1-4842-5364-9.
- [30] Kirsten Koolstra and Rob Remis. "Learning a preconditioner to accelerate compressed sensing reconstructions in MRI". In: Magnetic Resonance in Medicine 87.4 (2022), pp. 2063–2073.
- [31] Jean Kossaifi et al. A Library for Learning Neural Operators. 2024.
- [32] Nikola Kovachki et al. "Neural operator: learning maps between function spaces with applications to PDEs". In: Journal of Machine Learning Research 24.1 (Jan. 2023). issn: 1532-4435.
- [33] Samuel Lanthaler, Zongyi Li, and Andrew M. Stuart. "Nonlocality and nonlinearity implies universality in operator learning". In: Constructive Approximation (2025), pp. 1–43.
- [34] Y. LeCun et al. "Backpropagation Applied to Handwritten Zip Code Recognition". In: Neural Computation 1.4 (1989), pp. 541–551. issn: 0899-7667.
- [35] Yichen Li et al. "Learning preconditioners for conjugate gradient PDE solvers". In: International Conference on Machine Learning. PMLR, 2023, pp. 19425–19439.
- [36] Zongyi Li et al. "Fourier Neural Operator for Parametric Partial Differential Equations". In: arXiv preprint arXiv:2010.08895 (2020). Published as a conference paper at ICLR 2021.
- [37] Isaac Liao et al. "Learning to optimize quasi-Newton methods". In: arXiv preprint arXiv:2210.06171 (2022).
- [38] Chaoyu Liu et al. "Enhancing Fourier neural operators with local spatial features". In: arXiv preprint arXiv:2503.17797 (2025).
- [39] Miguel Liu-Schiaffini et al. "Neural operators with localized integral and differential kernels". In: arXiv preprint arXiv:2402.16845 (2024).
- [40] Stephane Mallat. A Wavelet Tour of Signal Processing. Burlington, MA, USA: Academic Press, 1999.
- [41] Vishal Monga, Yuelong Li, and Yonina C. Eldar. "Algorithm unrolling: interpretable, efficient deep learning for signal and image processing". In: arXiv preprint arXiv:1912.10557 (2019).
- [42] Meghdoot Mozumder et al. "A model-based iterative learning approach for diffuse optical tomography". In: IEEE Transactions on Medical Imaging 41.5 (2021), pp. 1289–1299.
- [43] Frank Natterer and Frank Wübbeling. Mathematical Methods in Image Reconstruction. Philadelphia, PA, USA: SIAM, 2001.
- [44] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. 2nd ed. New York, NY, USA: Springer, 2006. isbn: 978-0387-30303-1.
- [45] Jean-Christophe Pesquet et al. "Learning maximally monotone operators for image recovery". In: SIAM Journal on Imaging Sciences 14.3 (2021), pp. 1206–1237.
- [46] Posiva Oy. YJH-2024: Olkiluodon ja Loviisan ydinlaitosten ydinjätehuollon ohjelma vuosille 2025–. 2024.
- [47] Posiva public reports and publications: https://www.posiva.fi/material/sites/posivaraportit/20240210-1210-H4n8OESC1/yvbmsegfm/YJH-2024-ohjelma_web.pdf. (In Finnish), referenced 5.9.2025. 2024.
- [48] Isabeau Prémont-Schwarz, Jaroslav Vítků, and Jan Feyereisl. "A simple guard for learned optimizers". In: arXiv preprint arXiv:2201.12426 (2022).
- [49] Bogdan Raonic et al. "Convolutional neural operators for robust and accurate learning of PDEs". In: Advances in Neural Information Processing Systems 36 (2023), pp. 77187–77200.
- [50] Ernest Ryu et al. "Plug-and-play methods provably converge with properly trained denoisers". In: International Conference on Machine Learning. PMLR, 2019, pp. 5546–5557.
- [51] Plamen Stefanov. "The identification problem for the attenuated X-ray transform". In: American Journal of Mathematics 136.5 (2014), pp. 1215–1247.
- [52] Harold S. Stone. "Convolution theorems for linear transforms". In: IEEE Transactions on Signal Processing 46.10 (1998), pp. 2819–2821.
- [53] Makoto Takamoto et al. "PDEBench: An extensive benchmark for scientific machine learning". In: Advances in Neural Information Processing Systems 35 (2022), pp. 1596–1611.
- [54] Tapas Tripura and Souvik Chakraborty. "Wavelet neural operator for solving parametric partial differential equations in computational mechanics problems". In: Computer Methods in Applied Mechanics and Engineering 404 (2023), p. 115783.
- [55] Tapas Tripura and Souvik Chakraborty. Wavelet-Neural-Operator (WNO). GitHub repository: https://github.com/TapasTripura/WNO (v2.0.0). Referenced 22.10.2025.
- [56] Singanallur V. Venkatakrishnan, Charles A. Bouman, and Brendt Wohlberg. "Plug-and-play priors for model based reconstruction". In: 2013 IEEE Global Conference on Signal and Information Processing. IEEE, 2013, pp. 945–948.
- [57] Riina Virta. "Gamma tomography of spent nuclear fuel for geological repository safeguards". PhD thesis. University of Helsinki, 2024, p. 60. http://hdl.handle.net/10138/575149.
- [58] Riina Virta et al. "Fuel rod classification from passive gamma emission tomography (PGET) of spent nuclear fuel assemblies". In: ESARDA Bulletin 2020.61 (2020), pp. 10–21.
- [59] Riina Virta et al. "Improved Passive Gamma Emission Tomography image quality in the central region of spent nuclear fuel". In: Scientific Reports 12.1 (2022), p. 12473.
- [60] Timothy White et al. "Application of passive gamma emission tomography (PGET) for the verification of spent nuclear fuel". In: INMM 59th Annual Meeting, Baltimore, Maryland, USA. 2018.
- [61] Timothy White et al. "Verification of spent nuclear fuel using passive gamma emission tomography (PGET)". In: IAEA Symposium on International Safeguards, Book of Abstracts. IAEA-CN-267. 2019, p. 198.
- [62] Aston Zhang et al. Dive into Deep Learning. https://D2L.ai. Cambridge, England: Cambridge University Press, 2023.
- [63] Xia Zhao et al. "A review of convolutional neural networks in computer vision". In: Artificial Intelligence Review 57.4 (2024), p. 99.