Pith · machine review for the scientific record

arxiv: 2605.13988 · v1 · submitted 2026-05-13 · 💻 cs.LG · quant-ph

Recognition: no theorem link

Neural Fields for NV-Center Inverse Sensing

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 05:55 UTC · model grok-4.3

classification: 💻 cs.LG · quant-ph
keywords: neural fields · inverse problems · NV centers · quantum sensing · dipolar operators · sparse reconstruction · magnetic noise

The pith

A coordinate neural field recovers sparse spin sources from NV-center magnetic noise by coupling to a tensor dipolar forward model.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Inverse problems in NV-center sensing require reconstructing sparse magnetic sources from quantum noise measurements, but common scalar approximations in the forward model distort the landscape and trigger center-collapse during free-density optimization. The paper replaces that approximation with a tensor power-summed dipolar operator that more faithfully represents the physics. It then introduces NeTMY, an amortization-free coordinate neural field equipped with annealed positional encoding, multiscale optimization, sparsity gating, and spectrum-fidelity losses, and shows that this combination yields the best localization and distributional scores on sparse synthetic test cases generated by the corrected operator. Mechanism studies indicate the neural parameterization smooths and redistributes gradient updates, avoiding the pathology seen in direct density optimization.

Core claim

NeTMY is an amortization-free coordinate neural field coupled to the differentiable NV forward model, using annealed positional encoding, multiscale optimization, sparsity/gating, and spectrum-fidelity losses. When the forward model is upgraded to a tensor power-summed dipolar operator, NeTMY produces the best localization and distributional metrics across the tested sparse synthetic reconstructions. Its parameterization smooths updates and thereby mitigates the center-collapse failure mode that appears in raw density-space optimization.

What carries the argument

NeTMY, a coordinate neural field with annealed positional encoding and multiscale optimization, coupled to the tensor power-summed dipolar NV forward model for density reconstruction.
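The paper does not publish code, so the pattern it names can only be sketched. Below is a minimal numpy toy of a coordinate field with BARF-style annealed Fourier features: the annealing parameter alpha gates frequencies coarse-to-fine, and a softplus head keeps the recovered density non-negative. All names, sizes, and the annealing window here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def annealed_fourier_features(x, n_freq=6, alpha=1.0):
    """Fourier positional encoding with a coarse-to-fine annealing
    window; alpha in [0, n_freq] gates how many frequencies are active."""
    k = np.arange(n_freq)
    # cosine window: frequency k is fully on once alpha exceeds k + 1
    w = np.clip(alpha - k, 0.0, 1.0)
    w = 0.5 * (1.0 - np.cos(np.pi * w))
    ang = (2.0 ** k)[None, :] * np.pi * x[:, None]          # (N, n_freq)
    return np.concatenate([w * np.sin(ang), w * np.cos(ang)], axis=1)

class CoordinateField:
    """Tiny MLP rho_theta(x): coordinates -> non-negative density."""
    def __init__(self, n_freq=6, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        d_in = 2 * n_freq
        self.W1 = rng.normal(0, 1 / np.sqrt(d_in), (d_in, hidden))
        self.W2 = rng.normal(0, 1 / np.sqrt(hidden), (hidden, 1))
        self.n_freq = n_freq

    def __call__(self, x, alpha=1.0):
        h = np.tanh(annealed_fourier_features(x, self.n_freq, alpha) @ self.W1)
        return np.logaddexp(0.0, h @ self.W2)[:, 0]  # softplus keeps rho >= 0

field = CoordinateField()
x = np.linspace(0.0, 1.0, 5)
rho_coarse = field(x, alpha=1.0)   # only the lowest frequency active
rho_fine = field(x, alpha=6.0)     # all frequencies active
```

At alpha = 0 every frequency is gated off and the field is constant; raising alpha during optimization admits progressively finer structure, which is the coarse-to-fine schedule the pith describes.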

If this is right

  • Upgrading the forward model from scalar to tensor dipolar changes the inverse landscape and reveals a center-collapse mode in direct density optimization.
  • NeTMY achieves the best localization and distributional metrics among tested methods on sparse synthetic data generated by the corrected operator.
  • The neural-field parameterization smooths and redistributes gradient updates, avoiding the raw density-space pathology.
  • NV quantum sensing becomes a practical testbed for physics-faithful neural inverse problems.
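The first bullet, the operator upgrade, can be illustrated with a toy contrast between the two operator classes (unit prefactors, a single NV sensor; an assumption-laden sketch, not the paper's F1/F2 operators). A scalar/coherent model sums one field component across sources and then squares, so sources can interfere; a tensor power-summed model adds each source's power over all dipolar tensor components, so contributions are strictly additive.

```python
import numpy as np

def dipolar_tensor(r_src, r_nv):
    """Toy unit-prefactor dipolar coupling tensor between a point
    source and the NV sensor (an assumption, not the paper's kernel)."""
    d = r_nv - r_src
    n = np.linalg.norm(d)
    rhat = d / n
    return (3.0 * np.outer(rhat, rhat) - np.eye(3)) / n ** 3

def forward_scalar_coherent(rho, sources, r_nv):
    """F1-style toy: coherently sum one scalar field component over
    sources, then square; sources can interfere."""
    b = sum(w * dipolar_tensor(s, r_nv)[2, 2] for w, s in zip(rho, sources))
    return b ** 2

def forward_tensor_power(rho, sources, r_nv):
    """F2-style toy: power-sum over all tensor components per source;
    contributions are strictly additive, with no interference."""
    return sum(w * np.sum(dipolar_tensor(s, r_nv) ** 2)
               for w, s in zip(rho, sources))

r_nv = np.array([0.0, 0.0, 1.0])
sources = [np.array([0.5, 0.0, 0.0]), np.array([-0.5, 0.0, 0.0])]
rho = [1.0, 1.0]

g1 = forward_scalar_coherent(rho, sources, r_nv)   # cross terms present
g2 = forward_tensor_power(rho, sources, r_nv)      # additive per source
```

Squaring a coherent sum creates cross terms between sources that the incoherent power-summed signal does not have; that the resulting landscapes differ is a toy intuition for the bullet above, not the paper's analysis.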

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same neural-field-plus-differentiable-physics pattern could be applied to other nonlinear quantum sensing modalities such as atomic magnetometers or superconducting qubit arrays.
  • If the neural field can be evaluated and differentiated at high speed, the method could support real-time or adaptive sensing protocols.
  • Transfer from the current synthetic benchmarks to laboratory data will require explicit handling of calibration drift and sensor-specific noise correlations.

Load-bearing premise

The tensor power-summed dipolar operator accurately represents real NV-center physics, and the synthetic data distributions capture experimental challenges, including noise and model mismatch.

What would settle it

Running free-density optimization and NeTMY on the same experimental NV-center measurements and checking whether center-collapse appears in the density-based result but not in the neural-field result.
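The smoothing mechanism invoked here, that the parameterization filters the raw density-space gradient through the kernel G_θ = J_θ J_θ^⊤ (the paper's Lemma 2), reduces to an exact identity for a linear-in-parameters toy field. The linear field below is our simplification, not the paper's network:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_param, eta = 64, 8, 0.1

# Linear toy field rho(theta) = Phi @ theta, so the Jacobian J_theta = Phi.
Phi = rng.normal(size=(n_pix, n_param))
theta = rng.normal(size=n_param)

# Raw density-space gradient: a single spike at the center pixel.
g = np.zeros(n_pix)
g[n_pix // 2] = 1.0

# One gradient step on theta (chain rule: dL/dtheta = J^T g), pushed
# forward to image space...
rho0 = Phi @ theta
theta1 = theta - eta * Phi.T @ g
delta_rho = Phi @ theta1 - rho0

# ...equals filtering the raw gradient through G = J J^T: the center
# spike is redistributed across pixels instead of executed pointwise.
G = Phi @ Phi.T
assert np.allclose(delta_rho, -eta * G @ g)
```

In the nonlinear case the identity holds only to first order per step, but the same kernel governs the realized update, which is what the paper's mechanism experiments probe empirically.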

Figures

Figures reproduced from arXiv: 2605.13988 by Christine Allen-Blanchette, Nathalie P. de Leon, Tao Zhong, Yixun Hu, Zhixuan Zhao.

Figure 1. NV relaxometry maps sparse fluctuating spins to a noisy frequency-domain measurement.

Figure 2. NeTMY pipeline. Coordinates pass through annealed Fourier features and a coordinate …

Figure 3. Qualitative reconstructions on two F3-generated samples inverted under F2. Free-density solvers center-collapse; HybridNeTMY and NeTMY preserve off-center structure. (a) Epoch-200 reconstructions on a representative many/close sample. (b) Energy barrier along the interpolation parameter t (collapse → GT), barrier h = 1.12, under full physics (F2) and simple physics (F1).

Figure 4. Optimization-geometry diagnostics. (a) Epoch-200 reconstructions. (b) Loss along …

Figure 5. Loss landscape on α-RuCl3: F2 (left) is well-conditioned, F1 (right) is a degenerate valley. (i) Depth-amplitude: solving Γ_Ru^pred(d) = Γ_Ru^meas for the implied NV depth d⋆ at fixed crystallographic density and moment puts 1/8 NVs in [9, 20] nm under F2 vs 0/8 under F1, with point amplitude ratio Γ_F1/Γ_F2 ≈ 210× at d = 14.5 nm; the F1 inversion has no real solution for 7 of the 8 NVs. (ii) Hessian co…

Figure 6. Verification of the Green kernel in the physical solver …

Figure 7. Reproduction of the relationship between magnetic noise and …
Figure 8. The reconstruction captures the broad support of the active region but develops strong …

Figure 9. High-frequency leakage on a medium/medium sample. The predicted density spreads …

Figure 10. Realized first-step image-space update |Δρ| produced by NeTMY, where Δρ = ρ1 − ρ0. The surface does not exhibit a singular center spike, in contrast to the iter-0 free-density gradient under the same forward operator (Section 5.3). This is the empirical signature of the Lemma 2 filtering kernel G_θ = J_θ J_θ^⊤ redistributing the raw density-space gradient through the parameterization.

Figure 11. Cross-fidelity sample 008 (F3-generated). Reconstructions under F1 and F2 inversion. Free-density solvers (Tikhonov, ADMM) center-collapse under the more faithful F2, while NeTMY recovers the off-center support; GaussianSplat sits between the two regimes. See Section 5.2 for aggregate metrics.

Figure 12. Cross-fidelity sample 009 (F3-generated). Same layout as …

Figure 13. Cross-fidelity sample 010 (F3-generated). A many/close scene where the (P4) merging pathology and the (P2)+(P3) center-collapse pathology jointly stress the free-density solvers; NeTMY preserves the most peaks within the matching radius of Appendix E.3.

Figure 14. Cross-fidelity sample 018 (F3-generated). A medium/medium scene; GaussianSplat is competitive on Hungarian F1 here while NeTMY remains best on Sliced-Wasserstein, consistent with the table-level rankings of Section 5.2.

Figure 15. Matched-operator F1/F1 examples (samples 012, 016, 022, 029). Under the simplified scalar/coherent operator there is no centering pathology, and ADMM is the lowest-MSE method …

Figure 16. Matched-operator F2/F2 examples (samples 012, 016, 022, 029). Under the tensor power-summed operator the same ADMM run degrades to a centered collapse, while NeTMY produces the cleanest off-center reconstructions and tops every F2/F2 metric in …
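The Hungarian-F1 and Sliced-Wasserstein metrics cited in the captions above can be sketched as follows. These are toy conventions: the matching radius and projection count are assumptions, not the paper's Appendix E.3 settings.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_f1(pred, gt, radius=0.1):
    """F1 score after optimal one-to-one peak matching (Hungarian
    algorithm); a pair counts as a true positive if within `radius`."""
    cost = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    tp = int(np.sum(cost[rows, cols] <= radius))
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gt)
    return 2 * precision * recall / (precision + recall)

def sliced_wasserstein(a, b, n_proj=64, seed=0):
    """Mean 1-D Wasserstein distance over random projections (assumes
    equal-size point sets, so sorted projections align one-to-one)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        v = rng.normal(size=a.shape[1])
        v /= np.linalg.norm(v)
        total += np.mean(np.abs(np.sort(a @ v) - np.sort(b @ v)))
    return total / n_proj
```

For identical point sets both metrics are perfect (F1 = 1, distance 0); pushing every predicted peak outside the matching radius drives F1 to 0 while Sliced-Wasserstein degrades smoothly, which is why the page reports both a localization and a distributional score.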
Original abstract

Inverse problems in scientific sensing are often solved with either hand-designed regularizers or supervised networks trained on simulated labels, yet both can fail when the forward model is nonlinear, spectrally coupled, and physically delicate. We study this issue for noise sensing based on nitrogen-vacancy (NV) centers in diamond, where a quantum sensor measures magnetic-noise spectra generated by sparse spin sources. We show that replacing a common scalar/coherent forward approximation with a tensor power-summed dipolar operator changes the inverse landscape and exposes a center-collapse failure mode in free-density optimization. We propose NeTMY, an amortization-free coordinate neural field coupled to the differentiable NV forward model, with annealed positional encoding, multiscale optimization, sparsity/gating, and spectrum-fidelity losses. Across sparse synthetic reconstructions generated by the corrected operator, NeTMY achieves the best localization and distributional metrics in the tested benchmark. Mechanism experiments show that NeTMY does not directly execute the raw density-space gradient; its parameterization smooths and redistributes updates, mitigating the center-collapse pathology. These results position NV quantum sensing as a useful testbed for physics-faithful neural inverse problems.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper proposes NeTMY, an amortization-free coordinate neural field for inverse sensing of sparse spin sources via NV-center magnetic noise measurements. It replaces scalar/coherent forward approximations with a tensor power-summed dipolar operator, which changes the inverse landscape and exposes center-collapse in free-density optimization. The method couples the neural field to a differentiable NV forward model using annealed positional encoding, multiscale optimization, sparsity/gating, and spectrum-fidelity losses. On sparse synthetic reconstructions generated by the corrected operator, NeTMY reports the best localization and distributional metrics, with mechanism experiments indicating that its parameterization smooths and redistributes updates to mitigate collapse.

Significance. If the results hold, the work supplies a concrete testbed for physics-faithful neural inverse methods in quantum sensing. Strengths include the explicit differentiable forward model, identification of a specific optimization pathology, and the amortization-free formulation. These elements could inform broader use of neural fields for nonlinear, spectrally coupled inverse problems where hand-designed regularizers or purely supervised approaches fall short.

major comments (3)
  1. [Abstract] Abstract: the central claim that NeTMY 'achieves the best localization and distributional metrics in the tested benchmark' is stated at a high level without numerical values, error bars, baseline definitions, or data-exclusion rules. This leaves the magnitude of improvement and reproducibility unverifiable from the provided information.
  2. [Experimental evaluation] Experimental evaluation (implied Section 4): all reported results use synthetic data generated by the identical tensor power-summed dipolar operator that the network inverts. This closed-loop setup does not test the paper's stated concerns about model mismatch, experimental noise, or unmodeled interactions, weakening the claim of practical utility for real NV-center measurements.
  3. [Mechanism experiments] Mechanism experiments: the assertion that NeTMY 'does not directly execute the raw density-space gradient' and instead smooths updates is presented without quantitative ablation isolating the contribution of annealed positional encoding, multiscale optimization, or sparsity/gating to the collapse mitigation. It is therefore unclear which components are load-bearing.
minor comments (2)
  1. [Methods] The tensor power-summed dipolar operator should be given an explicit equation number and derivation sketch in the methods section to allow readers to reproduce the forward model.
  2. [Methods] Notation for the spectrum-fidelity loss and gating mechanism could be standardized with a single table of symbols.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive comments. We address each major point below, agreeing where the manuscript can be strengthened and providing clarifications or planned revisions where appropriate.

Point-by-point responses
  1. Referee: [Abstract] Abstract: the central claim that NeTMY 'achieves the best localization and distributional metrics in the tested benchmark' is stated at a high level without numerical values, error bars, baseline definitions, or data-exclusion rules. This leaves the magnitude of improvement and reproducibility unverifiable from the provided information.

    Authors: We agree that the abstract would benefit from greater specificity. In the revised manuscript we will insert the key quantitative results (localization error and distributional metrics with error bars), together with concise definitions of the baselines and data-exclusion criteria used in the benchmark. revision: yes

  2. Referee: [Experimental evaluation] Experimental evaluation (implied Section 4): all reported results use synthetic data generated by the identical tensor power-summed dipolar operator that the network inverts. This closed-loop setup does not test the paper's stated concerns about model mismatch, experimental noise, or unmodeled interactions, weakening the claim of practical utility for real NV-center measurements.

    Authors: The referee correctly identifies that all quantitative results are obtained in a closed synthetic loop using the same forward operator. This controlled setting was chosen to isolate the center-collapse pathology and the effect of our mitigation strategies without confounding experimental noise. We acknowledge that the current experiments do not address model mismatch or real-device noise. In the revision we will add an explicit limitations subsection that discusses these gaps and sketches the path toward incorporating experimental noise models and real NV-center measurements. revision: partial

  3. Referee: [Mechanism experiments] Mechanism experiments: the assertion that NeTMY 'does not directly execute the raw density-space gradient' and instead smooths updates is presented without quantitative ablation isolating the contribution of annealed positional encoding, multiscale optimization, or sparsity/gating to the collapse mitigation. It is therefore unclear which components are load-bearing.

    Authors: We will expand the mechanism section with quantitative ablation studies that individually disable or vary annealed positional encoding, multiscale optimization, and sparsity/gating, reporting their separate effects on collapse frequency and final metrics. This will make clear which elements are primarily responsible for the observed smoothing of updates. revision: yes

Circularity Check

0 steps flagged

No circularity in derivation chain

Full rationale

The paper's central method couples a coordinate neural field to an external differentiable physical forward model (tensor power-summed dipolar operator) and optimizes via explicit spectrum-fidelity losses, annealed encoding, and sparsity terms. Evaluation occurs on synthetic data generated by that same operator, which is standard practice for inverse-problem benchmarks and does not reduce any claimed prediction or result to a quantity defined by the method's own fitted parameters or self-citations. No load-bearing self-citations, uniqueness theorems, or ansatz smuggling appear in the provided text; the derivation remains self-contained against the stated physical model.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The central claim rests on the domain assumption that the tensor dipolar operator is the correct physical model and on the new neural parameterization without independent evidence beyond the paper's synthetic benchmarks.

axioms (1)
  • domain assumption The tensor power-summed dipolar operator correctly models NV-center magnetic interactions
    Replaces common scalar/coherent forward approximation and changes the inverse landscape
invented entities (1)
  • NeTMY neural field · no independent evidence
    purpose: Amortization-free coordinate representation of source density
    New parameterization introduced to smooth updates and mitigate center-collapse

pith-pipeline@v0.9.0 · 5507 in / 1311 out tokens · 56006 ms · 2026-05-15T05:55:46.776844+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

87 extracted references · 87 canonical work pages · 5 internal anchors


  14. [14]

    Scientific machine learning through physics–informed neural networks: Where we are and what’s next.Journal of Scientific Computing, 92(3):88, 2022

    Salvatore Cuomo, Vincenzo Schiano Di Cola, Fabio Giampaolo, Gianluigi Rozza, Maziar Raissi, and Francesco Piccialli. Scientific machine learning through physics–informed neural networks: Where we are and what’s next.Journal of Scientific Computing, 92(3):88, 2022

  15. [15]

    Quantum sensing.Reviews of modern physics, 89(3):035002, 2017

    Christian L Degen, Friedemann Reinhard, and Paola Cappellaro. Quantum sensing.Reviews of modern physics, 89(3):035002, 2017

  16. [16]

    The benchmark lottery.arXiv preprint arXiv:2107.07002, 2021

    Mostafa Dehghani, Yi Tay, Alexey A Gritsenko, Zhe Zhao, Neil Houlsby, Fernando Diaz, Donald Metzler, and Oriol Vinyals. The benchmark lottery.arXiv preprint arXiv:2107.07002, 2021

  17. [17]

    The nitrogen-vacancy colour centre in diamond.Physics Reports, 528(1):1–45, 2013

    Marcus W Doherty, Neil B Manson, Paul Delaney, Fedor Jelezko, Jörg Wrachtrup, and Lloyd CL Hollen- berg. The nitrogen-vacancy colour centre in diamond.Physics Reports, 528(1):1–45, 2013

  18. [18]

    Compressed sensing.IEEE Transactions on information theory, 52(4):1289–1306, 2006

    David L Donoho. Compressed sensing.IEEE Transactions on information theory, 52(4):1289–1306, 2006

  19. [19]

    Incomplete spectrum qsm using support information.Frontiers in Neuroscience, 17:1130524, 2023

    Patrick Fuchs and Karin Shmueli. Incomplete spectrum qsm using support information.Frontiers in Neuroscience, 17:1130524, 2023

  20. [20]

    Deep equilibrium architectures for inverse problems in imaging.IEEE Transactions on Computational Imaging, 7:1123–1133, 2021

    Davis Gilton, Gregory Ongie, and Rebecca Willett. Deep equilibrium architectures for inverse problems in imaging.IEEE Transactions on Computational Imaging, 7:1123–1133, 2021. 10

  21. [21]

    Generative adversarial nets.Advances in neural information processing systems, 27, 2014

    Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets.Advances in neural information processing systems, 27, 2014

  22. [22]

    Review on solving the inverse problem in eeg source analysis.Journal of neuroengineering and rehabilitation, 5(1):25, 2008

    Roberta Grech, Tracey Cassar, Joseph Muscat, Kenneth P Camilleri, Simon G Fabri, Michalis Zervakis, Petros Xanthopoulos, Vangelis Sakkalis, and Bart Vanrumste. Review on solving the inverse problem in eeg source analysis.Journal of neuroengineering and rehabilitation, 5(1):25, 2008

  23. [23]

    Solving inverse problems with model mismatch using untrained neural networks within model-based architectures.arXiv preprint arXiv:2403.04847, 2024

    Peimeng Guan, Naveed Iqbal, Mark A Davenport, and Mudassir Masood. Solving inverse problems with model mismatch using untrained neural networks within model-based architectures.arXiv preprint arXiv:2403.04847, 2024

  24. [24]

    Cnn-based projected gradient descent for consistent ct image reconstruction.IEEE transactions on medical imaging, 37(6):1440–1453, 2018

    Harshit Gupta, Kyong Hwan Jin, Ha Q Nguyen, Michael T McCann, and Michael Unser. Cnn-based projected gradient descent for consistent ct image reconstruction.IEEE transactions on medical imaging, 37(6):1440–1453, 2018

  25. [25]

    Neural-network-based regularization methods for inverse problems in imaging.GAMM-Mitteilungen, 47(4):e202470004, 2024

    Andreas Habring and Martin Holler. Neural-network-based regularization methods for inverse problems in imaging.GAMM-Mitteilungen, 47(4):e202470004, 2024

  26. [26]

    Learning a variational network for reconstruction of accelerated mri data.Magnetic resonance in medicine, 79(6):3055–3071, 2018

    Kerstin Hammernik, Teresa Klatzer, Erich Kobler, Michael P Recht, Daniel K Sodickson, Thomas Pock, and Florian Knoll. Learning a variational network for reconstruction of accelerated mri data.Magnetic resonance in medicine, 79(6):3055–3071, 2018

  27. [27]

    Deep Decoder: Concise Image Representations from Untrained Non-convolutional Networks

    Reinhard Heckel and Paul Hand. Deep decoder: Concise image representations from untrained non- convolutional networks.arXiv preprint arXiv:1810.03982, 2018

  28. [28]

    Compressive sensing with un-trained neural networks: Gradient descent finds a smooth approximation

    Reinhard Heckel and Mahdi Soltanolkotabi. Compressive sensing with un-trained neural networks: Gradient descent finds a smooth approximation. InInternational conference on machine learning, pages 4149–4158. PMLR, 2020

  29. [29]

    Deep reinforcement learning that matters

    Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. InProceedings of the AAAI conference on artificial intelligence, volume 32, 2018

  30. [30]

    Ridge regression: applications to nonorthogonal problems

    Arthur E Hoerl and Robert W Kennard. Ridge regression: applications to nonorthogonal problems. Technometrics, 12(1):69–82, 1970

  31. [31]

    Ridge regression: Biased estimation for nonorthogonal problems

    Arthur E Hoerl and Robert W Kennard. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1):55–67, 1970

  32. [32]

    Edge-machine-learning-assisted robust magnetometer based on randomly oriented nv- ensembles in diamond.Sensors, 23(3):1119, 2023

    Jonas Homrighausen, Ludwig Horsthemke, Jens Pogorzelski, Sarah Trinschek, Peter Glösekötter, and Markus Gregor. Edge-machine-learning-assisted robust magnetometer based on randomly oriented nv- ensembles in diamond.Sensors, 23(3):1119, 2023

  33. [33]

    Physics informed neural networks (pinn) for low snr magnetic resonance electrical properties tomography (mrept).Diagnostics, 12(11):2627, 2022

    Adan Jafet Garcia Inda, Shao Ying Huang, Nevrez ˙Imamo˘glu, Ruian Qin, Tianyi Yang, Tiao Chen, Zilong Yuan, and Wenwei Yu. Physics informed neural networks (pinn) for low snr magnetic resonance electrical properties tomography (mrept).Diagnostics, 12(11):2627, 2022

  34. [34]

    Algorithmic guarantees for inverse imaging with untrained network priors.Advances in neural information processing systems, 32, 2019

    Gauri Jagatap and Chinmay Hegde. Algorithmic guarantees for inverse imaging with untrained network priors.Advances in neural information processing systems, 32, 2019

  35. [35]

    Statistical inverse problems: discretization, model reduction and inverse crimes.Journal of computational and applied mathematics, 198(2):493–504, 2007

    Jari Kaipio and Erkki Somersalo. Statistical inverse problems: discretization, model reduction and inverse crimes.Journal of computational and applied mathematics, 198(2):493–504, 2007

  36. [36]

    Physics-informed machine learning.Nature Reviews Physics, 3(6):422–440, 2021

    George Em Karniadakis, Ioannis G Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physics-informed machine learning.Nature Reviews Physics, 3(6):422–440, 2021

  37. [37]

    3d gaussian splatting for real-time radiance field rendering.ACM Trans

    Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, George Drettakis, et al. 3d gaussian splatting for real-time radiance field rendering.ACM Trans. Graph., 42(4):139–1, 2023

  38. [38]

    Adam: A Method for Stochastic Optimization

    Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization.arXiv preprint arXiv:1412.6980, 2014

  39. [39]

    The hungarian method for the assignment problem.Naval research logistics quarterly, 2 (1-2):83–97, 1955

    Harold W Kuhn. The hungarian method for the assignment problem.Naval research logistics quarterly, 2 (1-2):83–97, 1955

  40. [40]

    Room temperature relaxometry of single nitrogen vacancy centers in proximity toα-rucl3 nanoflakes.Nano Letters, 24(16):4793–4800, 2024

    Jitender Kumar, Dan Yudilevich, Ariel Smooha, Inbar Zohar, Arnab K Pariari, Rainer Stöhr, Andrej Denisenko, Markus Hücker, and Amit Finkler. Room temperature relaxometry of single nitrogen vacancy centers in proximity toα-rucl3 nanoflakes.Nano Letters, 24(16):4793–4800, 2024

  41. [41]

    Blind recovery of sparse signals from subsampled convolution.IEEE Transactions on Information Theory, 63(2):802–821, 2016

    Kiryung Lee, Yanjun Li, Marius Junge, and Yoram Bresler. Blind recovery of sparse signals from subsampled convolution.IEEE Transactions on Information Theory, 63(2):802–821, 2016

  42. [42]

    Are we learning yet? a meta review of evaluation failures across machine learning

    Thomas Liao, Rohan Taori, Inioluwa Deborah Raji, and Ludwig Schmidt. Are we learning yet? a meta review of evaluation failures across machine learning. InThirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021

  43. [43]

    Self-calibration and biconvex compressive sensing.Inverse Problems, 31(11):115002, 2015

    Shuyang Ling and Thomas Strohmer. Self-calibration and biconvex compressive sensing.Inverse Problems, 31(11):115002, 2015. 11

  44. [44]

    On the limited memory bfgs method for large scale optimization

    Dong C Liu and Jorge Nocedal. On the limited memory bfgs method for large scale optimization. Mathematical programming, 45(1):503–528, 1989

  45. [45]

    High-resolution wide-field magnetic imaging with sparse sampling using nitrogen-vacancy centers.arXiv preprint arXiv:2602.00679, 2026

    Keqing Liu, Jiazhao Tian, Bokun Duan, Hao Zhang, Kangze Li, Guofeng Zhang, Fedor Jelezko, Ressa S Said, Jianming Cai, and Liantuan Xiao. High-resolution wide-field magnetic imaging with sparse sampling using nitrogen-vacancy centers.arXiv preprint arXiv:2602.00679, 2026

  46. [46]

    Nanoscale nuclear magnetic resonance with a nitrogen-vacancy spin sensor.Science, 339(6119):557–560, 2013

    HJ Mamin, M Kim, MH Sherwood, Charles T Rettner, K Ohno, DD Awschalom, and D Rugar. Nanoscale nuclear magnetic resonance with a nitrogen-vacancy spin sensor.Science, 339(6119):557–560, 2013

  47. [47]

    Julien NP Martel, David B Lindell, Connor Z Lin, Eric R Chan, Marco Monteiro, and Gordon Wetzstein. ACORN: Adaptive coordinate networks for neural scene representation. arXiv preprint arXiv:2105.02788, 2021

  48. [48]

    Gary Mataev, Peyman Milanfar, and Michael Elad. DeepRED: Deep image prior powered by RED. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, pages 0–0, 2019

  49. [49]

    Michael T McCann, Kyong Hwan Jin, and Michael Unser. Convolutional neural networks for inverse problems in imaging: A review. IEEE Signal Processing Magazine, 34(6):85–95, 2017

  50. [50]

    Siddhant Midha, Madhur Parashar, Anuj Bathla, David A Broadway, Jean-Philippe Tetienne, and Kasturi Saha. Optimized current-density reconstruction from wide-field quantum diamond magnetic field maps. Physical Review Applied, 22(1):014015, 2024

  51. [51]

    Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021

  52. [52]

    Amirali Molaei, Amirhossein Aminimehr, Armin Tavakoli, Amirhossein Kazerouni, Bobby Azad, Reza Azad, and Dorit Merhof. Implicit neural representation in medical imaging: A comparative survey. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2381–2391, 2023

  53. [53]

    Vishal Monga, Yuelong Li, and Yonina C Eldar. Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing. IEEE Signal Processing Magazine, 38(2):18–44, 2021

  54. [54]

    Aldona Mzyk, Alina Sigaeva, and Romana Schirhagl. Relaxometry with nitrogen vacancy (NV) centers in diamond. Accounts of Chemical Research, 55(24):3572–3580, 2022

  55. [55]

    Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning, 3(1):1–122, 2011

  56. [56]

    Gregory Ongie, Ajil Jalal, Christopher A Metzler, Richard G Baraniuk, Alexandros G Dimakis, and Rebecca Willett. Deep learning techniques for inverse problems in imaging. IEEE Journal on Selected Areas in Information Theory, 1(1):39–56, 2020

  57. [57]

    Keunhong Park, Utkarsh Sinha, Jonathan T Barron, Sofien Bouaziz, Dan B Goldman, Steven M Seitz, and Ricardo Martin-Brualla. Nerfies: Deformable neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5865–5874, 2021

  58. [58]

    Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015

  59. [59]

    Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019

  60. [60]

    Inioluwa Deborah Raji, Emily M Bender, Amandalynne Paullada, Emily Denton, and Alex Hanna. AI and the everything in the whole wide world benchmark. arXiv preprint arXiv:2111.15366, 2021

  61. [61]

    Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do ImageNet classifiers generalize to ImageNet? In International Conference on Machine Learning, pages 5389–5400. PMLR, 2019

  62. [62]

    Albert W Reed, Hyojin Kim, Rushil Anirudh, K Aditya Mohan, Kyle Champley, Jingu Kang, and Suren Jayasuriya. Dynamic CT reconstruction from limited views with implicit neural representations and parametric motion fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2258–2268, 2021

  63. [63]

    Yaniv Romano, Michael Elad, and Peyman Milanfar. The little engine that could: Regularization by denoising (RED). SIAM Journal on Imaging Sciences, 10(4):1804–1844, 2017

  64. [64]

    Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015

  65. [65]

    Jared Rovny, Sarang Gopalakrishnan, Ania C Bleszynski Jayich, Patrick Maletinsky, Eugene Demler, and Nathalie P de Leon. Nanoscale diamond quantum sensors for many-body physics. Nature Reviews Physics, 6(12):753–768, 2024

  66. [66]

    Leonid I Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1-4):259–268, 1992

  67. [67]

    Liyue Shen, John Pauly, and Lei Xing. NeRP: Implicit neural representation learning with prior embedding for sparsely sampled image reconstruction. IEEE Transactions on Neural Networks and Learning Systems, 35(1):770–782, 2022

  68. [68]

    Kexuan Shi, Xingyu Zhou, and Shuhang Gu. Improved implicit neural representation with Fourier reparameterized training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 25985–25994, 2024

  69. [69]

    Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems, 33:7462–7473, 2020

  70. [70]

    Ariel Smooha, Jitender Kumar, Dan Yudilevich, John W Rosenberg, Valentin Bayer, Rainer Stöhr, Andrej Denisenko, Tatyana Bendikov, Anna Kossoy, Iddo Pinkas, et al. Sensing single-molecule magnets with nitrogen-vacancy centers. Nano Letters, 2026

  71. [71]

    Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in Neural Information Processing Systems, 33:7537–7547, 2020

  72. [72]

    J-P Tetienne, Thomas Hingant, Loïc Rondin, Adrien Cavaillès, Ludovic Mayer, Géraldine Dantelle, Thierry Gacoin, Jörg Wrachtrup, J-F Roch, and Vincent Jacques. Spin relaxometry of single nitrogen-vacancy defects in diamond nanocrystals for magnetic noise sensing. arXiv preprint arXiv:1304.1197, 2013

  73. [73]

    Moeta Tsukamoto, Shuji Ito, Kensuke Ogawa, Yuto Ashida, Kento Sasaki, and Kensuke Kobayashi. Accurate magnetic field imaging using nanodiamond quantum sensors enhanced by machine learning. Scientific Reports, 12(1):13942, 2022

  74. [74]

    Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9446–9454, 2018

  75. [75]

    Toeno Van der Sar, Francesco Casola, Ronald Walsworth, and Amir Yacoby. Nanometre-scale probing of spin waves using single electron spins. Nature Communications, 6(1):7886, 2015

  76. [76]

    Singanallur V Venkatakrishnan, Charles A Bouman, and Brendt Wohlberg. Plug-and-play priors for model based reconstruction. In 2013 IEEE Global Conference on Signal and Information Processing, pages 945–948. IEEE, 2013

  77. [77]

    Jean Virieux and Stéphane Operto. An overview of full-waveform inversion in exploration geophysics. 2010

  78. [78]

    Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004

  79. [79]

    Wufeng Xue, Lei Zhang, Xuanqin Mou, and Alan C Bovik. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Transactions on Image Processing, 23(2):684–695, 2013

  80. [80]

    Yan Yang, Jian Sun, Huibin Li, and Zongben Xu. ADMM-CSNet: A deep learning approach for image compressive sensing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(3):521–538, 2018
