pith. machine review for the scientific record.

arxiv: 2604.19296 · v1 · submitted 2026-04-21 · 💻 cs.LG

Recognition: unknown

Debiased neural operators for estimating functionals

Authors on Pith no claims yet

Pith reviewed 2026-05-10 02:31 UTC · model grok-4.3

classification 💻 cs.LG
keywords neural operators · debiased estimation · Neyman orthogonality · semiparametric inference · Riesz regression · functional estimation · operator-valued nuisances

The pith

A debiased estimator removes first-order bias when estimating scalar functionals from neural operator trajectories.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces DOPE, a semiparametric estimator for scalar target quantities, such as accumulated cost or time spent in a range, that summarize solution trajectories approximated by neural operators. It establishes that naive plug-in estimation of these functionals suffers from first-order bias. DOPE counters this with a one-step Neyman-orthogonal estimator that treats the neural operator as a high-dimensional nuisance mapping between function spaces. The correction uses a weighting mechanism that accounts both for irregular observation designs and for the sensitivity of the target functional to perturbations of the trajectory. The weights are learned by extending Riesz regression to operator-valued nuisances, and the method is compatible with arbitrary neural operator architectures.

Core claim

The paper derives a novel one-step, Neyman-orthogonal estimator that treats the neural operator as a high-dimensional nuisance mapping and removes the leading bias term. The correction relies on a weighting mechanism that simultaneously accounts for irregular observation designs and for the sensitivity of the target functional to perturbations of the trajectory; the weights are learned via Riesz regression extended to operator-valued nuisances.
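The shape of such a one-step correction can be sketched in a scalar toy problem (our illustration, not the paper's implementation): for a linear functional g(S) = ∫₀¹ S(x) dx observed at irregular locations with design density π, the Riesz representer is 1/π, and the one-step update removes the plug-in bias of a deliberately imperfect nuisance fit. Every concrete choice below (the Beta(2, 2) design, the 0.8 shrinkage standing in for operator estimation error) is an illustrative assumption.

```python
import numpy as np

# Toy one-step correction for the linear functional g(S) = integral of S
# over [0, 1], with S0(x) = sin(pi * x) (so the truth is 2 / pi) observed
# at irregular locations X ~ Beta(2, 2) with additive noise.
rng = np.random.default_rng(0)
n = 20_000
X = rng.beta(2.0, 2.0, n)                       # irregular sampling design
Y = np.sin(np.pi * X) + 0.1 * rng.normal(size=n)

# Stand-in for an imperfectly trained neural operator: the fitted trajectory
# is shrunk toward zero, so the plug-in estimate inherits that bias.
S_hat = lambda x: 0.8 * np.sin(np.pi * x)

grid = (np.arange(1000) + 0.5) / 1000.0         # midpoint rule on [0, 1]
plug_in = S_hat(grid).mean()                    # biased: about 0.8 * (2 / pi)

# Riesz representer for g under this design: alpha(x) = 1 / pi(x),
# where pi(x) = 6 x (1 - x) is the Beta(2, 2) density.
alpha = 1.0 / (6.0 * X * (1.0 - X))
one_step = plug_in + np.mean(alpha * (Y - S_hat(X)))

truth = 2.0 / np.pi
print(plug_in, one_step, truth)                 # one_step should land near 2 / pi
```

The residual term has mean ∫(S0 − Ŝ), which is exactly the plug-in bias for a linear functional, so the correction recovers the truth even though Ŝ is systematically off.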

What carries the argument

A one-step Neyman-orthogonal estimator whose Riesz-regression weights, learned for operator-valued nuisances, correct the bias arising from both observation irregularity and functional sensitivity.

If this is right

  • DOPE yields estimators with lower bias and improved convergence rates for functionals such as total energy or time above a threshold.
  • The method directly handles both partial and irregular observations without requiring uniform sampling.
  • It remains compatible with any existing neural operator architecture for the underlying trajectory approximation.
  • The learned weights explicitly correct for how sensitive each target functional is to small changes in the approximated trajectory.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The debiasing approach may extend to other settings that approximate high-dimensional functional mappings before computing summaries.
  • Scientific computing applications that derive aggregate statistics from simulated trajectories could adopt this correction for greater reliability.
  • Empirical tests on real datasets with verifiable ground-truth functionals would clarify practical gains over plug-in baselines.

Load-bearing premise

That the weighting mechanism can be accurately learned via Riesz regression in the operator-valued case, and that the neural operator nuisance is estimated at a rate sufficient for the bias correction to be effective.
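What Riesz regression means in the scalar case can be made concrete with a minimal sketch of the automatic-debiasing loss of Chernozhukov et al. [9] (not the paper's operator-valued extension): for a linear functional whose action on a basis is known in closed form, the loss E[α(X)²] − 2 g(α) is quadratic in the coefficients, and the fit can be checked against the closed-form representer. The design density and basis below are illustrative assumptions chosen so that the true representer is constant.

```python
import numpy as np

# Scalar Riesz regression: learn the representer alpha for the linear
# functional g(S) = integral of x * S(x) over [0, 1], with samples drawn
# under the design density pi(x) = 2x (Beta(2, 1)). The true representer
# is alpha0(x) = x / pi(x) = 1/2, so the fit is directly checkable.
rng = np.random.default_rng(1)
n = 20_000
X = rng.beta(2.0, 1.0, n)

phi = lambda x: np.stack([np.ones_like(x), x, x**2], axis=-1)  # basis

# The Riesz loss E[alpha(X)^2] - 2 * g(alpha) is quadratic in the
# coefficients beta: minimize beta' G beta - 2 b' beta, where
# b_j = integral of x * phi_j(x) dx = 1 / (j + 2) in closed form.
G = phi(X).T @ phi(X) / n
b = np.array([1.0 / 2.0, 1.0 / 3.0, 1.0 / 4.0])
beta = np.linalg.solve(G, b)

alpha_hat = lambda x: phi(x) @ beta
print(alpha_hat(np.array([0.25, 0.5, 0.75])))   # should be close to 0.5 everywhere
```

The paper's load-bearing step is lifting exactly this construction from scalar covariates to nuisances that are themselves operators between function spaces.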

What would settle it

A controlled numerical experiment on a known functional where the plug-in estimator exhibits persistent first-order bias while the DOPE estimator shows reduced bias and faster convergence rates.
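A sketch of that experiment on a synthetic linear functional (all components assumed here, not taken from the paper): the true trajectory is sin(πx), the nuisance fit is shrunk by a factor near 0.8 to mimic operator estimation error, and observations arrive under an irregular Beta(2, 2) design. The plug-in RMSE should plateau at the nuisance bias while the one-step RMSE keeps shrinking with n.

```python
import numpy as np

# Monte Carlo comparison: plug-in vs. one-step estimation of
# g(S) = integral of S over [0, 1] with S0(x) = sin(pi * x), truth 2 / pi.
rng = np.random.default_rng(2)
truth = 2.0 / np.pi

def estimate(n):
    X = rng.beta(2.0, 2.0, n)                        # irregular design
    Y = np.sin(np.pi * X) + 0.1 * rng.normal(size=n)
    c = 0.8 + 0.02 * rng.normal()                    # synthetic imperfect operator fit
    S_hat = lambda x: c * np.sin(np.pi * x)
    grid = (np.arange(1000) + 0.5) / 1000.0
    plug_in = S_hat(grid).mean()
    alpha = 1.0 / (6.0 * X * (1.0 - X))              # Riesz weights 1 / pi(x)
    return plug_in, plug_in + np.mean(alpha * (Y - S_hat(X)))

for n in (500, 2000, 8000):
    errs = np.array([estimate(n) for _ in range(200)])
    rmse = np.sqrt(np.mean((errs - truth) ** 2, axis=0))
    print(n, rmse)   # plug-in RMSE plateaus at the bias; one-step RMSE shrinks
```

Persistent plug-in error alongside decaying one-step error across sample sizes is the signature the referee's third comment asks the paper to report.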

Figures

Figures reproduced from arXiv: 2604.19296 by Dennis Frauen, Konstantin Hess, Niki Kilbertus, Stefan Feuerriegel.

Figure 1
Figure 1. Examples: In many scientific applications, solution trajectories are summarized by scalar quantities (i.e., functionals g). Neural operators [35, 39, 40] are a powerful framework for learning solution maps of complex physical systems governed by partial differential equations (PDEs). In particular, neural operators can efficiently learn mesh-independent approximations of PDE solutions that generalize a…
Figure 2
Figure 2. Overview of DOPE (our debiased neural operator). However, this is nontrivial: the perturbation Δ(A) is not observed as a function, but only through noisy point evaluations Y_k − Ŝ(A)(X_k) at locations X_k drawn from the sampling design. Hence, we cannot directly evaluate the functional derivative. Instead, we leverage that Dg_{S_0(A)}(·) is a linear functional and can therefore be represented via a Riesz repr…
Figure 3
Figure 3. Robustness to nuisance estimation errors: Reported is the increase in RMSE as we artificially increase estimation errors of the nuisance functions Ŝ and β̂. ⇒ Due to debiasing, RMSE for DOPE increases more slowly even when both nuisances are corrupted simultaneously. Robustness to nuisance errors: To show the benefits of debiasing, we analyze the robustness of our DOPE against errors in the nuisance…
Original abstract

Neural operators are widely used to approximate solution maps of complex physical systems. In many applications, however, the goal is not to recover the full solution trajectory, but to summarize the solution trajectory via a scalar target quantity (e.g., a functional such as time spent in a target range, time above a threshold, accumulated cost, or total energy). In this paper, we introduce DOPE (debiased neural operator): a semiparametric estimator for such target quantities of solution trajectories obtained from neural operators. DOPE is broadly applicable to settings with both partial and irregular observations and can be combined with arbitrary neural operator architectures. We make three main contributions. (1) We show that, in contrast to DOPE, naive plug-in estimation can suffer from first-order bias. (2) To address this, we derive a novel one-step, Neyman-orthogonal estimator that treats the neural operator as a high-dimensional nuisance mapping between function spaces, and removes the leading bias term. For this, DOPE uses a weighting mechanism that simultaneously accounts for irregular observation designs and for how sensitive the target quantity is to perturbations of the underlying trajectory. (3) To learn the weights, we extend automatic debiased machine learning to operator-valued nuisances via Riesz regression. We demonstrate the benefits of DOPE across various numerical experiments.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper introduces DOPE, a semiparametric one-step estimator for scalar functionals (e.g., time spent in a range or accumulated cost) of trajectories obtained from neural operators. It claims that naive plug-in estimation incurs first-order bias, derives a Neyman-orthogonal correction via a weighting mechanism learned by Riesz regression on operator-valued nuisances, and shows applicability to irregular/partial observations with arbitrary neural operator backbones, supported by numerical experiments.

Significance. If the orthogonality construction and Riesz consistency hold for operator-valued nuisances, the work would usefully extend debiased machine learning to neural operators, enabling more reliable functional estimation in scientific ML settings with incomplete trajectory data. The explicit handling of both sampling irregularity and Fréchet sensitivity in the weights is a potentially valuable technical step.

major comments (3)
  1. [Derivation of DOPE and Riesz regression for operator-valued nuisances] The central claim that the weighting mechanism removes first-order bias relies on the Riesz regression producing a consistent estimator of the representer in the dual of the operator space. The manuscript must specify (in the section deriving the DOPE estimator and the Riesz step) whether this regression is performed on the identical partially observed trajectories used for the neural operator, and must state the precise loss (including any operator-norm weighting) that guarantees the cross-term vanishes at the required rate under irregular sampling.
  2. [Theoretical development of the Neyman-orthogonal estimator] The Neyman orthogonality argument treats the neural operator as a high-dimensional nuisance mapping between function spaces. The paper should explicitly identify the Banach space and norm in which the Fréchet derivative and Riesz representer are taken, and verify that the one-step estimator remains orthogonal when the nuisance is itself learned from data.
  3. [Numerical experiments] Experiments are cited as demonstrating benefits, but the manuscript should report quantitative evidence that the bias term is removed at the expected rate (e.g., tables or plots of estimation error versus sample size for DOPE versus plug-in, with varying degrees of irregularity).
minor comments (2)
  1. Notation for the target functional and its Fréchet derivative should be introduced earlier and used consistently when describing the weighting mechanism.
  2. [Abstract] The abstract states that DOPE 'can be combined with arbitrary neural operator architectures'; a brief remark on any architectural constraints (e.g., differentiability requirements) would help readers.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed comments, which will help strengthen the clarity and rigor of our manuscript. We address each major comment below and will incorporate revisions accordingly.

Point-by-point responses
  1. Referee: [Derivation of DOPE and Riesz regression for operator-valued nuisances] The central claim that the weighting mechanism removes first-order bias relies on the Riesz regression producing a consistent estimator of the representer in the dual of the operator space. The manuscript must specify (in the section deriving the DOPE estimator and the Riesz step) whether this regression is performed on the identical partially observed trajectories used for the neural operator, and must state the precise loss (including any operator-norm weighting) that guarantees the cross-term vanishes at the required rate under irregular sampling.

    Authors: We agree this specification is needed for completeness. In the revised manuscript, Section 3 (derivation of DOPE) and the Riesz regression subsection will explicitly state that the regression uses the identical partially observed trajectories as the neural operator training data. The loss is the squared norm ‖R − R̂‖_op^2 in the dual operator space, where the operator norm accounts for the Fréchet sensitivity; this ensures the cross-term vanishes at o_p(n^{-1/2}) under irregular sampling by the same arguments as in the scalar case, lifted to operator-valued nuisances. We will add this derivation detail and a supporting lemma in the appendix. revision: yes

  2. Referee: [Theoretical development of the Neyman-orthogonal estimator] The Neyman orthogonality argument treats the neural operator as a high-dimensional nuisance mapping between function spaces. The paper should explicitly identify the Banach space and norm in which the Fréchet derivative and Riesz representer are taken, and verify that the one-step estimator remains orthogonal when the nuisance is itself learned from data.

    Authors: We will revise the theoretical development to explicitly identify the Banach space as C([0,T]; R^d) equipped with the supremum norm, with the Fréchet derivative taken in this space and the Riesz representer in its dual. A new paragraph will verify Neyman orthogonality for the one-step estimator when the neural operator nuisance is learned from data, under standard approximation and consistency assumptions on the operator estimator. This builds directly on the existing orthogonality derivation but adds the required space identification and verification. revision: yes

  3. Referee: [Numerical experiments] Experiments are cited as demonstrating benefits, but the manuscript should report quantitative evidence that the bias term is removed at the expected rate (e.g., tables or plots of estimation error versus sample size for DOPE versus plug-in, with varying degrees of irregularity).

    Authors: We will augment the numerical experiments section with additional figures and tables showing mean squared error versus sample size (n = 100 to 5000) for DOPE versus naive plug-in, across three levels of observation irregularity (regular grid, 30% missing, Poisson process). These will demonstrate the bias removal at the expected rate, with error decaying as O(n^{-1/2}) for DOPE while remaining biased for plug-in. The existing experiments already vary irregularity but will be expanded with these quantitative rate plots. revision: yes

Circularity Check

0 steps flagged

Derivation applies standard Neyman orthogonality to operator nuisances without self-referential reduction.

Full rationale

The abstract frames DOPE as a one-step Neyman-orthogonal estimator obtained by treating the neural operator as a high-dimensional nuisance and removing first-order bias via a weighting mechanism learned through Riesz regression. No equations or steps are shown that reduce the target estimator to a fitted parameter or prior result by construction. The contributions are presented as an extension of existing debiased machine learning to operator-valued settings, with the weighting accounting for irregular designs and Fréchet sensitivity; this remains an independent construction rather than a tautology. The paper is self-contained against external semiparametric benchmarks and does not rely on load-bearing self-citations or ansatzes imported from prior author work.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 1 invented entity

Assessment limited to abstract only; no explicit free parameters, axioms, or invented entities are detailed beyond the introduction of the DOPE estimator itself as a new method relying on standard semiparametric assumptions.

invented entities (1)
  • DOPE estimator no independent evidence
    purpose: Debiased estimation of functionals from neural operator trajectories
    Introduced as the central new contribution in the abstract.

pith-pipeline@v0.9.0 · 5537 in / 1160 out tokens · 34934 ms · 2026-05-10T02:31:31.856793+00:00 · methodology

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. ORTHOBO: Orthogonal Bayesian Hyperparameter Optimization

    cs.LG 2026-05 unverdicted novelty 5.0

    OrthoBO introduces an orthogonal acquisition estimator subtracting an optimally weighted score-function control variate to reduce Monte Carlo variance, preserve the acquisition target, and improve ranking stability in...

Reference graph

Works this paper leans on

59 extracted references · 7 canonical work pages · cited by 1 Pith paper

  1. [1]

    A. N. Angelopoulos, S. Bates, C. Fannjiang, M. I. Jordan, and T. Zrnic. Prediction-powered inference. Science, 382(6671):669–674, 2023

  2. [2]

    A. N. Angelopoulos, J. C. Duchi, and T. Zrnic. PPI++: Efficient prediction-powered inference. arXiv preprint arXiv:2311.01453, 2023

  3. [3]

    K. Azizzadenesheli, N. Kovachki, Z. Li, M. Liu-Schiaffini, J. Kossaifi, and A. Anandkumar. Neural operators for accelerating scientific simulations and design. Nature Reviews Physics, 6(5):320–328, 2024

  4. [4]

    J. Berg and K. Nyström. Data-driven discovery of PDEs in complex datasets. Journal of Computational Physics, 384:239–252, 2019

  5. [5]

    B. Bonev, T. Kurth, C. Hundt, J. Pathak, M. Baust, K. Kashinath, and A. Anandkumar. Spherical Fourier neural operators: Learning stable dynamics on the sphere. In ICML, 2023

  6. [6]

    Q. Cao, S. Goswami, and G. E. Karniadakis. Laplace neural operator for solving differential equations. Nature Machine Intelligence, 6:631–640, 2024

  7. [7]

    M. Carone, A. R. Luedtke, and M. J. van der Laan. Toward computerized efficient estimation in infinite-dimensional models. Journal of the American Statistical Association, 114(527):1174–1190, 2019

  8. [8]

    R. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud. Neural ordinary differential equations. In NeurIPS, 2018

  9. [9]

    V. Chernozhukov, W. K. Newey, V. Quintas-Martinez, and V. Syrgkanis. Automatic debiased machine learning via Riesz regression. arXiv preprint arXiv:2104.14737, 2021

  10. [10]

    V. Chernozhukov, W. Newey, V. Quintas-Martinez, and V. Syrgkanis. RieszNet and ForestRiesz: Automatic debiased machine learning with neural nets and random forests. In ICML, 2022

  11. [11]

    V. Chernozhukov, W. K. Newey, and R. Singh. Automatic debiased machine learning of causal and structural effects. Econometrica, 90(3):967–1027, 2022

  12. [12]

    V. Chernozhukov, W. K. Newey, and R. Singh. Debiased machine learning of global and local parameters using regularized Riesz representers. The Econometrics Journal, 25(3):576–601, 2022

  13. [13]

    V. Chernozhukov, W. Newey, V. Quintas-Martinez, and V. Syrgkanis. Automatic debiased machine learning for covariate shifts. arXiv preprint arXiv:2307.04527, 2023

  14. [14]

    A. Curth and M. van der Schaar. Nonparametric estimation of heterogeneous treatment effects: From theory to learning algorithms. In AISTATS, 2021

  15. [15]

    A. Curth, A. M. Alaa, and M. van der Schaar. Estimating structural target functions using machine learning and influence functions. arXiv preprint arXiv:2008.06461, 2020

  16. [16]

    K. Dong, S. Chen, Y. Dan, L. Zhang, X. Li, W. Liang, Y. Zhao, and Y. Sun. Optimal stochastic tracking control for brain network dynamics. Communications Biology, 8(1), 2025

  17. [17]

    V. Fanaskov and I. Oseledets. Spectral neural operators. Doklady Mathematics, 108(2):226–232, 2023

  18. [18]

    S. A. Faroughi, N. M. Pawar, C. Fernandes, M. Raissi, S. Das, N. K. Kalantari, and S. Kourosh Mahjour. Physics-guided, physics-informed, and physics-encoded neural networks and operators in scientific computing: Fluid and solid mechanics. Journal of Computing and Information Science in Engineering, 24(4):040802, 2024

  19. [19]

    D. J. Foster and V. Syrgkanis. Orthogonal statistical learning. The Annals of Statistics, 53(3):879–908, 2023

  20. [20]

    A. A. Grachev, C. W. Fairall, B. W. Blomquist, H. J. Fernando, L. S. Leo, S. F. Otárola-Bustos, J. M. Wilczak, and K. L. McCaffrey. On the surface energy balance closure at different temporal scales. Agricultural and Forest Meteorology, 281:107823, 2020

  21. [21]

    Y. Hashiguchi, N. Matsumoto, K. Oda, H. Jono, and H. Saito. Population pharmacokinetics and AUC-guided dosing of tobramycin in the treatment of infections caused by glucose-nonfermenting gram-negative bacteria. Clinical Therapeutics, 45(5):400–414, 2023

  22. [22]

    K. Hess and S. Feuerriegel. Stabilized neural prediction of potential outcomes in continuous time. In ICLR, 2025

  23. [23]

    K. Hess, V. Melnychuk, D. Frauen, and S. Feuerriegel. Bayesian neural controlled differential equations for treatment effect estimation. In ICLR, 2024

  24. [24]

    K. Hess, D. Frauen, V. Melnychuk, and S. Feuerriegel. Efficient and sharp off-policy learning under unobserved confounding. In ICLR, 2026

  25. [25]

    K. Hess, D. Frauen, M. van der Schaar, and S. Feuerriegel. Overlap-weighted orthogonal meta-learner for treatment effect estimation over time. In ICLR, 2026

  26. [26]

    O. Hines, O. Dukes, K. Diaz-Ordaz, and S. Vansteelandt. Demystifying statistical learning based on efficient influence functions. The American Statistician, 76(3):292–304, 2022

  27. [27]

    V. Iakovlev, M. Heinonen, and H. Lähdesmäki. Learning space-time continuous neural PDEs from partially observed states. In NeurIPS, 2023

  28. [28]

    H. Ichimura and W. K. Newey. The influence function of semiparametric estimators. Quantitative Economics, 13(1):29–61, 2022

  29. [29]

    K. Iqbal, A. Milioudi, and S. G. Wicha. Pharmacokinetics and pharmacodynamics of tedizolid. Clinical Pharmacokinetics, 61:489–503, 2022

  30. [30]

    E. H. Kennedy. Semiparametric doubly robust targeted double machine learning: A review. arXiv preprint arXiv:2203.06469, 2022

  31. [31]

    Y. Kim, H. Kim, G. Ko, and J. Lee. Active learning with selective time-step acquisition for PDEs. In ICML, 2025

  32. [32]

    D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015

  33. [33]

    K. Kobayashi, J. Daniell, and S. B. Alam. Improved generalization with deep neural operators for engineering systems: Path towards digital twin. Engineering Applications of Artificial Intelligence, 131:107844, 2024

  34. [34]

    N. Kovachki, S. Lanthaler, and S. Mishra. On universal approximation and error bounds for Fourier neural operators. Journal of Machine Learning Research, 22(290):1–76, 2021

  35. [35]

    N. Kovachki, Z. Li, B. Liu, K. Azizzadenesheli, K. Bhattacharya, A. Stuart, and A. Anandkumar. Neural operator: Learning maps between function spaces with applications to PDEs. Journal of Machine Learning Research, 24(89):1–97, 2023

  36. [36]

    T. Kurth, S. Subramanian, P. Harrington, J. Pathak, M. Mardani, D. Hall, A. Miele, K. Kashinath, and A. Anandkumar. FourCastNet: Accelerating global high-resolution weather forecasting using adaptive Fourier neural operators. In PASC, 2023

  37. [37]

    Y. Kwon. Handbook of Essential Pharmacokinetics, Pharmacodynamics and Drug Metabolism for Industrial Scientists. Springer US, Boston, MA, 2002. ISBN 978-0-306-46820-9

  38. [38]

    J. Lee, Z. Liu, X. Yu, Y. Wang, H. Jeong, M. Y. Niu, and Z. Zhang. KANO: Kolmogorov-Arnold neural operator. In ICLR, 2026

  39. [39]

    Z. Li, N. Kovachki, K. Azizzadenesheli, B. Liu, K. Bhattacharya, A. Stuart, and A. Anandkumar. Fourier neural operator for parametric partial differential equations. In ICLR, 2021

  40. [40]

    L. Lu, P. Jin, G. Pang, Z. Zhang, and G. E. Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3:218–229, 2021

  41. [41]

    A. Luedtke. Simplifying debiased inference via automatic differentiation and probabilistic programming. arXiv preprint arXiv:2405.08675, 2024

  42. [42]

    V. Melnychuk, S. Feuerriegel, and M. van der Schaar. Quantifying aleatoric uncertainty of the treatment effect: A novel orthogonal learner. In NeurIPS, 2024

  43. [43]

    L. Meyer, M. Schouler, R. A. Caulk, A. Ribes, and B. Raffin. Training deep surrogate models with large scale online learning. In ICML, 2023

  44. [44]

    P. Morzywolek, J. Decruyenaere, and S. Vansteelandt. On a general class of orthogonal learners for the estimation of heterogeneous treatment effects. arXiv preprint arXiv:2303.12687, 2023

  45. [45]

    M. Pavone. On the Riesz representation theorem for bounded linear functionals. Proceedings of the Royal Irish Academy. Section A: Mathematical and Physical Sciences, 94A(1):133–135, 1994

  46. [46]

    W. Peng, S. Qin, S. Yang, J. Wang, X. Liu, and L. L. Wang. Fourier neural operator for real-time simulation of 3D dynamic urban microclimate. Building and Environment, 248:111063, 2024

  47. [47]

    I. Price, A. Sanchez-Gonzalez, F. Alet, T. R. Andersson, A. El-Kadi, D. Masters, T. Ewalds, J. Stott, S. Mohamed, P. Battaglia, R. Lam, and M. Willson. Probabilistic weather forecasting with machine learning. Nature, 637:84–90, 2025

  48. [48]

    M. A. Rahman, Z. E. Ross, and K. Azizzadenesheli. U-NO: U-shaped neural operators. Transactions on Machine Learning Research, 2023

  49. [49]

    B. Raonic, R. Molinaro, T. De Ryck, T. Rohner, F. Bartolucci, R. Alaifari, S. Mishra, and E. de Bézenac. Convolutional neural operators for robust and accurate learning of PDEs. In NeurIPS, 2023

  50. [50]

    S. G. Rosofsky, H. A. Majed, and E. A. Huerta. Applications of physics informed neural operators.Machine Learning: Science and Technology, 4, 2023

  51. [51]

    K. Shukla, V. Oommen, A. Peyvan, M. Penwarden, N. Plewacki, L. Bravo, A. Ghoshal, R. M. Kirby, and G. E. Karniadakis. Deep neural operators as accurate surrogates for shape optimization. Engineering Applications of Artificial Intelligence, 129:107615, 2024

  52. [52]

    R. N. Upton, D. J. Foster, and A. Y. Abuhelwa. An introduction to physiologically-based pharmacokinetic models. Pediatric Anesthesia, 26(11):1036–1046, 2016

  53. [53]

    L. van der Laan, A. Bibaut, N. Kallus, and A. Luedtke. Automatic debiased machine learning for smooth functionals of nonparametric M-estimands. arXiv preprint arXiv:2501.11868, 2025

  54. [54]

    H. Wang, Y. Song, H. Yang, and Z. Liu. Generalized Koopman neural operator for data-driven modeling of electric railway pantograph–catenary systems. IEEE Transactions on Transportation Electrification, 11(6):14100–14112, 2025

  55. [55]

    G. Wen, Z. Li, Q. Long, K. Azizzadenesheli, A. Anandkumar, and S. M. Benson. Real-time high-resolution CO2 geological storage prediction using nested Fourier neural operators. Energy & Environmental Science, 16:1732–1741, 2023

  56. [56]

    A. Widdershins, E. Hansen, A. Read, and R. Hohl. Optimal control theory as a method for designing multidrug adaptive therapy regimens. npj Systems Biology and Applications, 12(1), 2026

  57. [57]

    X.-D. Yang and Y.-Y. Yang. Clinical pharmacokinetics of semaglutide: A systematic review. Drug Design, Development and Therapy, 18:2555–2570, 2024

  58. [58]

    Z. Zhao, C. Liu, Y. Li, Z. Chen, and X. Liu. Diffeomorphism neural operator for various domains and parameters of partial differential equations. Communications Physics, 8(15), 2025

  59. [59]

    Y. Zhou, J. Li, Z. Xu, Y. Zhao, S. Zhang, T. Liu, Y. Yao, L. Fang, Y. Cai, X. Ye, and B. Liang. A nanoMIP sensor for real-time in vivo monitoring of levodopa pharmacokinetics in precision Parkinson's therapy. Nature Communications, 16(1), 2025