DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators
Pith reviewed 2026-05-15 03:12 UTC · model grok-4.3
The pith
DeepONets learn nonlinear operators from small datasets by splitting input encoding from output evaluation points.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Through its branch-trunk architecture, DeepONet turns the operator universal approximation theorem into a practical method: it learns nonlinear operators for identifying differential equations accurately and efficiently from limited data, with observed high-order error convergence with respect to training-set size.
What carries the argument
The branch-trunk split architecture: one subnetwork (the branch net) encodes the input function's values at fixed sensors, the other (the trunk net) encodes the output locations, and their outputs are combined to produce the operator value.
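The branch-trunk combination can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the network widths, the random (untrained) weights, and the helper names `mlp` and `deeponet` are assumptions chosen for clarity; only the structure — a dot product between branch and trunk features — follows the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(widths):
    """Random tanh MLP; weights are illustrative placeholders, not trained."""
    params = [(rng.standard_normal((a, b)) / np.sqrt(a), np.zeros(b))
              for a, b in zip(widths[:-1], widths[1:])]
    def forward(x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.tanh(x)
        return x
    return forward

m, p = 100, 20            # m sensors, p latent features
branch = mlp([m, 64, p])  # encodes u(x_1), ..., u(x_m)
trunk = mlp([1, 64, p])   # encodes an output location y

def deeponet(u_sensors, y):
    """G(u)(y) ~ sum_k b_k(u) * t_k(y): dot product of branch and trunk."""
    b = branch(u_sensors[None, :])  # shape (1, p)
    t = trunk(np.atleast_2d(y))     # shape (n_y, p)
    return (t * b).sum(axis=1)      # shape (n_y,)

u = np.sin(np.linspace(0, 1, m))            # input function sampled at sensors
out = deeponet(u, np.array([[0.3], [0.7]])) # evaluate G(u) at two locations
print(out.shape)  # (2,)
```

Note how the sensor values and the evaluation points never enter the same subnetwork; that separation is what the review calls the load-bearing design choice.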
Load-bearing premise
That the practical optimization and generalization errors remain small enough with the branch-trunk design and standard training to achieve the high convergence rates promised by the approximation theorem.
What would settle it
A test where increasing the training dataset size for DeepONet on identifying a partial differential equation operator yields only linear or slower error reduction instead of the reported polynomial or exponential rates.
Original abstract
While it is widely known that neural networks are universal approximators of continuous functions, a less known and perhaps more powerful result is that a neural network with a single hidden layer can approximate accurately any nonlinear continuous operator. This universal approximation theorem is suggestive of the potential application of neural networks in learning nonlinear operators from data. However, the theorem guarantees only a small approximation error for a sufficiently large network, and does not consider the important optimization and generalization errors. To realize this theorem in practice, we propose deep operator networks (DeepONets) to learn operators accurately and efficiently from a relatively small dataset. A DeepONet consists of two sub-networks, one for encoding the input function at a fixed number of sensors $x_i, i=1,\dots,m$ (branch net), and another for encoding the locations for the output functions (trunk net). We perform systematic simulations for identifying two types of operators, i.e., dynamic systems and partial differential equations, and demonstrate that DeepONet significantly reduces the generalization error compared to the fully-connected networks. We also derive theoretically the dependence of the approximation error in terms of the number of sensors (where the input function is defined) as well as the input function type, and we verify the theorem with computational results. More importantly, we observe high-order error convergence in our computational tests, namely polynomial rates (from half order to fourth order) and even exponential convergence with respect to the training dataset size.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes deep operator networks (DeepONets) consisting of a branch network to encode the input function at a fixed number of sensors and a trunk network to encode the output function locations. This architecture is used to learn nonlinear operators for dynamic systems and PDEs from data. The authors derive the dependence of the approximation error on the number of sensors and input function type, verify it computationally, and report high-order convergence rates (polynomial to exponential) with respect to the training dataset size, while showing reduced generalization error compared to fully-connected networks.
Significance. If the empirical observations of high-order convergence hold, this work is significant as it provides a practical method to approximate operators with controllable error based on the operator universal approximation theorem. The separation into branch and trunk networks allows efficient learning from small datasets, which could impact fields like scientific machine learning and surrogate modeling for differential equations. The theoretical derivation combined with numerical verification adds strength to the claims.
major comments (2)
- [Section on theoretical derivation] The approximation error bound depending on sensor count m is derived, but the manuscript should explicitly state the assumptions on the input function class (e.g., continuity or Sobolev space) to ensure the bound is rigorous and load-bearing for the convergence claims.
- [Numerical results section] The reported polynomial and exponential convergence rates with training dataset size N are observed in computational tests; however, details on the exact error metric (e.g., L2 norm on held-out data), number of independent runs, and confirmation that rates are not due to overfitting need to be provided to support the high-order convergence claim.
minor comments (2)
- The abstract mentions 'systematic simulations' but the manuscript could benefit from a table summarizing the benchmark problems, sensor counts m, and observed rates for clarity.
- [Introduction] Clarify the distinction between the branch net and trunk net in the notation to avoid ambiguity for readers unfamiliar with the architecture.
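The rate check the second major comment asks for reduces to a log-log regression of test error against training-set size N. The snippet below uses synthetic errors decaying as N^{-2} purely for illustration (the N values and the constant are assumptions, not the paper's data); the point is the fitting procedure.

```python
import numpy as np

# Synthetic held-out errors following err = C * N^{-2}; illustrative only.
# The paper reports rates from half order up to fourth order.
N = np.array([100, 200, 400, 800, 1600])
err = 5.0 * N ** -2.0

# The least-squares slope on log-log axes estimates the polynomial rate:
# log(err) = log(C) - rate * log(N).
slope, _ = np.polyfit(np.log(N), np.log(err), 1)
rate = -slope
print(round(rate, 2))  # 2.0
```

Exponential convergence would instead show a straight line on semi-log axes (log error vs. N), which is the diagnostic that distinguishes the two regimes the abstract claims.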
Simulated Author's Rebuttal
We thank the referee for the positive assessment and constructive suggestions for minor revision. We have addressed both major comments by clarifying the theoretical assumptions and expanding the numerical details in the revised manuscript.
Point-by-point responses
- Referee: [Section on theoretical derivation] The approximation error bound depending on sensor count m is derived, but the manuscript should explicitly state the assumptions on the input function class (e.g., continuity or Sobolev space) to ensure the bound is rigorous and load-bearing for the convergence claims.
  Authors: We agree that an explicit statement of the function class is needed for rigor. The derivation relies on the universal approximation theorem for nonlinear operators, which holds for continuous input functions. In the revised manuscript, we have added a dedicated paragraph in the theoretical derivation section stating that the input functions are assumed to lie in C([0,1]^d) (continuous functions on a compact domain) or the appropriate Sobolev space when higher regularity is invoked, thereby making the error bound with respect to sensor count m fully rigorous under these conditions. revision: yes
- Referee: [Numerical results section] The reported polynomial and exponential convergence rates with training dataset size N are observed in computational tests; however, details on the exact error metric (e.g., L2 norm on held-out data), number of independent runs, and confirmation that rates are not due to overfitting need to be provided to support the high-order convergence claim.
  Authors: We have expanded the numerical results section to include these details. The error metric is the relative L2 norm computed on a fixed held-out test set of 2000 samples drawn independently of the training data. We performed 5 independent runs with different random seeds for network initialization and data shuffling, reporting both mean convergence rates and standard deviations. To rule out overfitting, we added a new figure showing that test error continues to decrease monotonically with N while training error saturates early; the reported high-order rates are therefore measured on unseen data. These clarifications have been incorporated. revision: yes
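The metric protocol the rebuttal describes — relative L2 error on a fixed held-out set, averaged over independent seeds — is easy to pin down in code. The stand-in "predictions" below are just the ground truth plus small noise; everything except the metric definition and the 5-seed / 2000-sample structure from the rebuttal is an assumption for illustration.

```python
import numpy as np

def rel_l2(pred, true):
    """Relative L2 error: ||pred - true|| / ||true|| over a held-out set."""
    return np.linalg.norm(pred - true) / np.linalg.norm(true)

true = np.sin(np.linspace(0, 2 * np.pi, 2000))  # 2000 held-out samples
errs = []
for seed in range(5):  # 5 independent runs, different random seeds
    rng = np.random.default_rng(seed)
    pred = true + 0.01 * rng.standard_normal(true.shape)  # stand-in predictions
    errs.append(rel_l2(pred, true))

# Report mean and standard deviation across runs, as in the rebuttal.
print(f"{np.mean(errs):.4f} +/- {np.std(errs):.4f}")
```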
Circularity Check
No significant circularity in derivation chain
Full rationale
The paper grounds its DeepONet proposal in the external universal approximation theorem for nonlinear operators (Chen & Chen 1995), which is not self-cited. The branch-trunk architecture is defined directly from that theorem without reference to fitted quantities. The claimed theoretical dependence of approximation error on sensor count m is derived from the external UAT and then verified numerically on held-out data for standard benchmark ODEs and PDEs; the reported polynomial-to-exponential convergence rates versus training-set size N are empirical observations, not restatements of training loss or self-defined quantities. No self-definitional steps, no load-bearing self-citations, and no renaming of known results appear in the derivation chain.
Axiom & Free-Parameter Ledger
free parameters (1)
- number of sensors m
axioms (1)
- standard math: Universal approximation theorem for nonlinear continuous operators
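The ledger's single axiom has a precise statement, which the branch-trunk split mirrors term for term. A rendering of the operator universal approximation theorem (Chen & Chen, 1995), with the grouping into branch and trunk factors annotated:

```latex
% Universal approximation theorem for nonlinear operators
% (Chen & Chen, 1995). \sigma is any Tauber--Wiener activation.
\begin{theorem}
Let $\sigma$ be a Tauber--Wiener function, $X$ a Banach space,
$K_1 \subset X$ and $K_2 \subset \mathbb{R}^d$ compact sets,
$V \subset C(K_1)$ compact, and $G \colon V \to C(K_2)$ a nonlinear
continuous operator. Then for any $\varepsilon > 0$ there exist
positive integers $n, p, m$, constants
$c_i^k, \xi_{ij}^k, \theta_i^k, \zeta_k \in \mathbb{R}$,
vectors $w_k \in \mathbb{R}^d$, and sensor points $x_j \in K_1$ such that
\[
\Bigl| \, G(u)(y) - \sum_{k=1}^{p}
\underbrace{\sum_{i=1}^{n} c_i^k \,
  \sigma\Bigl(\sum_{j=1}^{m} \xi_{ij}^k \, u(x_j) + \theta_i^k\Bigr)}_{\text{branch}}
\,\underbrace{\sigma\bigl(w_k \cdot y + \zeta_k\bigr)}_{\text{trunk}}
\, \Bigr| < \varepsilon
\]
for all $u \in V$ and $y \in K_2$.
\end{theorem}
```

The free parameter m in the ledger is exactly the number of sensor points $x_j$ appearing inside the branch factor.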
Forward citations
Cited by 23 Pith papers
- Constraint-Aware Flow Matching: Decision Aligned End-to-End Training for Constrained Sampling
  Constraint-Aware Flow Matching integrates constraint projections into the flow matching training objective to align model dynamics with constrained sampling and reduce distributional shift.
- Approximation of Maximally Monotone Operators: A Graph Convergence Perspective
  Any maximally monotone operator can be approximated in local graph convergence by continuous encoder-decoder networks, with structure-preserving versions that retain maximal monotonicity via resolvent parameterizations.
- Fixed-Point Neural Optimal Transport without Implicit Differentiation
  A single-network fixed-point formulation for neural optimal transport eliminates adversarial min-max optimization and implicit differentiation while enforcing dual feasibility exactly.
- Stable Long-Horizon PDE Forecasting via Latent Structured Spectral Propagators
  A latent Structured Spectral Propagator enables stable autoregressive PDE forecasting by decoupling spatial details from recurrent modal dynamics.
- CATO: Charted Attention for Neural PDE Operators
  CATO learns a continuous latent chart for efficient axial attention on PDE meshes and adds derivative-aware supervision to improve accuracy and reduce oversmoothing on general geometries.
- Physics-Informed Neural PDE Solvers via Spatio-Temporal MeanFlow
  Spatio-Temporal MeanFlow adapts MeanFlow to PDEs by replacing the generative velocity field with the physical operator and extending the integral constraint to the spatio-temporal domain, yielding a unified solver for...
- Geometry-Aware Neural Optimizer for Shape Optimization and Inversion
  GANO unifies shape encoding with auto-decoders, denoising-stabilized latent optimization, and geometry-injected surrogates into an end-to-end differentiable pipeline for PDE-governed shape optimization and inversion.
- Geometry-Aware Neural Optimizer for Shape Optimization and Inversion
  GANO is an end-to-end differentiable latent-space optimizer that unifies shape encoding, surrogate prediction, and controllable geometry updates for PDE-governed shape optimization and inversion.
- AI models of unstable flow exhibit hallucination
  AI models of viscous fingering exhibit hallucinations from spectral bias; DeepFingers combines FNO and DeepONet with time-contrast conditioning to predict accurate finger dynamics while preserving mixing metrics.
- DeepRitzSplit Neural Operator for Phase-Field Models via Energy Splitting
  A DeepRitzSplit neural operator trained on energy-split variational forms enforces dissipation in phase-field models and outperforms data-driven training in generalization while running faster than Fourier spectral me...
- DiLO: Decoupling Generative Priors and Neural Operators via Diffusion Latent Optimization for Inverse Problems
  DiLO turns diffusion sampling into deterministic latent optimization to satisfy the manifold consistency requirement for neural operators in inverse problem solving.
- Compositional Neural Operators for Multi-Dimensional Fluid Dynamics
  Compositional Neural Operators decompose multi-dimensional fluid PDEs into a library of pretrained elementary physics blocks assembled via an aggregator that minimizes data and physics residuals.
- Don't Fix the Basis -- Learn It: Spectral Representation with Adaptive Basis Learning for PDEs
  ABLE learns a spatially adaptive Parseval frame from data via an ancillary density to replace fixed bases in spectral neural operators for PDEs.
- PnP-Corrector: A Universal Correction Framework for Coupled Spatiotemporal Forecasting
  PnP-Corrector decouples physics simulation from error correction to counter reciprocal error amplification in coupled spatiotemporal forecasting, cutting error by 29% in a 300-day ocean-atmosphere test.
- PnP-Corrector: A Universal Correction Framework for Coupled Spatiotemporal Forecasting
  PnP-Corrector decouples physics simulation from error correction via a plug-and-play agent, cutting error by 29% in 300-day global ocean-atmosphere forecasts.
- Continuity Laws for Sequential Models
  S4 models exhibit stable time-continuity unlike sensitive S6 models, with task continuity predicting performance and enabling temporal subsampling for better efficiency.
- Hierarchical Multi-Fidelity Learning for Predicting Three-Dimensional Flame Wrinkling and Turbulent Burning Velocity
  MuFiNNs integrates sparse experimental measurements with structured low-fidelity models via hierarchical construction and nonlinear correction to predict 3D flame wrinkling dynamics and turbulent mass burning velocity...
- Geometry-Aware Neural Optimizer for Shape Optimization and Inversion
  GANO unifies shape encoding, field prediction, and latent optimization with denoising for stable, controllable updates in PDE shape problems, reporting SOTA accuracy and up to 55.9% lift-to-drag gains on benchmarks.
- Late Fusion Neural Operators for Extrapolation Across Parameter Space in Partial Differential Equations
  Late Fusion Neural Operators disentangle state and parameter learning to outperform FNO and CAPE-FNO on advection, Burgers, and reaction-diffusion PDEs with 72% average RMSE reduction in and out of domain.
- Hyperfastrl: Hypernetwork-based reinforcement learning for unified control of parametric chaotic PDEs
  Hypernetworks map a forcing parameter directly to policy weights in an RL framework, enabling unified stabilization of the Kuramoto-Sivashinsky equation across regimes with KAN architectures showing strongest extrapolation.
- Accelerated and data-efficient flow prediction in stirred tanks via physics-informed learning
  Physics-informed constraints on implicit neural representations yield more accurate and stable predictions of stirred-tank flows than purely data-driven models when training data is scarce, with diminishing returns at...
- RETO: A Rotary-Enhanced Transformer Operator for High-Fidelity Prediction of Automotive Aerodynamics
  RETO achieves relative L2 errors of 0.063 on ShapeNet and 0.089/0.097 on DrivAerML surface pressure/velocity, outperforming Transolver and other baselines.
- Toward Artificial Intelligence Enabled Earth System Coupling
  AI methods can strengthen cross-domain interactions and support more coherent multi-component representations in Earth system models.
Reference graph
Works this paper leans on
- [1] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems, pages 161–168, 2008.
- [2] S. L. Brunton, J. L. Proctor, and J. N. Kutz. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proceedings of the National Academy of Sciences, 113(15):3932–3937, 2016.
- [3] T. Chen and H. Chen. Approximations of continuous functionals by neural networks with application to dynamic systems. IEEE Transactions on Neural Networks, 4(6):910–918, 1993.
- [4] T. Chen and H. Chen. Approximation capability to functions of several variables, nonlinear functionals, and operators by radial basis function neural networks. IEEE Transactions on Neural Networks, 6(4):904–910, 1995.
- [5] T. Chen and H. Chen. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Transactions on Neural Networks, 6(4):911–917, 1995.
- [6] T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud. Neural ordinary differential equations. In Advances in Neural Information Processing Systems, pages 6571–6583, 2018.
- [7] G. Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303–314, 1989.
- [8] V. Dumoulin, E. Perez, N. Schucher, F. Strub, H. de Vries, A. Courville, and Y. Bengio. Feature-wise transformations. Distill, 2018. https://distill.pub/2018/feature-wise-transformations
- [9] N. B. Erichson, M. Muehlebach, and M. W. Mahoney. Physics-informed autoencoders for Lyapunov-stable fluid flow prediction. arXiv preprint arXiv:1905.10866, 2019.
- [10]
- [11] K. Hornik, M. Stinchcombe, and H. White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.
- [12] J. Jia and A. R. Benson. Neural jump stochastic differential equations. arXiv preprint arXiv:1905.10403, 2019.
- [13]
- [14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
- [15]
- [16]
- [17]
- [18] H. N. Mhaskar and N. Hahm. Neural networks for functional approximation and system identification. Neural Computation, 9(1):143–159, 1997.
- [19] M. Mitzenmacher and E. Upfal. Probability and computing: randomization and probabilistic techniques in algorithms and data analysis. Cambridge University Press, 2017.
- [20] G. Neofotistos, M. Mattheakis, G. D. Barmparis, J. Hizanidis, G. P. Tsironis, and E. Kaxiras. Machine learning with observers predicts complex spatiotemporal behavior. arXiv preprint arXiv:1807.10758, 2018.
- [21] G. Pang, L. Lu, and G. E. Karniadakis. fPINNs: Fractional physics-informed neural networks. SIAM Journal on Scientific Computing, 41(4):A2603–A2626, 2019.
- [22] J. C. Patra, R. N. Pal, B. Chatterji, and G. Panda. Identification of nonlinear dynamic systems using functional link artificial neural networks. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 29(2):254–262, 1999.
- [23] T. Qin, K. Wu, and D. Xiu. Data-driven governing equations approximation using deep neural networks. Journal of Computational Physics, 2019.
- [24] M. Raissi, P. Perdikaris, and G. E. Karniadakis. Multistep neural networks for data-driven discovery of nonlinear dynamical systems. arXiv preprint arXiv:1801.01236, 2018.
- [25] F. Rossi and B. Conan-Guez. Functional multi-layer perceptron: a non-linear tool for functional data analysis. Neural Networks, 18(1):45–60, 2005.
- [26] S. H. Rudy, S. L. Brunton, J. L. Proctor, and J. N. Kutz. Data-driven discovery of partial differential equations. Science Advances, 3(4):e1602614, 2017.
- [27]
- [28]
- [29] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.
- [30] N. Winovich, K. Ramani, and G. Lin. ConvPDE-UQ: Convolutional neural networks with quantified uncertainty for heterogeneous elliptic partial differential equations on varied domains. Journal of Computational Physics, 2019.
- [31]
- [32] Z. Zhang and G. E. Karniadakis. Numerical methods for stochastic partial differential equations with white noise. Springer, 2017.
- [33] H. Zhao and J. Zhang. Nonlinear dynamic system identification using pipelined functional link artificial recurrent neural network. Neurocomputing, 72(13-15):3046–3054, 2009.
- [34] Y. Zhu, N. Zabaras, P.-S. Koutsourelakis, and P. Perdikaris. Physics-constrained deep learning for high-dimensional surrogate modeling and uncertainty quantification without labeled data. Journal of Computational Physics, 394:56–81, 2019.
discussion (0)