pith. machine review for the scientific record.

arxiv: 2605.08436 · v1 · submitted 2026-05-08 · 💻 cs.LG · cs.AI · physics.comp-ph

Recognition: 2 theorem links

· Lean Theorem

A meshfree exterior calculus for generalizable and data-efficient learning of physics from point clouds

Authors on Pith · no claims yet

Pith reviewed 2026-05-12 02:04 UTC · model grok-4.3

classification 💻 cs.LG · cs.AI · physics.comp-ph
keywords meshfree exterior calculus · point clouds · structure-preserving discretization · physics surrogates · generalization · Schur complement · discrete conservation · neural operators

The pith

MEEC equips arbitrary point clouds with exact discrete conservation via one Schur solve so learned physics transfers across geometries and parameters.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops a meshfree exterior calculus that equips an epsilon-ball graph on point clouds with virtual node and edge measures through a single sparse Schur complement solve. This produces a discrete complex that obeys conservation laws exactly while remaining differentiable in the point positions, allowing a neural network to learn a shared flux law in a rotation-invariant frame. The authors prove an error bound whose discretization term is independent of geometry, which directly accounts for the observed transfer from single-example training to new shapes, boundaries, and parameters. A sympathetic reader would care because conventional structure-preserving methods require costly mesh generation for each new domain, while data-hungry neural operators typically fail to generalize without large numbers of examples.

Core claim

MEEC equips an ε-ball graph with virtual node and edge measures via a single sparse Schur complement solve; the resulting complex satisfies discrete conservation exactly, is end-to-end differentiable in the point positions, and exposes a direct geometry-to-physics link without the mesh-generation step required by conventional structure-preserving discretizations. MEEC-Net learns unknown physics as a shared edge-wise flux law in an SO(d)-invariant local frame, so the same kernel produces compatible fluxes on any point cloud whose features lie in the training range. A solution-error bound splits into discretization and kernel-approximation terms and is independent of problem geometry.
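The exact-conservation property can be illustrated in miniature: for a signed node-edge incidence matrix on an ε-ball graph, every row sums to zero, so the node-wise divergence of any edge flux cancels globally. A minimal sketch under assumed choices (point count, radius, and random fluxes are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = rng.random((50, 2))                       # toy point cloud in the unit square
eps = 0.25                                    # illustrative ball radius
D = squareform(pdist(X))
edges = [(i, j) for i in range(len(X)) for j in range(i + 1, len(X)) if D[i, j] < eps]

# Signed incidence matrix B (edges x nodes): row for edge (i, j) has -1 at i, +1 at j.
B = np.zeros((len(edges), len(X)))
for e, (i, j) in enumerate(edges):
    B[e, i], B[e, j] = -1.0, 1.0

flux = rng.standard_normal(len(edges))        # arbitrary edge fluxes
div = B.T @ flux                              # discrete divergence at each node
assert abs(div.sum()) < 1e-10                 # global flux balance: exact, geometry-free
```

The identity holds for any point cloud and any flux vector, which is the algebraic core of the claim that conservation is exact independent of geometry.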

What carries the argument

The ε-ball graph equipped with virtual node and edge measures obtained from a single sparse Schur complement solve, forming a discrete exterior complex that satisfies exact conservation.
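The "single sparse Schur complement solve" follows the standard pattern S = C − BᵀA⁻¹B for a 2×2 block system. The sketch below shows only that generic pattern with placeholder blocks; the paper's actual coupling between virtual node and edge measures is not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

rng = np.random.default_rng(1)
n, m = 40, 10                                  # placeholder block sizes
R = sp.random(n, n, density=0.1, random_state=1)
A = (R @ R.T + 10 * sp.eye(n)).tocsc()         # SPD "node" block, well conditioned
Bm = sp.random(n, m, density=0.2, random_state=2).tocsc()
C = 10 * np.eye(m)                             # SPD coupling block

lu = splu(A)                                   # the single sparse factorization
S = C - Bm.T @ lu.solve(Bm.toarray())          # Schur complement S = C - B^T A^{-1} B

# S inherits positive definiteness from the full block system, so quantities
# obtained from it are well defined under this kind of conditioning assumption.
assert np.all(np.linalg.eigvalsh(S) > 0)
```

This is also where the referee's conditioning concern bites: if A were near-singular (e.g. a disconnected or degenerate graph), the factorization and the resulting measures would degrade.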

If this is right

  • The same learned kernel produces compatible fluxes on any point cloud whose features lie in the training range.
  • Single-solution training transfers to unseen geometries, boundary conditions, and physical parameters.
  • The solution-error bound splits into discretization and kernel-approximation terms independent of problem geometry.
  • On five canonical PDE benchmarks MEEC-Net achieves 1-2 orders of magnitude lower out-of-distribution error than baseline neural-operator approaches.
  • On the SimJEB structural-bracket benchmark it achieves competitive error while using substantially fewer training geometries.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The approach could eliminate mesh generation entirely for simulations on irregular or deforming domains where conventional meshing fails.
  • Embedding exact discrete conservation directly into the network architecture may improve long-term stability when the model is used inside larger time-stepping loops.
  • The SO(d)-invariant local frame could allow the same trained model to handle data from both two- and three-dimensional sources without retraining.
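The SO(d)-invariance claim can be made concrete: any feature built from edge lengths, scalar endpoint states, and edge-projected vectors is unchanged by a global rotation. A toy sketch (the feature set is illustrative; the paper's exact MEEC-Net features are not specified here):

```python
import numpy as np

def edge_features(xi, xj, ui, uj, v):
    """Rotation-invariant edge scalars: length, endpoint states, edge-aligned velocity."""
    dx = xj - xi
    r = np.linalg.norm(dx)        # edge length: invariant under any rotation Q
    b = v @ dx / r                # velocity projected onto the edge tangent
    return np.array([r, ui, uj, b])

rng = np.random.default_rng(2)
xi, xj, v = rng.random(3), rng.random(3), rng.random(3)
ui, uj = 0.7, -0.2

# Rotating positions and the velocity field together leaves the features unchanged.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
f0 = edge_features(xi, xj, ui, uj, v)
f1 = edge_features(Q @ xi, Q @ xj, ui, uj, Q @ v)
assert np.allclose(f0, f1)
```

Because (Qv)·(Qdx) = v·dx for orthogonal Q, a flux kernel fed only such scalars cannot distinguish rotated copies of the same local configuration.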

Load-bearing premise

The ε-ball graph plus single Schur complement solve produces a complex that satisfies discrete conservation exactly and remains end-to-end differentiable for arbitrary point clouds whose features lie in the training range.

What would settle it

Apply the trained model to a point cloud whose local geometric features lie outside the training distribution and check whether discrete conservation is violated or the observed error exceeds the geometry-independent bound.
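A first step of this test can be sketched as a feature-coverage check: flag edges whose local features fall outside the box observed during training. The feature dimensions and ranges below are hypothetical placeholders, not taken from the paper:

```python
import numpy as np

def out_of_range_fraction(features, lo, hi):
    """Fraction of edges with any feature outside the training box [lo, hi]."""
    outside = (features < lo) | (features > hi)
    return outside.any(axis=1).mean()

# Hypothetical 2-D edge features, e.g. (edge length, projected state difference).
train = np.random.default_rng(3).random((1000, 2))
lo, hi = train.min(axis=0), train.max(axis=0)

test = np.random.default_rng(4).random((200, 2)) * 1.5   # scaled cloud: partly OOD
frac = out_of_range_fraction(test, lo, hi)
assert 0.0 < frac < 1.0   # some edges leave the training range, some do not
```

Edges flagged here are exactly the ones on which the geometry-independent bound no longer covers the kernel-approximation term, so they are where violations should first appear.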

Figures

Figures reproduced from arXiv: 2605.08436 by Benjamin D. Shaffer, Brooks Kinch, M. Ani Hsieh, Nathaniel Trask.

Figure 1
Figure 1. Mesh-based simulation is sensitive to mesh quality: on a chevron mesh with skew θ, FEM fails to converge at θ > 40° while MEEC remains accurate. Meshfree stencils (red) trade compact stencils for robustness. Direct neural operators learn global solution maps from examples; when the shape, boundary conditions, resolution, or physical parameters shift, they must extrapolate how the solution field changes …
Figure 2
Figure 2. Paper overview. Left: MEEC attaches virtual volumes and areas to an ε-ball graph via a quadratic program, obtaining an end-to-end differentiable, convergent discretization of conservation laws natively on a point cloud. Center: MEEC-Net learns an SO(d)-invariant description of physics mapping state (u_i, u_j) to flux ∫_{e_ij} σ · dl, able to generalize across edge lengths and orientations unobserved in traini…
Figure 3
Figure 3. Convergence under mesh refinement for the Poisson equation. Both our meshless DEC and conventional FEM exhibit O(h²) solution convergence, with comparable error constants. Given X sampling Ω, we construct edges defining an ε-ball graph, E = {(i, j) : i < j, ‖x_i − x_j‖ < ε}. For oriented edge e = (i, j) we define displacement δx_e = x_j − x_i ∈ ℝ^d, length r_e = ‖δx_e‖, midpoint x̄_e = ½(x_i + x_j), unit tang…
Figure 4
Figure 4. Learned flux field for advection-diffusion recovered from a single training solution, where b is the edge-aligned velocity input. The plots are shown along slices corresponding to b = 0 and ∇u = 0. The model qualitatively identifies the advective and diffusive contributions across the edge-feature space, enabling generalization to unseen geometries, boundary conditions, and advection velocities without …
Figure 5
Figure 5. Single-shot generalization for advection-diffusion. Trained from one data point, the learned model generalizes across variations in geometry, physical parameters (velocity), and boundary conditions. While direct prediction baselines may fit the single training sample, they are unable to extrapolate meaningfully. From left to right, Δθ is the variation from the training velocity direction which results in v…
Figure 6
Figure 6. Data-efficient training. Our approach achieves substantially improved data efficiency, shown here for advection-diffusion (left) and elasticity (right). MEEC-Net achieves 1–2 orders of magnitude lower error than baselines across all training set sizes, and demonstrates single-shot recovery on advection-diffusion.
Figure 7
Figure 7. Convergence under geometry extrapolation for well-trained flux. Solution error on two unseen geometries for a pre-trained flux map, demonstrating O(h²) convergence (dashed line) under refinement. Consistent discretization. To evaluate discretization error under controlled model error, we consider a problem with a known governing transport law. The flux kernel is pretrained on synthetic feature samples…
Figure 8
Figure 8. Application to complex 3D geometries. Our approach provides improved performance for the engineering-relevant SimJEB displacement prediction task with much less data than is typically required. We present MEEC and a consistent learnable flux model (MEEC-Net) to learn generalizable physics natively on unstructured point cloud data. We demonstrate single-shot recovery and generalization over geometry, bound…
Figure 9
Figure 9. The model inputs are purely in an SO(d)-invariant edge-projected space, which results in invariance of the solutions obtained by solving the model.
Figure 10
Figure 10. OOD data efficiency for the advection-diffusion experiment, corresponding to Figure 6.
Figure 11
Figure 11. We show the edge feature distributions over a single sample ID and OOD problem for the …
Figure 12
Figure 12. Training and in-distribution geometries (top row) and out-of-distribution test geometries …
Figure 13
Figure 13. Qualitative prediction comparison on a representative SimJEB validation geometry from …
read the original abstract

We introduce a meshfree exterior calculus (MEEC) for learning structure-preserving descriptions of physics on point clouds, and use it to build MEEC-Net, a data-efficient surrogate that transfers across resolutions, geometries, and physical parameters. MEEC equips an $\varepsilon$-ball graph with virtual node and edge measures via a single sparse Schur complement solve; the resulting complex satisfies discrete conservation exactly, is end-to-end differentiable in the point positions, and exposes a direct geometry-to-physics link without the mesh-generation step required by conventional structure-preserving discretizations. MEEC-Net learns unknown physics as a shared edge-wise flux law in an SO($d$)-invariant local frame, so the same kernel produces compatible fluxes on any point cloud whose features lie in the training range. We prove a solution-error bound that splits into discretization and kernel-approximation terms which is independent of problem geometry, explaining the observed transfer from very few examples. We show that single-solution training transfers to unseen geometries, boundary conditions, and physical parameters. On five canonical PDE benchmarks MEEC-Net achieves 1-2 orders of magnitude lower out-of-distribution error than baseline neural-operator approaches. On the SimJEB structural-bracket benchmark it achieves competitive error while using substantially fewer training geometries.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript introduces a meshfree exterior calculus (MEEC) that augments an ε-ball graph on point clouds with virtual node and edge measures obtained from a single sparse Schur complement solve. This construction yields a discrete complex that satisfies exact conservation (d²=0) and is end-to-end differentiable with respect to point positions. MEEC-Net then learns unknown physics as a shared, SO(d)-invariant edge-wise flux law, enabling transfer across resolutions, geometries, and parameters. The paper proves a solution-error bound that decomposes into geometry-independent discretization and kernel-approximation terms, and reports 1–2 orders of magnitude lower out-of-distribution error than neural-operator baselines on five PDE benchmarks while using substantially fewer training geometries; competitive results are also shown on the SimJEB structural-bracket benchmark.

Significance. If the claimed geometry-independent error bound and the robustness of the Schur-based complex hold under the stated conditions, the work would constitute a meaningful advance at the intersection of discrete exterior calculus and neural operators. The exact discrete conservation, differentiability in point positions, and direct geometry-to-physics link without meshing are notable technical strengths. The provision of a theoretical bound that explains observed transfer from very few examples adds explanatory power beyond empirical gains.

major comments (2)
  1. [Abstract] Abstract: the solution-error bound is asserted to split into discretization and kernel-approximation terms that are independent of problem geometry. This independence presupposes that the single Schur complement solve on an arbitrary ε-ball graph always produces a valid chain complex satisfying d²=0 exactly and remaining numerically stable. No regularity conditions on point-cloud density, minimum degree, or quasi-uniformity are referenced to preclude disconnected components or singular Schur complements, which would violate the exact-conservation premise and invalidate the geometry-independent claim.
  2. [Abstract] Abstract and theoretical development: the end-to-end differentiability and exact conservation are stated to hold for any point cloud whose local features lie in the training range. For globally non-uniform or sparse distributions the ε-ball graph can produce ill-conditioned incidence matrices or disconnected subgraphs; the manuscript should supply either a proof that the Schur construction remains well-defined or an explicit statement of the sampling assumptions under which the bound and differentiability are guaranteed.
minor comments (2)
  1. The abstract refers to “five canonical PDE benchmarks” without naming them; an explicit list (with references to the corresponding tables or figures) would improve reproducibility.
  2. Notation for the virtual measures obtained from the Schur complement and for the local SO(d)-invariant frame should be introduced with a single, self-contained equation block early in the methods section.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the careful identification of implicit assumptions underlying the geometry-independent error bound and the claims of exact conservation and differentiability. We agree that these claims require explicit regularity conditions on the point cloud to guarantee a valid chain complex. We will revise the abstract, Section 3, and the proof of the error bound to state the necessary sampling assumptions. Responses to each major comment follow.

read point-by-point responses
  1. Referee: The solution-error bound is asserted to split into discretization and kernel-approximation terms that are independent of problem geometry. This independence presupposes that the single Schur complement solve on an arbitrary ε-ball graph always produces a valid chain complex satisfying d²=0 exactly and remaining numerically stable. No regularity conditions on point-cloud density, minimum degree, or quasi-uniformity are referenced to preclude disconnected components or singular Schur complements.

    Authors: We agree that the geometry-independence claim presupposes a valid complex. In the construction, d²=0 holds exactly by the algebraic properties of the Schur complement applied to the incidence matrix (the virtual measures are chosen to enforce the chain complex relation). Numerical stability and connectedness, however, do require regularity. We will add a new paragraph in Section 3 stating the assumptions: the ε-ball graph must be connected with minimum degree at least 1, and the point cloud must be quasi-uniform (local density bounded between positive constants c1 and c2 independent of the global geometry). Under these conditions the Schur complement is positive definite and the discretization-error term in the bound is independent of geometry. The proof will be annotated accordingly. This is a clarification rather than a change to the core result. revision: yes

  2. Referee: The end-to-end differentiability and exact conservation are stated to hold for any point cloud whose local features lie in the training range. For globally non-uniform or sparse distributions the ε-ball graph can produce ill-conditioned incidence matrices or disconnected subgraphs; the manuscript should supply either a proof that the Schur construction remains well-defined or an explicit statement of the sampling assumptions under which the bound and differentiability are guaranteed.

    Authors: Exact conservation (d²=0) is algebraic and holds for any ε-ball graph on which the Schur complement is defined, because the virtual node/edge measures are constructed to lie in the kernel of the coboundary operator. Differentiability with respect to point positions follows from the implicit-function theorem applied to the sparse linear solve, which is differentiable wherever the matrix is invertible. We acknowledge that global non-uniformity or sparsity can produce disconnected components or ill-conditioning. We will revise the abstract and Section 3 to list the same quasi-uniformity and connectivity assumptions above, and add a short remark that, in practice, disconnected graphs can be handled by a connectivity check or adaptive ε before the Schur solve. This makes the scope of the differentiability and bound claims explicit without altering the technical development. revision: yes
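The "connectivity check or adaptive ε before the Schur solve" that the rebuttal mentions could look like the following sketch, which grows ε until the ball graph has one connected component; the starting radius and growth factor are illustrative choices, not from the paper:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def adaptive_epsilon(X, eps0=0.05, grow=1.25):
    """Grow eps until the eps-ball graph on X has a single connected component."""
    D = squareform(pdist(X))
    eps = eps0
    while True:
        A = csr_matrix((D < eps) & (D > 0))    # adjacency of the eps-ball graph
        n_comp, _ = connected_components(A, directed=False)
        if n_comp == 1:                        # safe to proceed to the Schur solve
            return eps
        eps *= grow                            # otherwise enlarge the radius

X = np.random.default_rng(5).random((60, 2))
eps = adaptive_epsilon(X)
assert 0 < eps < 1.5                           # bounded by the unit-square diameter
```

This addresses only connectivity; the quasi-uniformity assumption (local density bounded between constants) would need a separate check, e.g. on nearest-neighbor distance ratios.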

Circularity Check

0 steps flagged

No circularity: construction and bound are presented as direct mathematical results from the Schur complement on the graph.

full rationale

The paper defines MEEC via a single sparse Schur complement solve on the ε-ball graph, asserts that the resulting complex satisfies d²=0 exactly and is differentiable by construction, and states that a solution-error bound is proved with terms independent of geometry. No quoted step reduces a claimed prediction or uniqueness result to a fitted parameter or self-citation whose content is the target claim itself. The transferability across geometries is supported by the explicit construction plus empirical benchmarks rather than by re-labeling inputs as outputs. The derivation chain therefore remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

1 free parameters · 1 axioms · 0 invented entities

The approach rests on the existence of a well-conditioned sparse Schur complement for any epsilon-ball graph on point clouds and on the assumption that the resulting discrete complex exactly encodes conservation laws without additional fitting.

free parameters (1)
  • epsilon for ball graph
    Radius defining neighbor connectivity; must be chosen so the graph remains connected and the Schur complement is stable.
axioms (1)
  • domain assumption The Schur complement of the graph Laplacian or incidence matrix yields exact discrete conservation on the virtual complex.
    Invoked to guarantee structure preservation independent of point distribution.

pith-pipeline@v0.9.0 · 5539 in / 1395 out tokens · 39907 ms · 2026-05-12T02:04:20.638891+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

71 extracted references · 71 canonical work pages · 2 internal anchors

  1. [1]

    Learning nonlinear operators via deeponet based on the universal approximation theorem of operators

    Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via deeponet based on the universal approximation theorem of operators. Nature machine intelligence, 3(3):218–229, 2021

  2. [2]

    Neural Operator: Graph Kernel Network for Partial Differential Equations

    Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Graph kernel network for partial differential equations.arXiv preprint arXiv:2003.03485, 2020

  3. [3]

    Pointnet: Deep learning on point sets for 3d classification and segmentation

    Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. InProceedings of the IEEE conference on computer vision and pattern recognition, pages 652–660, 2017

  4. [4]

    A point-cloud deep learning framework for prediction of fluid flow fields on irregular geometries

    Ali Kashefi, Davis Rempe, and Leonidas J Guibas. A point-cloud deep learning framework for prediction of fluid flow fields on irregular geometries.Physics of Fluids, 33(2), 2021

  5. [5]

    3d gaussian splatting for real-time radiance field rendering

    Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, George Drettakis, et al. 3d gaussian splatting for real-time radiance field rendering.ACM Trans. Graph., 42(4):139–1, 2023

  6. [6]

    Meshless methods: an overview and recent developments

    Ted Belytschko, Yury Krongauz, Daniel Organ, Mark Fleming, and Petr Krysl. Meshless methods: an overview and recent developments.Computer methods in applied mechanics and engineering, 139(1-4):3–47, 1996

  7. [7]

    Meshfree methods: moving beyond the finite element method

    Gui-Rong Liu.Meshfree methods: moving beyond the finite element method. CRC press, 2009

  8. [8]

    Dart system analysis

    Paul T Boggs, Alan Althsuler, Alex R Larzelere, Edward J Walsh, Ruuobert L Clay, and Michael F Hardwick. Dart system analysis. Technical report, Sandia National Laboratories, 2005

  9. [9]

    Discrete exterior calculus

    Anil Nirmal Hirani.Discrete exterior calculus. California Institute of Technology, 2003

  10. [10]

    Finite element exterior calculus

    Douglas N Arnold.Finite element exterior calculus. SIAM, 2018

  11. [11]

    Enforcing exact physics in scientific machine learning: a data-driven exterior calculus on graphs

    Nathaniel Trask, Andy Huang, and Xiaozhe Hu. Enforcing exact physics in scientific machine learning: a data-driven exterior calculus on graphs.Journal of Computational Physics, 456: 110969, 2022

  12. [12]

    Data-driven whitney forms for structure-preserving control volume analysis

    Jonas A Actor, Xiaozhe Hu, Andy Huang, Scott A Roberts, and Nathaniel Trask. Data-driven whitney forms for structure-preserving control volume analysis.Journal of Computational Physics, 496:112520, 2024

  13. [13]

    Structure-preserving digital twins via conditional neural whitney forms

    Brooks Kinch, Benjamin Shaffer, Elizabeth Armstrong, Michael Meehan, John Hewson, and Nathaniel Trask. Structure-preserving digital twins via conditional neural whitney forms.arXiv preprint arXiv:2508.06981, 2025

  14. [14]

    Structure-preserving learning improves geometry generalization in neural pdes

    Benjamin D Shaffer, Shawn Koohy, Brooks Kinch, M Ani Hsieh, and Nathaniel Trask. Structure-preserving learning improves geometry generalization in neural pdes.arXiv preprint arXiv:2602.02788, 2026

  15. [15]

    Physics-informed machine learning

    George Em Karniadakis, Ioannis G Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physics-informed machine learning.Nature Reviews Physics, 3(6):422–440, 2021

  16. [16]

    Scientific machine learning for closure models in multiscale problems: A review

    Benjamin Sanderse, Panos Stinis, Romit Maulik, and Shady E Ahmed. Scientific machine learning for closure models in multiscale problems: A review. arXiv preprint arXiv:2403.02913, 2024

  17. [17]

    Physics Informed Deep Learning (Part I): Data-driven Solutions of Nonlinear Partial Differential Equations

    Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part i): Data-driven solutions of nonlinear partial differential equations. arXiv preprint arXiv:1711.10561, 2017

  18. [18]

    Neural operator: Learning maps between function spaces with applications to pdes

    Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Neural operator: Learning maps between function spaces with applications to pdes.Journal of Machine Learning Research, 24(89):1–97, 2023

  19. [19]

    Learning mesh-based simulation with graph networks

    Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter Battaglia. Learning mesh-based simulation with graph networks. In International Conference on Learning Representations, 2020

  20. [20]

    Message passing neural pde solvers

    Johannes Brandstetter, Daniel Worrall, and Max Welling. Message passing neural pde solvers. arXiv preprint arXiv:2202.03376, 2022

  21. [21]

    Phympgn: Physics-encoded message passing graph network for spatiotemporal pde systems

    Bocheng Zeng, Qi Wang, Mengtao Yan, Yang Liu, Ruizhi Chengze, Yi Zhang, Hongsheng Liu, Zidong Wang, and Hao Sun. Phympgn: Physics-encoded message passing graph network for spatiotemporal pde systems.arXiv preprint arXiv:2410.01337, 2024

  22. [22]

    Continuum attention for neural operators

    Edoardo Calvello, Nikola B Kovachki, Matthew E Levine, and Andrew M Stuart. Continuum attention for neural operators.Journal of Machine Learning Research, 26(300):1–52, 2025

  23. [23]

    Transolver: A fast transformer solver for pdes on general geometries

    Haixu Wu, Huakun Luo, Haowen Wang, Jianmin Wang, and Mingsheng Long. Transolver: A fast transformer solver for pdes on general geometries.arXiv preprint arXiv:2402.02366, 2024

  24. [24]

    Universal physics transformers: A framework for efficiently scaling neural operators

    Benedikt Alkin, Andreas Fürst, Simon Schmid, Lukas Gruber, Markus Holzleitner, and Johannes Brandstetter. Universal physics transformers: A framework for efficiently scaling neural operators.Advances in Neural Information Processing Systems, 37:25152–25194, 2024

  25. [25]

    Gnot: A general neural operator transformer for operator learning

    Zhongkai Hao, Zhengyi Wang, Hang Su, Chengyang Ying, Yinpeng Dong, Songming Liu, Ze Cheng, Jian Song, and Jun Zhu. Gnot: A general neural operator transformer for operator learning. InInternational Conference on Machine Learning, pages 12556–12569. PMLR, 2023

  26. [26]

    Geometry-informed neural operator transformer for partial differential equations on arbitrary geometries

    Qibang Liu, Weiheng Zhong, Hadi Meidani, Diab Abueidda, Seid Koric, and Philippe Geubelle. Geometry-informed neural operator transformer for partial differential equations on arbitrary geometries.Computer Methods in Applied Mechanics and Engineering, 451:118668, 2026

  27. [27]

    Latent neural operator for solving forward and inverse pde problems

    Tian Wang and Chuang Wang. Latent neural operator for solving forward and inverse pde problems.Advances in Neural Information Processing Systems, 37:33085–33107, 2024

  28. [28]

    Geometry-informed neural operator for large-scale 3d pdes

    Zongyi Li, Nikola Kovachki, Chris Choy, Boyi Li, Jean Kossaifi, Shourya Otta, Mohammad Amin Nabian, Maximilian Stadler, Christian Hundt, Kamyar Azizzadenesheli, et al. Geometry-informed neural operator for large-scale 3d pdes. Advances in Neural Information Processing Systems, 36:35836–35854, 2023

  29. [29]

    Geometry-aware operator transformer as an efficient and accurate neural surrogate for PDEs on arbitrary domains

    Shizheng Wen, Arsh Kumbhat, Levi Lingsch, Sepehr Mousavi, Praveen Chandrashekar, and Siddhartha Mishra. Geometry aware operator transformer as an efficient and accurate neural surrogate for pdes on arbitrary domains.arXiv preprint arXiv:2505.18781, 2025

  30. [30]

    Mesh-informed neural networks for operator learning in finite element spaces

    Nicola Rares Franco, Andrea Manzoni, and Paolo Zunino. Mesh-informed neural networks for operator learning in finite element spaces.Journal of Scientific Computing, 97(2):35, 2023

  31. [31]

    A finite element-based physics-informed operator learning framework for spatiotemporal partial differential equations on arbitrary domains

    Yusuke Yamazaki, Ali Harandi, Mayu Muramatsu, Alexandre Viardin, Markus Apel, Tim Brepols, Stefanie Reese, and Shahed Rezaei. A finite element-based physics-informed operator learning framework for spatiotemporal partial differential equations on arbitrary domains. Engineering with Computers, 41(1):1–29, 2025

  32. [32]

    Neuralcfd: Deep learning on high-fidelity automotive aerodynamics simulations

    Maurits Bleeker, Matthias Dorfer, Tobias Kronlachner, Reinhard Sonnleitner, Benedikt Alkin, and Johannes Brandstetter. Neuralcfd: Deep learning on high-fidelity automotive aerodynamics simulations.arXiv e-prints, pages arXiv–2502, 2025

  33. [33]

    Aroma: Preserving spatial structure for latent pde modeling with local neural fields

    Louis Serrano, Thomas X Wang, Etienne Le Naour, Jean-Noël Vittaut, and Patrick Gallinari. Aroma: Preserving spatial structure for latent pde modeling with local neural fields. Advances in Neural Information Processing Systems, 37:13489–13521, 2024

  34. [34]

    Ab-upt: Scaling neural cfd surrogates for high-fidelity automotive aerodynamics simulations via anchored-branched universal physics transformers

    Benedikt Alkin, Maurits Bleeker, Richard Kurle, Tobias Kronlachner, Reinhard Sonnleitner, Matthias Dorfer, and Johannes Brandstetter. Ab-upt: Scaling neural cfd surrogates for high-fidelity automotive aerodynamics simulations via anchored-branched universal physics transformers. arXiv preprint arXiv:2502.09692, 2025

  35. [35]

    Optnet: Differentiable optimization as a layer in neural networks

    Brandon Amos and J Zico Kolter. Optnet: Differentiable optimization as a layer in neural networks.International Conference on Machine Learning, 2017

  36. [36]

    Deep equilibrium models

    Shaojie Bai, J Zico Kolter, and Vladlen Koltun. Deep equilibrium models.Advances in neural information processing systems, 32, 2019

  37. [37]

    Efficient and modular implicit differentiation

    Mathieu Blondel, Quentin Berthet, Marco Cuturi, Roy Frostig, Stephan Hoyer, Felipe Llinares-López, Fabian Pedregosa, and Jean-Philippe Vert. Efficient and modular implicit differentiation. Advances in neural information processing systems, 35:5230–5242, 2022

  38. [38]

    Machine learning–accelerated computational fluid dynamics

    Dmitrii Kochkov, Jamie A Smith, Ayya Alieva, Qing Wang, Michael P Brenner, and Stephan Hoyer. Machine learning–accelerated computational fluid dynamics.Proceedings of the National Academy of Sciences, 118(21):e2101784118, 2021

  39. [39]

    Learning deep implicit fourier neural operators (ifnos) with applications to heterogeneous material modeling

    Huaiqian You, Quinn Zhang, Colton J Ross, Chung-Hao Lee, and Yue Yu. Learning deep implicit fourier neural operators (ifnos) with applications to heterogeneous material modeling. Computer Methods in Applied Mechanics and Engineering, 398:115296, 2022

  40. [40]

    Anran Jiao, Haiyang He, Rishikesh Ranade, Jay Pathak, and Lu Lu. One-shot learning for solution operators of partial differential equations. Nature Communications, 16(1):8386, 2025.

  41. [41]

    Jiun-Shyan Chen, Michael Hillman, and Sheng-Wei Chi. Meshfree methods: progress made after 20 years. Journal of Engineering Mechanics, 143(4):04017001, 2017.

  42. [42]

    Peter Lancaster and Kes Salkauskas. Surfaces generated by moving least squares methods. Mathematics of Computation, 37(155):141–158, 1981.

  43. [43]

    Nathaniel Trask, Mauro Perego, and Pavel Bochev. A high-order staggered meshless method for elliptic problems. SIAM Journal on Scientific Computing, 39(2):A479–A502, 2017.

  44. [44]

    Birte Schrader, Sylvain Reboux, and Ivo F. Sbalzarini. Discretization correction of general integral PSE operators for particle methods. Journal of Computational Physics, 229(11):4159–4182, 2010.

  45. [45]

    Holger Wendland. Scattered Data Approximation. Cambridge Monographs on Applied and Computational Mathematics. Cambridge University Press, Cambridge, 2004.

  46. [46]

    Davoud Mirzaei, Robert Schaback, and Mehdi Dehghan. On generalized moving least squares and diffuse derivatives. IMA Journal of Numerical Analysis, 32(3):983–1000, 2012.

  47. [47]

    Javier Bonet and T.-S. L. Lok. Variational and momentum preservation aspects of smooth particle hydrodynamic formulations. Computer Methods in Applied Mechanics and Engineering, 180(1-2):97–115, 1999.

  48. [48]

    Edmond Kay-yu Chiu, Qiqi Wang, Rui Hu, and Antony Jameson. A conservative meshfree scheme and generalized framework for conservation laws. SIAM Journal on Scientific Computing, 34(6):A2896–A2916, 2012.

  49. [49]

    Nathaniel Trask, Pavel Bochev, and Mauro Perego. A conservative, consistent, and scalable meshfree mimetic method. Journal of Computational Physics, 409:109187, 2020.

  50. [50]

    Nathaniel Trask, Ravi G. Patel, Ben J. Gross, and Paul J. Atzberger. GMLS-Nets: A framework for learning from unstructured data. arXiv preprint arXiv:1909.05371, 2019.

  51. [51]

    Connor Schenck and Dieter Fox. SPNets: Differentiable fluid dynamics for deep neural networks. In Proceedings of the 2nd Conference on Robot Learning, volume 87 of Proceedings of Machine Learning Research, pages 317–335. PMLR, 2018.

  52. [52]

    Artur P. Toshev, Gianluca Galletti, Fabian Fritz, Stefan Adami, and Johannes Brandstetter. Neural SPH: Improved neural modeling of Lagrangian fluid dynamics. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research. PMLR, 2024.

  53. [53]

    Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18–42, 2017.

  54. [54]

    Pavel B. Bochev and James M. Hyman. Principles of mimetic discretizations of differential operators. In Compatible Spatial Discretizations, pages 89–119. Springer, 2006.

  55. [55]

    Samuel Greydanus, Misko Dzamba, and Jason Yosinski. Hamiltonian neural networks. Advances in Neural Information Processing Systems, 32, 2019.

  56. [56]

    Víctor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E(n) equivariant graph neural networks. In International Conference on Machine Learning, pages 9323–9332. PMLR, 2021.

  57. [57]

    Simon Batzner, Albert Musaelian, Lixin Sun, Mario Geiger, Jonathan P. Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E. Smidt, and Boris Kozinsky. E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials. Nature Communications, 13(1):2453, 2022.

  58. [58]

    Weitao Du, He Zhang, Yuanqi Du, Qi Meng, Wei Chen, Nanning Zheng, Bin Shao, and Tie-Yan Liu. SE(3) equivariant graph neural networks with complete local frames. In International Conference on Machine Learning, pages 5583–5608. PMLR, 2022.

  59. [59]

    Chaoyu Liu, Yangming Li, Zhongying Deng, Chris Budd, and Carola-Bibiane Schönlieb. Conservation-preserved Fourier neural operator through adaptive correction. arXiv preprint arXiv:2505.24579, 2025.

  60. [60]

    Shuai Jiang, Jonas Actor, Scott Roberts, and Nathaniel Trask. A structure-preserving domain decomposition method for data-driven modeling. arXiv preprint arXiv:2406.05571, 2024.

  61. [61]

    Peter D. Lax and Robert D. Richtmyer. Survey of the stability of linear finite difference equations. Communications on Pure and Applied Mathematics, 9(2):267–293, 1956. doi: 10.1002/cpa.3160090206.

  62. [62]

    Eamon Whalen, Azariah Beyene, and Caitlin Mueller. SimJEB: simulated jet engine bracket dataset. In Computer Graphics Forum, volume 40, pages 9–17. Wiley Online Library, 2021.

  63. [63]

    Seongjun Hong, Yongmin Kwon, Dongju Shin, Jangseop Park, and Namwoo Kang. DeepJEB: 3D deep learning-based synthetic jet engine bracket dataset. Journal of Mechanical Design, 147(4):041703, 2025.

  64. [64]

    James Rowbottom, Stefania Fresca, Pietro Lio, Carola-Bibiane Schönlieb, and Nicolas Boullé. Multi-level Monte Carlo training of neural operators. Computer Methods in Applied Mechanics and Engineering, 453:118800, 2026.

  65. [65]

    H.-O. Kreiss, T. A. Manteuffel, B. Swartz, B. Wendroff, and A. B. White. Supra-convergent schemes on irregular grids. Mathematics of Computation, 47(176):537–554, 1986.

  66. [66]

    B. Stellato, G. Banjac, P. Goulart, A. Bemporad, and S. Boyd. OSQP: an operator splitting solver for quadratic programs. Mathematical Programming Computation, 12(4):637–672, 2020. doi: 10.1007/s12532-020-00179-2. URL https://doi.org/10.1007/s12532-020-00179-2.

  67. [67]

    Nikhil Vyas, Depen Morwani, Rosie Zhao, Mujin Kwun, Itai Shapira, David Brandfonbrener, Lucas Janson, and Sham Kakade. SOAP: Improving and stabilizing Shampoo using Adam. arXiv preprint arXiv:2409.11321, 2024.

  68. [68]

    Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. Advances in Neural Information Processing Systems, 30, 2017.

  69. [69]

    Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are RNNs: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pages 5156–5165. PMLR, 2020.

  70. [70]

    Shuhao Cao. Choose a transformer: Fourier or Galerkin. Advances in Neural Information Processing Systems, 34:24924–24940, 2021.

A Discrete exterior calculus background

This appendix is a short, self-contained primer on the discrete exterior calculus (DEC) constructions we use, following the graph-cochain conventions of Trask et al. [11]. The body of the pap...
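To fix ideas, the graph-cochain picture can be sketched in a few lines of NumPy. This is a hypothetical minimal illustration, not the paper's implementation: the signed incidence matrix of a directed graph plays the role of the discrete exterior derivative on 0-cochains (node values), its transpose acts as a discrete divergence on 1-cochains (edge fluxes), and conservation is exact because every edge flux enters the balance of its two endpoint nodes with opposite signs.

```python
import numpy as np

# Directed edges (tail, head) of a small graph on 4 nodes.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n_nodes, n_edges = 4, len(edges)

# Signed incidence matrix: rows index edges, columns index nodes.
# Acting on a node field u, (d0 @ u)[e] = u[head] - u[tail],
# i.e. d0 is the discrete exterior derivative (graph gradient).
d0 = np.zeros((n_edges, n_nodes))
for e, (i, j) in enumerate(edges):
    d0[e, i] = -1.0  # edge e leaves node i
    d0[e, j] = +1.0  # edge e enters node j

# Discrete gradient of a node field.
u = np.array([1.0, 3.0, -2.0, 0.5])
grad_u = d0 @ u

# Discrete divergence of an arbitrary edge flux: d0.T sends
# 1-cochains back to nodes, summing signed fluxes at each node.
q = np.random.default_rng(0).normal(size=n_edges)
div_q = d0.T @ q

# Exact conservation: each edge contributes +q_e to one node and
# -q_e to the other, so the total divergence telescopes to zero
# for ANY flux q, with no mesh and no quadrature.
print(abs(div_q.sum()))
```

The same cancellation argument is what makes conservation a structural property of the complex rather than an accuracy statement: it holds identically for any learned edge flux, which is why a neural flux law defined on the edges inherits the conservation law for free.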

The error bound holds only when test features lie in the training feature distribution. Practitioners deploying the method in safety-relevant workflows should verify feature coverage and validate against held-out high-fidelity simulations before relying on predictions. We do not see acute misuse risks of the kind associated with generative or surveillance technologies; t...