pith. machine review for the scientific record.

arxiv: 2605.11001 · v1 · submitted 2026-05-09 · 💻 cs.LG

Recognition: 2 theorem links · Lean Theorem

Finite Volume-Informed Neural Network Framework for 2D Shallow Water Equations: Rugged Loss Landscapes and the Importance of Data Guidance

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 05:52 UTC · model grok-4.3

classification 💻 cs.LG
keywords physics-informed neural networks · shallow water equations · finite volume method · loss landscapes · data guidance · surrogate modeling · unstructured meshes · Roe Riemann solver

The pith

Physics-only FVM-PINN training for 2D shallow water equations collapses to a trivial low-momentum state that satisfies the loss but not the flow.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper replaces the usual strong-form residual in PINNs with a differentiable finite-volume loss based on a well-balanced Roe Riemann solver evaluated on unstructured meshes, creating the Data-Guided FVM-PINN framework for the shallow water equations. It demonstrates that training with this physics loss alone on realistic 2D problems leads the network to converge to a near-zero velocity field that nearly minimizes the loss but bears no resemblance to the true solution. Adding even sparse velocity measurements dramatically enlarges the loss separation between the trivial state and the physical solution, steering the optimizer toward the correct flow. This matters for surrogate modeling because it shows why pure physics-informed methods can fail on conservation laws with discontinuities and how minimal data makes the approach practical for engineering flows such as river simulations.

Core claim

The central claim is that the FVM-PINN loss landscape contains a shallow basin at the zero-momentum state, only about 7 times higher than at the true solution, so standard optimizers collapse there and produce non-physical results. Incorporating sparse data increases the loss separation to roughly 310 times, breaking the degeneracy. On a 2D block-in-channel benchmark, 200 random velocity measurements reduce velocity-field L2 error by 22 times relative to physics-only training, while 50 measurements still achieve a 7 times reduction. The finite-volume loss itself contributes an additional 23 percent error reduction in the sparse-data regime and remains neutral with dense data. The same method scales to a real-world Savannah River reach (1306 cells, 3600 s simulation, five Manning zones), where it constructs an accurate surrogate from SRH-2D anchor data.

What carries the argument

Data-Guided FVM-PINN, which substitutes a differentiable, well-balanced Roe Riemann-solver finite-volume loss on unstructured meshes for the conventional strong-form residual and augments it with sparse data measurements to escape degenerate minima.
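The degenerate minimum is easy to reproduce in miniature. The sketch below assembles a 1D finite-volume loss with a Rusanov (local Lax-Friedrichs) flux as a simpler stand-in for the paper's well-balanced Roe solver on unstructured 2D meshes; the 1D reduction, the flux choice, and all names are illustrative assumptions, not the authors' implementation. A constant-depth, zero-momentum state zeroes the flux divergence, and hence the physics loss, exactly — the trivial state the review describes.

```python
import numpy as np

g = 9.81

def physical_flux(Q):
    # Q: (n, 2) array with columns (h, hu) for the 1D shallow water equations.
    h, hu = Q[:, 0], Q[:, 1]
    u = hu / h
    return np.stack([hu, hu * u + 0.5 * g * h**2], axis=1)

def rusanov_flux(QL, QR):
    # Local Lax-Friedrichs (Rusanov) numerical flux at each interior face,
    # a simpler stand-in for the paper's well-balanced Roe solver.
    FL, FR = physical_flux(QL), physical_flux(QR)
    uL, uR = QL[:, 1] / QL[:, 0], QR[:, 1] / QR[:, 0]
    cL, cR = np.sqrt(g * QL[:, 0]), np.sqrt(g * QR[:, 0])
    s = np.maximum(np.abs(uL) + cL, np.abs(uR) + cR)[:, None]
    return 0.5 * (FL + FR) - 0.5 * s * (QR - QL)

def fvm_loss(Q, dQdt, dx):
    # Cell residual R_i = dQ_i/dt + (F_{i+1/2} - F_{i-1/2}) / dx,
    # loss = mean of A_i * ||R_i||^2 with A_i = dx in 1D, cf. the paper's eq. (14).
    F = rusanov_flux(Q[:-1], Q[1:])          # interior face fluxes
    R = dQdt[1:-1] + (F[1:] - F[:-1]) / dx   # residuals on interior cells
    return np.mean(dx * np.sum(R**2, axis=1))

# A constant-depth, zero-momentum state has zero flux divergence and zero
# time derivative, so it exactly minimises the physics loss -- the
# degenerate state physics-only training collapses into.
n, dx = 50, 0.1
Q0 = np.stack([np.full(n, 2.0), np.zeros(n)], axis=1)
print(fvm_loss(Q0, np.zeros_like(Q0), dx))  # 0.0
```

In the paper this residual is assembled inside an autodiff framework so gradients flow back through the flux; the NumPy version here only illustrates the loss assembly.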

If this is right

  • On the 2D block-in-channel benchmark, 200 random velocity measurements cut velocity L2 error by 22 times compared with physics-only training.
  • The FVM-PINN loss alone reduces velocity L2 error by about 23 percent when data are sparse and has negligible effect when dense reference data are supplied.
  • Even 50 velocity measurements still produce a 7 times error reduction over physics-only training.
  • Time-window decomposition with progressive initial-condition handoff yields monotonically decreasing error on a 1306-cell real-world river reach simulation spanning 3600 seconds.
  • The loss value at the zero-momentum state is only 7 times larger than at the trained solution, but sparse data enlarges this ratio to 310 times.
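The 7-times and 310-times figures come from scanning the loss along a momentum-scaling line, $Q(\alpha)$, with the momentum components multiplied by $\alpha$. A generic sketch of that diagnostic follows, with a toy quadratic standing in for the actual FVM-PINN and data losses; the ratio it produces is arbitrary and only illustrates the shape of the check.

```python
import numpy as np

def momentum_scan(loss, h, hu, alphas):
    # Evaluate loss(h, alpha * hu) along the momentum-scaling line:
    # alpha = 0 is the zero-momentum state, alpha = 1 the candidate solution.
    return np.array([loss(h, a * hu) for a in alphas])

# Toy stand-in loss with its minimum near the "trained" momentum hu = 1;
# not the paper's loss, just a shape for the diagnostic.
toy_loss = lambda h, hu: np.mean((hu - 1.0) ** 2) + 1e-3 * np.mean(h)

h, hu = np.full(10, 2.0), np.full(10, 1.0)
scan = momentum_scan(toy_loss, h, hu, np.linspace(0.0, 1.0, 5))
ratio = scan[0] / scan[-1]  # basin-separation ratio, cf. the 7x vs 310x numbers
```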

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The shallow-basin degeneracy observed here is likely to appear in PINN formulations for other hyperbolic conservation laws where trivial or uniform states approximately satisfy the discrete residual.
  • Hybrid data-plus-FVM losses may prove essential for stable surrogate models of unsteady engineering flows when reference data are limited but not entirely absent.
  • The time-window handoff technique for maintaining accuracy over long horizons could be tested on other time-dependent PDE problems that suffer from accumulation of integration errors.

Load-bearing premise

The differentiable well-balanced Roe Riemann-solver finite-volume loss can be stably back-propagated through unstructured meshes and the network can represent the target flow without extra regularization.

What would settle it

Retraining the identical FVM-PINN architecture on the 2D block-in-channel benchmark with physics-only loss from multiple random initializations and observing whether any run converges to a velocity field whose L2 error is within 5 percent of the reference solution rather than collapsing to near-zero momentum.
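Per run, this settling experiment reduces to a simple acceptance test: compute the relative L2 error of the predicted velocity field against the reference and compare it with the 5 percent threshold. A minimal sketch with synthetic stand-in arrays (the real check would use the SRH-2D reference field):

```python
import numpy as np

def rel_l2(pred, ref):
    # Relative L2 error of a predicted field against a reference field.
    return np.linalg.norm(pred - ref) / np.linalg.norm(ref)

def converged_to_physics(pred_u, ref_u, tol=0.05):
    # Acceptance criterion: velocity L2 error within 5% of the reference.
    return rel_l2(pred_u, ref_u) <= tol

ref = np.linspace(0.0, 1.0, 100)   # stand-in reference velocity field
collapsed = np.zeros_like(ref)     # near-zero-momentum collapse
good = ref * 1.01                  # a run that tracked the reference

print(converged_to_physics(collapsed, ref))  # False
print(converged_to_physics(good, ref))       # True
```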

Figures

Figures reproduced from arXiv: 2605.11001 by Xiaofeng Liu.

Figure 1
Figure 1. Schematic of the 2D shallow water equations: (a) the physical domain and definitions, and (b) the unstructured finite-volume mesh with face-based fluxes. In conservation form, the SWEs read $\partial_t Q + \partial_x F(Q) + \partial_y G(Q) = S(Q)$ (2), with fluxes and source term (Liu & Song, 2025) $F = \big(uh,\ u^2h + \tfrac{1}{2}g(\xi^2 + 2\xi h_s),\ uvh\big)^\top$, $G = \big(vh,\ uvh,\ v^2h + \tfrac{1}{2}g(\xi^2 + 2\xi h_s)\big)^\top$, and $S = \big(0,\ -\tau_{bx}/\rho - g\xi S_{0x},\ -\tau_{by}/\rho - g\xi S_{0y}\big)^\top$.
Figure 2
Figure 2. Schematic of the FVM-PINN architecture and loss components. Here $\partial Q_i/\partial t$ is computed via automatic differentiation through the network, and $\hat{F}_{\text{Roe}}$ is the Roe flux in equation (8). With $n_c$ cells, the total FVM-PINN (PDE) loss is the summation of the FVM loss over all cells and time levels: $L_{\text{fvm-pinn}} = \frac{1}{n_t n_c}\sum_{k=1}^{n_t}\sum_{i=1}^{n_c} A_i\,\lVert R_i(t_k)\rVert^2$ (14). The boundary conditions are enforced as follows…
Figure 3
Figure 3. Predicted and exact water depth and velocity profiles at $t = 1$ s. The network correctly captures the left rarefaction fan, the contact wave, and the right-moving shock.
Figure 4
Figure 4. Case 2: WSE profile over a parabolic bed bump at the final time step (steady state). (a) WSE, (b) velocity. The adjoining text introduces Case 3, the paper's central experiment: a 2D block-in-channel (BIC) benchmark, a 15 m × 5 m rectangular channel with a block obstruction meshed with 1326 unstructured cells, used to systematically study the role of data in FVM-PINN training.
Figure 5
Figure 5. Case 3: Block-in-channel benchmark. (a) Case setup, (b) unstructured mesh and 50 sparse data points, (c) water depth from SRH-2D, (d) velocity from SRH-2D, (e) water depth from the FVM teacher, (f) velocity from the FVM teacher.
Figure 6
Figure 6. Case 3: Water depth and velocity contours at steady state for the eight ablation runs. The reference solution is from SRH-2D. The four columns correspond to water depth, difference in water depth from SRH-2D, velocity magnitude, and difference in velocity magnitude from SRH-2D. The rows correspond to the eight runs in Table 3.
Figure 7
Figure 7. Case 3: Ablation summary and comparison: (a) $L_2(|u|)$ for the eight runs in Table 3 and (b) WSE profiles along the channel centreline for the same runs. The reference solution is from SRH-2D.
Figure 8
Figure 8. Case 4: Savannah River. (a) Manning's roughness zones, (b) bathymetry, (c) water depth from SRH-2D at steady state, (d) velocity magnitude from SRH-2D at steady state, (e) water depth from the FVM teacher, and (f) velocity magnitude from the FVM teacher.
Figure 9
Figure 9. Case 4: Water depth and velocity contours at steady state for the seven ablation runs. The reference solution is from SRH-2D. The four columns correspond to water depth, difference in water depth from SRH-2D, velocity magnitude, and difference in velocity magnitude from SRH-2D. The rows correspond to the seven runs.
Figure 10
Figure 10. Case 4: $L_2$ error at each reference time ($t \in \{720, 1440, 2160, 2880, 3600\}$ s) for the physics-only baseline (SR-A), the standard trainer (SR-B), Window(5) (SR-C), Window(10) (SR-D), and the FVM teacher (SR-E). The windowed trainers' improvement over the single-network baseline grows monotonically with $t$ thanks to progressive IC handoff, while the single-network trainer's error is nearly constant in $t$.
Figure 12
Figure 12. Starting from the trained BIC-B network (which has correctly learned the wake), …
Figure 11
Figure 11. Case 4: GPU memory footprint of the seven Savannah River runs vs. wall-clock training time. (a) Running maximum of peak GPU memory since training started. Per-step memory is dominated by the autograd graph of the FVM-PINN loss evaluated over the full mesh at every Adam step. As a result, the strategies that evaluate the FVM residual every step (SR-A, SR-B, SR-F, SR-G) all reach ~900 MiB peak. Time windowing…
Figure 12
Figure 12. Loss-landscape diagnostic on the BIC-B trained network (block-in-channel): (a) $L_{\text{fvm-pinn}}(\alpha)$ and $L_{\text{data}}(\alpha)$ along the momentum-scaling line (18). (b) Weighted total loss $\lambda_{\text{fvm-pinn}} L_{\text{fvm-pinn}} + \lambda_{\text{data}} L_{\text{data}} + \cdots$. The FVM-PINN loss at $\alpha = 0$ (zero momentum) is only about $7.0\times$ larger than at the trained solution ($\alpha = 1$), a shallow basin that an ordinary first-order optimizer can fall into from a wide range of initializations.
read the original abstract

Physics-informed neural networks (PINNs) are a simple surrogate-modelling paradigm for partial differential equations, but their standard strong-form residual formulation is ill suited to the shallow water equations (SWE). It cannot enforce local conservation, handle discontinuities, or leverage the boundary-conforming unstructured meshes used in real-world applications. We introduce ``Data-Guided FVM-PINN'', a framework that replaces the strong-form residual with a differentiable, well-balanced Roe Riemann-solver finite-volume (FVM) loss evaluated on unstructured meshes. The major finding is that physics-only FVM-PINN training often fails on realistic 2D problems: the network collapses to a trivial low-momentum state that nearly satisfies the FVM-PINN residual but bears no resemblance to the true flow. A loss-landscape diagnostic shows that the FVM-PINN loss at zero momentum is only about $7\times$ larger than at the trained solution, a shallow basin that an ordinary optimizer falls into; adding even sparse data turns this into a $310\times$ separation, breaking the degeneracy. On a 2D block-in-channel benchmark, just $200$ random velocity measurements drop the velocity-field $L_2$ error by $22\times$ versus physics-only; $50$ measurements still deliver a $7\times$ reduction. A controlled ablation isolates the contribution of the FVM-PINN loss: it reduces velocity-field $L_2$ by $\sim$$23\%$ in the sparse-data regime and is essentially neutral when dense reference data is available. On a real-world Savannah River reach ($1306$ cells, $3600$~s simulation, five Manning zones), the framework constructs an accurate surrogate from SRH-2D anchor data, with time-window decomposition reducing error monotonically via progressive initial-condition handoff.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces a Data-Guided FVM-PINN framework for 2D Shallow Water Equations that replaces the standard strong-form residual with a differentiable, well-balanced Roe Riemann-solver finite-volume loss evaluated on unstructured meshes. It reports that physics-only FVM-PINN training collapses to a trivial low-momentum state satisfying the residual but not the true flow, with the loss landscape showing only a 7x separation at zero momentum versus the trained solution; adding sparse data (e.g., 200 random velocity measurements) creates a 310x separation and reduces velocity L2 error by 22x (or 7x with 50 measurements) on a 2D block-in-channel benchmark. A controlled ablation attributes ~23% additional error reduction to the FVM term in the sparse-data regime, and the method is applied to a real-world Savannah River reach using SRH-2D anchor data with time-window decomposition.

Significance. If the quantitative results and loss-landscape diagnostics hold under verification, the work provides concrete evidence that pure physics-informed training can be inadequate for hyperbolic systems like the SWE due to shallow basins, while hybrid data guidance enables practical surrogate modeling on unstructured meshes. The emphasis on well-balanced schemes and real-world application to hydraulic reaches is a strength for engineering relevance. However, the absence of released code, data, or explicit gradient/conservation verification limits immediate adoption and extension by the community.

major comments (2)
  1. The central claim that physics-only FVM-PINN collapses due to a shallow 7x loss basin (while sparse data yields 310x separation and 22x error reduction) rests on the Roe Riemann-solver FVM loss being a faithfully differentiable discretization that can be stably back-propagated through unstructured meshes. No explicit numerical checks for gradient accuracy, handling of sonic points/limiters, or discrete conservation errors are described, raising the possibility that the observed degeneracy is an artifact of an ill-posed loss rather than an intrinsic feature of physics-only training.
  2. The ablation isolating the FVM-PINN loss contribution (~23% velocity L2 reduction in the sparse-data regime) does not specify whether the network architecture, optimizer, or regularization were held identical between the physics-only and hybrid cases; any unstated differences could confound attribution of the gain to the finite-volume term.
minor comments (2)
  1. The abstract and results mention time-window decomposition with progressive initial-condition handoff, but the precise mechanism for transferring states across windows, and its effect on long-time stability, are not detailed enough for reproduction.
  2. Notation distinguishing the FVM residual term from the data-guidance term in the composite loss should be introduced explicitly to avoid ambiguity when comparing physics-only versus hybrid training.
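On the first minor comment, one plausible reading of the handoff can be sketched under stated assumptions: one surrogate is trained per time window, and each window's initial condition is seeded by the previous window's terminal prediction. The `train_window` callback below is hypothetical, not the authors' API; the paper's actual mechanism may differ.

```python
def train_windowed(Q0, t_end, n_windows, train_window):
    # train_window(Q_ic, t0, t1) -> (model, Q_terminal) is an assumed
    # callback that trains one surrogate on [t0, t1] from the given IC.
    dt = t_end / n_windows
    models, Q_ic = [], Q0
    for k in range(n_windows):
        # progressive IC handoff: next window starts from this window's end state
        model, Q_ic = train_window(Q_ic, k * dt, (k + 1) * dt)
        models.append(model)
    return models, Q_ic

# Toy stand-in: each "window" halves the state, mimicking a decaying flow.
models, Q_final = train_windowed(8.0, 3600.0, 3, lambda q, t0, t1: (None, q * 0.5))
print(Q_final)  # 1.0
```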

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback and for recognizing the potential significance of the data-guided FVM-PINN approach for hyperbolic systems. We address each major comment below and will revise the manuscript to incorporate the requested clarifications and verifications.

read point-by-point responses
  1. Referee: The central claim that physics-only FVM-PINN collapses due to a shallow 7x loss basin (while sparse data yields 310x separation and 22x error reduction) rests on the Roe Riemann-solver FVM loss being a faithfully differentiable discretization that can be stably back-propagated through unstructured meshes. No explicit numerical checks for gradient accuracy, handling of sonic points/limiters, or discrete conservation errors are described, raising the possibility that the observed degeneracy is an artifact of an ill-posed loss rather than an intrinsic feature of physics-only training.

    Authors: We agree that explicit verification of the FVM loss differentiability and numerical properties would strengthen the central claim. In the revised manuscript we will add a dedicated subsection with: (i) finite-difference gradient checks on a 1D Riemann problem to quantify back-propagation accuracy, (ii) monitoring of discrete mass and momentum conservation errors during training (shown to remain at machine precision), and (iii) explicit confirmation that the Roe solver includes the standard entropy fix for sonic points together with the minmod limiter. These additions will demonstrate that the loss is well-posed and that the observed collapse arises from the shallow basin rather than discretization artifacts. revision: yes

  2. Referee: The ablation isolating the FVM-PINN loss contribution (~23% velocity L2 reduction in the sparse-data regime) does not specify whether the network architecture, optimizer, or regularization were held identical between the physics-only and hybrid cases; any unstated differences could confound attribution of the gain to the finite-volume term.

    Authors: The ablation experiments used identical network architectures (same depth and width), optimizer (Adam with the same learning-rate schedule), and regularization (identical weight decay) in all cases; the only controlled difference was the presence or absence of the FVM loss term. To remove any ambiguity we will revise the ablation description in Section 4.2 to state this explicitly and add a supplementary table listing the shared hyperparameters. revision: yes
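The gradient check proposed in response (i) amounts to comparing back-propagated gradients with central finite differences. A generic sketch of the finite-difference side, verified here on a function with a known analytic gradient (not the authors' code):

```python
import numpy as np

def central_diff_grad(f, x, eps=1e-6):
    # Central-difference gradient of a scalar function f at point x,
    # the standard reference against which autodiff gradients are checked.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = eps
        g.flat[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

# Verify against the analytic gradient of f(x) = sum(x^2), which is 2x.
x = np.array([0.3, -1.2, 2.0])
err = np.max(np.abs(central_diff_grad(lambda v: np.sum(v * v), x) - 2 * x))
print(err < 1e-7)  # True
```

In practice the same comparison would be run against the autodiff gradient of the FVM-PINN loss on a small 1D Riemann problem, as the rebuttal proposes.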

Circularity Check

0 steps flagged

No significant circularity; empirical results rest on training outcomes and ablations

full rationale

The paper presents an empirical framework replacing strong-form PINN residuals with a differentiable well-balanced Roe FVM loss on unstructured meshes, then reports training collapses under physics-only conditions and quantitative error reductions from sparse data (22x velocity L2 drop with 200 measurements, 23% FVM contribution in sparse regime). No derivation chain exists that reduces by construction to fitted inputs, self-citations, or ansatzes; claims are supported by loss-landscape diagnostics, controlled ablations on the block-in-channel benchmark, and real-world Savannah River results. The differentiability assumption on the Roe solver is stated as a prerequisite rather than derived from the target result, leaving the central findings externally falsifiable via the reported experiments.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The framework rests on standard domain assumptions from finite-volume methods for hyperbolic conservation laws; no free parameters or invented entities are introduced beyond typical neural-network hyperparameters.

axioms (1)
  • domain assumption A well-balanced Roe Riemann solver provides a differentiable and accurate discretization of the shallow water equations on unstructured meshes
    Invoked as the basis for the replacement loss function.

pith-pipeline@v0.9.0 · 5637 in / 1317 out tokens · 71314 ms · 2026-05-13T05:52:33.592649+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.


Reference graph

Works this paper leans on

47 extracted references · 47 canonical work pages · 1 internal anchor

  1. [1] Raissi, M., Perdikaris, P., and Karniadakis, G. E. Physics-informed neural networks. Journal of Computational Physics, 2019.
  2. [2] Harten, A., Lax, P. D., and van Leer, B. On upstream differencing and Godunov-type schemes for hyperbolic conservation laws. SIAM Review, 1983.
  3. [3] Lai, Y. G. Two-dimensional depth-averaged flow modeling with an unstructured hybrid mesh. Journal of Hydraulic Engineering, 2010.
  4. [4] Lai, Y. G. Technical report.
  5. [5] Brunner, G. W.
  6. [6] LISFLOOD: a GIS-based distributed model for river basin scale water balance and flood simulation. International Journal of Geographical Information Science, 2010.
  7. [7] Shen, C., Appling, A. P., Gentine, P., Bandai, T., Gupta, H., Tartakovsky, A., Baity-Jesi, M., Fenicia, F., Kifer, D., Li, L., Liu, X., Ren, W., Zheng, Y., Harman, C. J., Clark, M., Farthing, M., Feng, D., Kumar, P., Aboelyazeed, D., et al. Differentiable modelling to unify machine learning and physical models for geosciences.
  8. [8] Lee, J., Ghorbanidehno, H., Farthing, M. W., Hesser, T. J., Darve, E. F., and Kitanidis, P. K. Riverine bathymetry imaging with indirect observations. Water Resources Research.
  9. [9] Liu, X. 2022.
  10. [10] Scientific machine learning of flow resistance using universal shallow water equations with differentiable programming. Water Resources Research, 2025.
  11. [11] Liu, X., Song, Y., and Shen, C. Water Resources Research.
  12. [12] Physics-informed neural networks for the augmented system of shallow water equations with topography. Water Resources Research, 2024.
  13. [13] Physics-informed neural networks for solving the two-dimensional shallow water equations with terrain topography and rainfall source terms. 2025.
  14. [14] Jagtap, A. D., and Karniadakis, G. E. Extended physics-informed neural networks (XPINNs).
  15. [15] Mattey, R., and Ghosh, S. A novel sequential method to train physics informed neural networks for… 2022.
  16. [16] Physics-informed neural networks for high-speed flows. Computer Methods in Applied Mechanics and Engineering, 2020.
  17. [17] Lu, L., Meng, X., Mao, Z., and Karniadakis, G. E. 2021.
  18. [18] Godunov loss functions for modelling of hyperbolic conservation laws. Computer Methods in Applied Mechanics and Engineering, 2025.
  19. [19] Patsatzis, D. G., di Bernardo, M., Russo, L., and Siettos, C. I. 2025.
  20. [20] An approximate Riemann solver approach in physics-informed neural networks for hyperbolic conservation laws. 2025. doi:10.1063/5.0285282.
  21. [21] Weak and entropy physics-informed neural networks for conservation laws. arXiv preprint arXiv:2603.24819.
  22. [22] Physics-informed neural network based on the finite volume method for solving forward and inverse problems. Physics of Fluids.
  23. [23] Wei, C., Fan, Y., Wong, J. C., Ooi, C. C., Wang, H., and Chiu, P.-H.
  24. [24] Su, Z., Liu, Y., Pan, S., Li, Z., and Shen, C. Finite Volume Physical Informed Neural Network (… 2024.
  25. [25] Unified finite-volume physics informed neural networks to solve the heterogeneous partial differential equations. Knowledge-Based Systems, 2024.
  26. [26] Zhu, T., Si, B., Fu, L., and Lu, Y. 2026.
  27. [27] Li, T., Zou, Y., Zou, S., Chang, X., Zhang, L., and Deng, X. Learning to solve… 2025.
  28. [28] Bezgin, D. A., Buhendwa, A. B., and Adams, N. A. 2023.
  29. [29] Scalable algorithms for physics-informed neural and graph networks. Data-Centric Engineering, 2022.
  30. [30] Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems.
  31. [31] Physics-informed neural networks for the shallow-water equations on the sphere. Journal of Computational Physics, 2022.
  32. [32] Physics-informed neural networks for tsunami inundation modeling. Journal of Computational Physics, 2025.
  33. [33] Qi, X., de Almeida, G. A. M., and Maldonado, S. Physics-informed neural networks for solving flow problems modeled by the… 2024.
  34. [34] Proceedings of the 2nd Workshop on Dam-Break Wave Simulation: A Set of Standard Test Cases for Numerical Models. 1997.
  35. [35] Water Waves: The Mathematical Theory with Applications.
  36. [36] Shock-Capturing Methods for Free-Surface Shallow Flows.
  37. [37] A difference method for numerical calculation of discontinuous solutions of the equations of hydrodynamics. Matematicheskii Sbornik.
  38. [38] Rogers, B. D., Borthwick, A. G. L., and Taylor, P. H. Mathematical balancing of flux gradient and source terms prior to using… 2003.
  39. [39] Well-balanced two-dimensional coupled modeling of shallow water flows on unstructured grids. Journal of Hydraulic Engineering.
  40. [40] Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174.
  41. [41] Fourier features let networks learn high frequency functions in low dimensional domains. Advances in Neural Information Processing Systems.
  42. [42] Zhu, C., Byrd, R. H., Lu, P., and Nocedal, J. Algorithm 778: L-BFGS-B. 1997.
  43. [43] Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems 34 (NeurIPS).
  44. [44] Thermodynamically consistent physics-informed neural networks for hyperbolic systems. Journal of Computational Physics, 2022.
  45. [45] Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. Computer Methods in Applied Mechanics and Engineering, 2020.
  46. [46] De Ryck, T., Mishra, S., and Molinaro, R.
  47. [47] Revisiting conservativeness in fluid dynamics: Failure of non-conservative… arXiv preprint arXiv:2604.01968.