pith. machine review for the scientific record.

arxiv: 2604.15617 · v1 · submitted 2026-04-17 · ⚛️ physics.comp-ph · cs.LG · cs.NA · math.NA


A Structure-Preserving Graph Neural Solver for Parametric Hyperbolic Conservation Laws


Pith reviewed 2026-05-10 08:03 UTC · model grok-4.3

classification ⚛️ physics.comp-ph · cs.LG · cs.NA · math.NA

keywords graph neural networks · hyperbolic conservation laws · structure-preserving solvers · parametric flows · supersonic benchmarks · numerical methods · machine learning surrogates · shock capturing

The pith

A graph neural network solver for hyperbolic conservation laws preserves local conservation and upwinding by learning reconstruction-and-flux operators from classical numerical principles.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops a graph neural solver that treats the network as a learned reconstruction-and-flux operator rather than a direct state updater. This structure draws on high-order space-time prediction ideas to keep updates conservative and upwind-biased even when time steps are large. The approach targets parametric studies of flows with shocks and discontinuities, where repeated simulations are needed but full-resolution classical methods remain costly. If the preservation properties hold, the solver can deliver stable long-horizon predictions across changes in geometry, initial conditions, and flow regimes while running much faster than traditional codes.

Core claim

The central claim is that recasting message-passing graph neural networks as high-order space-time predictors inside a reconstruction-and-flux framework produces an interpretable solver that inherently respects local conservation and upwinding. When tested on supersonic flow benchmarks that span wide parametric variations, the resulting updates remain stable and accurate over long rollouts, outperform both surrogate baselines and low-order discretizations, and run orders of magnitude faster than high-resolution classical simulations.

What carries the argument

The learned reconstruction-and-flux operator, implemented by recasting graph message passing as a high-order space-time predictor, computes conservative cell updates while respecting upwind directions on an unstructured graph of the flow field.
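The mechanism can be made concrete with a classical stand-in. Below is a minimal numpy sketch, not the paper's architecture, of a conservative, upwind-biased update on a 1D periodic mesh; the Rusanov (local Lax-Friedrichs) flux for linear advection plays the role of the learned flux operator:

```python
import numpy as np

def rusanov_flux(uL, uR, a=1.0):
    """Upwind-biased numerical flux for linear advection f(u) = a*u.

    The dissipation term |a|*(uR - uL) biases the flux toward the
    upwind side, the role the learned flux operator is meant to play.
    """
    return 0.5 * (a * uL + a * uR) - 0.5 * abs(a) * (uR - uL)

def step(u, dt, dx, a=1.0):
    """One conservative update: each interface flux is shared, with
    opposite sign, by its two neighboring cells, so the total of u
    changes only by floating-point round-off."""
    uL = u
    uR = np.roll(u, -1)            # right neighbor on a periodic graph
    F = rusanov_flux(uL, uR, a)    # flux through interface i+1/2
    return u - dt / dx * (F - np.roll(F, 1))

u = np.where(np.arange(100) < 50, 1.0, 0.0)  # step initial condition
total0 = u.sum()
for _ in range(200):
    u = step(u, dt=0.004, dx=0.01)
print(abs(u.sum() - total0))  # conservation error: round-off only
```

Because every interface flux enters its two neighboring cells with opposite signs, the total is conserved regardless of what the flux function computes; that is the structural property the learned operator is designed to inherit.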

If this is right

  • The solver maintains superior long-horizon rollout stability and accuracy relative to strong neural surrogate baselines.
  • It outperforms low-order classical discretizations on the same flow problems.
  • It delivers orders-of-magnitude runtime reductions compared with high-resolution traditional simulations.
  • It remains reliable when geometry, initial and boundary conditions, and flow regimes vary over wide ranges.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same operator design could support repeated-query tasks such as design optimization or uncertainty propagation where classical codes are too slow.
  • Because the updates stay conservative by construction, the method may reduce reliance on post-hoc projection steps that many learned PDE solvers require.
  • The graph-based formulation opens a route to adaptive or moving meshes without retraining the core operator.

Load-bearing premise

That designing the graph neural network as a reconstruction-and-flux operator will automatically enforce local conservation, upwinding, and stability across broad parametric changes without extra constraints or corrections.

What would settle it

A long-horizon rollout on a supersonic benchmark case that produces measurable violation of discrete conservation or develops non-physical oscillations after many steps would disprove the claim of inherent structure preservation.
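Such a test can be phrased as a drift metric. A hypothetical helper follows; `conservation_drift` and the array layout are illustrative assumptions, not from the paper, assuming rollout states are stored as cell averages with known cell volumes:

```python
import numpy as np

def conservation_drift(states, volumes):
    """Relative drift of the total conserved quantity over a rollout.

    states:  (T, N) array of cell averages at each of T time steps
    volumes: (N,) cell volumes (uniform or from an unstructured mesh)
    Returns the max relative deviation from the initial total.
    """
    totals = states @ volumes          # (T,) total conserved quantity
    return np.max(np.abs(totals - totals[0])) / np.abs(totals[0])

# Example: a rollout that merely redistributes mass has (near-)zero drift.
rng = np.random.default_rng(0)
vol = np.full(10, 0.1)
s0 = rng.random(10)
states = np.stack([np.roll(s0, k) for k in range(5)])  # pure transport
print(conservation_drift(states, vol))  # ~0 (round-off only)
```

A measurable, growing value of this metric on any benchmark rollout would falsify the inherent-conservation claim.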

Figures

Figures reproduced from arXiv: 2604.15617 by Jiamin Jiang, Jingrun Chen, Shanglin Lv.

Figure 1. Schematic illustration of the mesh ingredients and numerical flux in the Godunov framework.
Figure 2. Overview of the proposed structure-preserving graph neural solver for interpretable modeling of …
Figure 3. Schematic of graph representations for a simulation mesh. Dots denote nodes, and dashed …
Figure 4. Schematics of neural reconstruction process via message-passing GNN as a counterpart to …
Figure 5. Schematic of the forward pass through the proposed EPD architecture (CPGNet).
Figure 6. Example DGSEM simulation mesh (top) and the associated point cloud (bottom) for the …
Figure 7. Mean RMSE over all test cases vs. time steps under the two training strategies.
Figure 8. Solution fields of two representative cases in the Supersonic Bump dataset (generated with the …
Figure 9. Density and pressure fields of representative cases in the Supersonic Bump dataset (generated …
Figure 10. Solution fields of two representative cases in the Supersonic Bump dataset (generated with …
Figure 11. Density and pressure fields of representative cases in the Supersonic Bump dataset (generated …
Figure 12. Solution fields of two representative cases in the Forward Step dataset.
Figure 13. Density and pressure fields of representative cases in the Forward Step dataset.
Figure 14. Comparison of solution fields across DGSEM polynomial degrees and graph neural solver for …
Figure 15. Solution fields of two representative cases in the Diffraction dataset.
Figure 16. Density and pressure fields of representative cases in the Diffraction dataset.
Figure 17. Solution fields of two representative cases in the Supersonic Cylinder dataset.
Figure 18. Density and pressure fields of representative cases in the Supersonic Cylinder dataset.
Original abstract

Hyperbolic conservation laws govern a wide range of transport-driven dynamics featuring shocks, contact discontinuities, and complex wave interactions, posing distinct challenges for deep-learning-based surrogate modeling. While classical numerical methods provide robust and physically admissible solutions, their computational cost restricts applicability in many-query tasks such as parametric studies and design optimization. Conversely, existing neural surrogates offer rapid inference but often fail to respect intrinsic PDE structures, leading to non-physical artifacts, rollout instability, and poor generalization. We present an interpretable, structure-preserving graph neural solver that bridges classical numerical principles with graph neural networks (GNNs). The network is designed as a learned reconstruction-and-flux operator rather than a black-box state updater, thereby inherently preserving key properties such as local conservation and upwinding. Inspired by Arbitrary high-order DERivatives schemes, we further recast message-passing GNNs as high-order space-time predictors, enabling conservative and stable neural updates with large time steps. Evaluation is performed on challenging supersonic flow benchmarks spanning broad parametric variations in geometry, initial/boundary conditions, and flow regimes. The neural solver achieves superior long-horizon rollout stability and accuracy compared with strong surrogate baselines, outperforms low-order discretizations, and delivers orders-of-magnitude runtime speedups over high-resolution simulations.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript introduces a graph neural network (GNN) designed as a learned reconstruction-and-flux operator for parametric hyperbolic conservation laws. Recast via ADER-inspired high-order space-time predictors, the architecture is claimed to inherently enforce local conservation and upwinding. On supersonic flow benchmarks spanning variations in geometry, initial/boundary conditions, and regimes, the solver is reported to deliver superior long-horizon rollout stability and accuracy versus strong surrogate baselines, to outperform low-order discretizations, and to achieve orders-of-magnitude speedups relative to high-resolution simulations.

Significance. If the structure-preservation claims are substantiated, the work could meaningfully advance reliable neural surrogates for conservation laws in many-query settings by combining classical numerical principles with GNN message passing. The emphasis on interpretable, physics-aligned operators addresses a recognized weakness of black-box neural PDE solvers.

major comments (2)
  1. [Abstract and §3] Abstract and §3 (method): the assertion that the GNN 'inherently preserv[es] key properties such as local conservation and upwinding' is load-bearing for the long-horizon stability claim, yet the manuscript supplies no explicit verification that interface fluxes are antisymmetric or that net flux into each control volume equals the state update to machine precision. A concrete demonstration (e.g., conservation-error plots or a short proof that the learned predictors enforce discrete conservation by construction) is required; without it the advantage over black-box baselines remains unproven.
  2. [§4] §4 (experiments): the reported superiority in stability and accuracy is presented without error bars, ablation studies isolating the reconstruction-and-flux versus ADER-predictor components, or direct quantification of conservation drift over rollouts. These omissions make it impossible to assess whether the architecture truly mitigates the parametric instability issues highlighted in the introduction.
minor comments (2)
  1. [Figures] Figure captions and axis labels in the rollout visualizations should explicitly state the time horizon and the norm used for error computation.
  2. [Introduction] The introduction would benefit from a concise table contrasting the proposed operator with prior GNN-PDE and structure-preserving neural methods.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback and for recognizing the potential significance of combining classical numerical principles with GNNs for hyperbolic conservation laws. We address each major comment point by point below, providing clarifications on the architecture and committing to revisions that strengthen the evidence for the claims.

Point-by-point responses
  1. Referee: [Abstract and §3] Abstract and §3 (method): the assertion that the GNN 'inherently preserv[es] key properties such as local conservation and upwinding' is load-bearing for the long-horizon stability claim, yet the manuscript supplies no explicit verification that interface fluxes are antisymmetric or that net flux into each control volume equals the state update to machine precision. A concrete demonstration (e.g., conservation-error plots or a short proof that the learned predictors enforce discrete conservation by construction) is required; without it the advantage over black-box baselines remains unproven.

    Authors: We thank the referee for this important observation. The architecture is explicitly designed as a learned reconstruction-and-flux operator: message passing between adjacent nodes computes interface fluxes that are antisymmetric by construction (the flux contribution from node i to j is the negation of that from j to i), and the state update for each control volume is exactly the discrete divergence of these fluxes, mirroring a finite-volume scheme. The ADER-inspired high-order space-time predictors further ensure that the local updates remain conservative. While this follows directly from the formulation in §3, we acknowledge that the original manuscript did not include explicit numerical verification or a concise proof sketch. In the revised version we will add both: a brief derivation showing discrete conservation by construction and conservation-error plots (L1 drift of total conserved quantities) over long-horizon rollouts on the supersonic benchmarks. These additions will make the advantage over black-box baselines explicit. revision: yes
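The antisymmetry argument in this response can be checked mechanically. A minimal sketch follows with a hypothetical stand-in flux map; `edge_flux` and the toy graph are illustrative assumptions, not the authors' CPGNet:

```python
import numpy as np

rng = np.random.default_rng(1)

def edge_flux(hi, hj):
    """Stand-in for a learned edge flux. Antisymmetry is imposed by
    construction: swapping the endpoints flips the sign exactly."""
    g = lambda a, b: np.tanh(3.0 * a + b)   # arbitrary nonlinear map
    return g(hi, hj) - g(hj, hi)            # F_ij = -F_ji by design

# Toy graph: 6 nodes, each undirected edge listed once
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]
h = rng.random(6)

# State update at each node = discrete divergence of the edge fluxes
du = np.zeros(6)
for i, j in edges:
    F = edge_flux(h[i], h[j])
    du[i] -= F     # flux leaving node i ...
    du[j] += F     # ... enters node j with opposite sign

print(du.sum())    # total change: zero to machine precision
```

Any parameterization wrapped in the antisymmetrized form `g(hi, hj) - g(hj, hi)` conserves the total by construction, which is the property the proposed revision would verify numerically.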

  2. Referee: [§4] §4 (experiments): the reported superiority in stability and accuracy is presented without error bars, ablation studies isolating the reconstruction-and-flux versus ADER-predictor components, or direct quantification of conservation drift over rollouts. These omissions make it impossible to assess whether the architecture truly mitigates the parametric instability issues highlighted in the introduction.

    Authors: We agree that additional statistical controls and component-wise ablations would improve the experimental section. In the revised manuscript we will augment §4 with: (i) error bars obtained from five independent training runs using different random seeds for all reported metrics; (ii) ablation studies that isolate the reconstruction-and-flux operator (by comparing against a non-antisymmetric message-passing variant) and the ADER predictor (by comparing against a first-order Euler update); and (iii) direct quantification of conservation drift via plots of total mass, momentum, and energy errors over rollout horizons across the parametric variations. These revisions will provide clearer evidence that the structure-preserving design addresses the instability issues raised in the introduction. revision: yes

Circularity Check

0 steps flagged

No circularity: design claims rest on explicit architectural choices rather than self-referential definitions or fitted inputs

full rationale

The paper presents its GNN as a learned reconstruction-and-flux operator explicitly inspired by ADER schemes and classical conservation principles, with preservation of local conservation and upwinding asserted as a direct consequence of that design choice rather than derived from any fitted quantity or prior self-citation. No equations or sections in the provided text reduce a claimed prediction or stability result back to the same fitted parameters by construction; the evaluation on parametric supersonic benchmarks is independent of the model definition. This is a standard non-circular bridging paper whose central claims remain falsifiable against external numerical baselines.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on standard domain assumptions from numerical PDE methods and GNN architectures; no free parameters, new axioms, or invented entities are introduced or fitted in the abstract description.

axioms (1)
  • domain assumption Classical finite-volume and ADER schemes preserve local conservation and upwinding when using reconstruction and flux operators.
    Invoked to justify the GNN design as inherently structure-preserving.
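For reference, the discrete conservation property this axiom invokes is the telescoping of the finite-volume update; standard notation, not taken from the paper:

```latex
% Finite-volume update of the cell average u_i on control volume \Omega_i.
% Each face flux \hat{F}_{ij} enters the two adjacent cells with opposite
% signs, so summing the update over all cells telescopes to zero:
u_i^{n+1} = u_i^{n}
  - \frac{\Delta t}{|\Omega_i|} \sum_{j \in \mathcal{N}(i)} \hat{F}_{ij}\,|\Gamma_{ij}|,
\qquad \hat{F}_{ij} = -\,\hat{F}_{ji}.
```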

pith-pipeline@v0.9.0 · 5535 in / 1300 out tokens · 48206 ms · 2026-05-10T08:03:02.516335+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

87 extracted references · 12 canonical work pages · 4 internal anchors
