pith. machine review for the scientific record.

arxiv: 2604.15645 · v1 · submitted 2026-04-17 · 💻 cs.LG · physics.comp-ph · quant-ph

Recognition: unknown

PINNACLE: An Open-Source Computational Framework for Classical and Quantum PINNs

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 08:43 UTC · model grok-4.3

classification 💻 cs.LG · physics.comp-ph · quant-ph
keywords physics-informed neural networks · PINNs · hybrid quantum-classical · benchmarking · computational framework · open-source · machine learning for physics

The pith

The PINNACLE framework shows that PINNs are sensitive to design choices and cost more than classical solvers, and that hybrid quantum variants can use fewer parameters in some regimes

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces PINNACLE as an open-source framework for physics-informed neural networks that supports modern training methods, multi-GPU scaling, and hybrid quantum-classical models in one workflow. It runs systematic tests on problems such as conservation laws, fluid flows, and wave propagation to measure how choices like Fourier embeddings or adaptive loss balancing affect accuracy and speed. The benchmarks confirm that these networks often demand more computing resources than traditional numerical methods. They also identify cases where adding quantum components reduces the number of parameters needed while maintaining performance. This gives concrete data that helps decide when learning-based approaches are practical for solving physics equations.
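Among those design choices, the Fourier feature embedding is easy to state concretely. A minimal sketch in plain PyTorch, in the spirit of Tancik et al. [20]; the mapping size and scale sigma are illustrative assumptions, not the paper's settings:

```python
import torch

class FourierFeatures(torch.nn.Module):
    """Map inputs x to [sin(2*pi*x@B), cos(2*pi*x@B)] with a fixed random B."""
    def __init__(self, in_dim: int, mapping_size: int = 64, sigma: float = 5.0):
        super().__init__()
        # Fixed (untrained) Gaussian projection; sigma sets the frequency band.
        self.register_buffer("B", sigma * torch.randn(in_dim, mapping_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        proj = 2.0 * torch.pi * x @ self.B
        return torch.cat((torch.sin(proj), torch.cos(proj)), dim=-1)
```

Prepending such a map to an MLP widens the frequencies the network can represent easily, which is the usual mechanism credited when these embeddings improve PINN accuracy.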

Core claim

PINNACLE provides a modular system with classical PINN enhancements including Fourier feature embeddings, random weight factorization, strict boundary enforcement, adaptive loss balancing, curriculum training, and second-order optimization, plus extensions to hybrid quantum-classical PINNs with a derived estimate of circuit-evaluation complexity under parameter-shift differentiation. Benchmarks on 1D hyperbolic conservation laws, incompressible flows, and electromagnetic wave propagation quantify impacts on convergence, accuracy, and cost, along with distributed scaling analysis. The results establish that PINNs are highly sensitive to architectural and training choices, remain more costly than classical numerical solvers, and exhibit regimes where hybrid quantum models offer improved parameter efficiency.
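For readers new to the object under test: a PINN turns the governing PDE into a residual loss through automatic differentiation. A minimal sketch for the viscous Burgers' equation u_t + u·u_x = ν·u_xx (the setting of Figure 2), in plain PyTorch; this illustrates the generic recipe, not PINNACLE's actual API:

```python
import torch

def burgers_residual_loss(net: torch.nn.Module, x: torch.Tensor, t: torch.Tensor,
                          nu: float = 0.01 / torch.pi) -> torch.Tensor:
    """Mean squared PDE residual u_t + u*u_x - nu*u_xx at collocation points."""
    # Collocation tensors must be leaf tensors so autograd can differentiate
    # the network output with respect to them.
    x, t = x.requires_grad_(True), t.requires_grad_(True)
    u = net(torch.cat((x, t), dim=-1))
    grad = lambda out, inp: torch.autograd.grad(
        out, inp, grad_outputs=torch.ones_like(out), create_graph=True)[0]
    u_t, u_x = grad(u, t), grad(u, x)
    u_xx = grad(u_x, x)  # second derivative, enabled by create_graph=True above
    return (u_t + u * u_x - nu * u_xx).pow(2).mean()
```

Initial and boundary conditions contribute further loss terms, and balancing them against this residual is exactly where the adaptive weighting methods benchmarked here come in.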

What carries the argument

The PINNACLE modular workflow that unifies training strategies, multi-GPU acceleration, and hybrid quantum-classical architectures to support systematic benchmarking and complexity analysis of PINN performance

If this is right

  • Enhancements such as Fourier embeddings and adaptive balancing change convergence and accuracy on the benchmark problems
  • PINNs require higher computational resources than classical numerical solvers across tested cases
  • Hybrid quantum-classical PINNs achieve improved parameter efficiency in certain identified regimes
  • Distributed data parallel training shows measurable runtime and memory scaling on multi-GPU hardware
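On the last point, multi-GPU PINN training of this kind typically rests on stock PyTorch distributed data parallelism. A hedged sketch of such a setup; the sharding of collocation points and every name below are our assumptions for illustration, not PINNACLE's configuration:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train_ddp(model_fn, collocation: torch.Tensor, epochs: int = 100):
    # One process per GPU, launched e.g. with torchrun (which sets the env vars
    # init_process_group reads).
    dist.init_process_group("nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)
    model = DDP(model_fn().cuda(rank), device_ids=[rank])
    # Shard the collocation points across ranks; DDP all-reduces gradients.
    shard = collocation.chunk(dist.get_world_size())[rank].cuda(rank)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = model(shard).pow(2).mean()  # stand-in for the PDE residual loss
        loss.backward()                    # triggers the gradient all-reduce
        opt.step()
    dist.destroy_process_group()
```

Because gradients are all-reduced once per step, runtime scaling hinges on the ratio of per-shard compute to communication, which is what the reported scaling curves measure.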

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Researchers can use the framework to prototype and compare solver options quickly before selecting a method for a new problem
  • The circuit complexity estimate could be checked against timings on real quantum devices to test its accuracy
  • Running the same evaluations on three-dimensional or turbulent flow problems might reveal additional regimes where hybrids perform well

Load-bearing premise

The chosen benchmark problems and the analytical estimate of circuit-evaluation complexity are representative enough to indicate when hybrid quantum PINNs would be preferable in practice

What would settle it

A set of tests on new problems or actual quantum hardware where hybrid models show no parameter reduction or where classical solvers match or beat the reported costs would disprove the efficiency claims

Figures

Figures reproduced from arXiv: 2604.15645 by Aaron Goldgewert, Boris Shragner, Gal Shaviner, Hemanth Chandravamsi, Shimon Pisnoy, Steven H. Frankel, Ziv Chen.

Figure 1: Schematic of a multi-layer perceptron mapping two inputs (…)
Figure 2: Schematic of a basic PINNs model for solving the Burgers' equation. Here, (…)
Figure 3: Schematic of the parametrized quantum circuit (PQC) integration between two classical neural (…)
Figure 4: Schematic of the training model incorporating a (…)
Figure 5: Schematic of two-layer perceptron and the matrix multiplication required for calculating network (…)
Figure 6: Visualization of weights during network initialization and their corresponding discrete probability (…)
Figure 7: Schematic network model showing the imposition of strict periodic boundary condition in (…)
Figure 8: Temporal evolution of the solution for the advection/convection equation with the initial (…)
Figure 9: Temporal evolution of the solution for the Allen-Cahn equation. The plot demonstrates the (…)
Figure 10: PINNs solution of Burgers' equation studied using two domain sizes. 1st column: (a,c,e) (…)
Figure 11: Lid-driven cavity (no curriculum training) centerline velocity profiles for (…)
Figure 12: Lid-driven cavity results at Re = 1000. Panels (a,b) show a baseline model: (a) contour of the horizontal velocity component u, and (b) centerline profiles compared with Ghia et al. [70]. Panels (c,d) show the improved model obtained with curriculum training, 256 × 256 collocation points, and LHS: (c) contour of u, and (d) corresponding centerline profiles u(x = 0.5, y) and v(x, y = 0.5) compared with DNS (…)
Figure 13: Lid-driven cavity centerline velocity profiles at (…)
Figure 14: Collocation points sampled for the 2D stenosis benchmark. Interior collocation points (blue) (…)
Figure 15: Stenosis benchmark validation against CFD. (a,b) Predicted streamwise velocity (…)
Figure 16: Sod shock tube benchmark. Comparison of PINN predictions with a finite-volume CFD (…)
Figure 17: Two-dimensional Riemann problem, configuration 6. Comparison of PINN and CFD (computed (…)
Figure 18: Electric field Ez (top row) and absolute error (bottom row) at t = 1.5 for the 2-D Gaussian pulse in a periodic box, computed using different PINN configurations and a Padé reference solution.
Figure 19: L2 errors for all combinations of ansatz, angle scaling, and energy conservation loss. With the energy-conservation constraint, several QPINN configurations outperform the classical baseline while using fewer parameters (approximately 19% reduction); the best configuration improves the relative L2 error by up to ∼19%.
Figure 20: GPU scaling results for 100 epochs with 40,000 collocation points across A100, A6000, and (…)
Figure 21: Memory capacity scaling and per-epoch runtime breakdown on the eight-GPU L40S system.
Figure 22: TorQ GPU execution pipeline. Setup precomputes dense entanglers Ent if they are not parameter-dependent, and registers trainable parameters θ on CUDA. In the forward pass, inputs x are angle-scaled and embedded by a batched Kronecker product implemented with torch.einsum to form the statevector ψ ∈ C^(B×2^n). Each layer builds a dense rotation wall Rℓ (via einsum) and composes a dense operator Uℓ with the pre(…)
Figure 23: Average running time per epoch comparison between (…)
Figure 24: Overview of the Distributed Data Parallel (DDP) approach. This schematic illustrates how (…)
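The caption of Figure 22 describes embedding angle-scaled inputs into a statevector through a batched Kronecker product built with torch.einsum. A minimal sketch of that one step, assuming an RY-style angle encoding per qubit (our assumption; TorQ's actual encoding may differ):

```python
import torch

def angle_embed_statevector(x: torch.Tensor) -> torch.Tensor:
    """Batched Kronecker product of per-qubit states: (B, n) angles -> (B, 2**n)."""
    B, n = x.shape
    # Each angle prepares one qubit: RY(theta)|0> = [cos(theta/2), sin(theta/2)].
    qubits = torch.stack((torch.cos(x / 2), torch.sin(x / 2)), dim=-1)
    qubits = qubits.to(torch.complex64)  # (B, n, 2)
    psi = qubits[:, 0, :]                # (B, 2)
    for i in range(1, n):
        # einsum forms the per-sample outer product, then flattening gives
        # the Kronecker product: (B, d) x (B, 2) -> (B, 2d).
        psi = torch.einsum("bi,bj->bij", psi, qubits[:, i, :]).reshape(B, -1)
    return psi
```

Keeping the state as one dense batched tensor is what lets the whole pipeline stay on the GPU, at the cost of 2^n statevector entries per sample.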
Original abstract

We present PINNACLE, an open-source computational framework for physics-informed neural networks (PINNs) that integrates modern training strategies, multi-GPU acceleration, and hybrid quantum-classical architectures within a unified modular workflow. The framework enables systematic evaluation of PINN performance across benchmark problems including 1D hyperbolic conservation laws, incompressible flows, and electromagnetic wave propagation. It supports a range of architectural and training enhancements, including Fourier feature embeddings, random weight factorization, strict boundary condition enforcement, adaptive loss balancing, curriculum training, and second-order optimization strategies, with extensibility to additional methods. We provide a comprehensive benchmark study quantifying the impact of these methods on convergence, accuracy, and computational cost, and analyze distributed data parallel scaling in terms of runtime and memory efficiency. In addition, we extend the framework to hybrid quantum-classical PINNs and derive a formal estimate for circuit-evaluation complexity under parameter-shift differentiation. Results highlight the sensitivity of PINNs to architectural and training choices, confirm their high computational cost relative to classical solvers, and identify regimes where hybrid quantum models offer improved parameter efficiency. PINNACLE provides a foundation for benchmarking physics-informed learning methods and guiding future developments through quantitative assessment of their trade-offs.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper presents PINNACLE, an open-source computational framework for physics-informed neural networks (PINNs) that integrates classical and hybrid quantum-classical architectures within a modular workflow. It supports enhancements such as Fourier feature embeddings, random weight factorization, adaptive loss balancing, curriculum training, and second-order optimization, along with multi-GPU acceleration. The manuscript includes a benchmark study on 1D hyperbolic conservation laws, incompressible flows, and electromagnetic wave propagation, quantifies impacts on convergence/accuracy/cost, analyzes distributed scaling, and derives a formal estimate for circuit-evaluation complexity under parameter-shift differentiation. Results highlight PINN sensitivity to choices, high costs relative to classical solvers, and regimes of improved parameter efficiency for hybrid quantum models.

Significance. If the released code faithfully implements the described methods and benchmarks are reproducible, PINNACLE provides a valuable open-source platform for systematic evaluation of PINN variants, including quantum hybrids. The empirical benchmarks and complexity estimate offer concrete quantitative insights into trade-offs, which is a strength for guiding developments in physics-informed learning. The modular design and extensibility further enhance its utility for the community.

major comments (2)
  1. Benchmark section: the chosen problems (1D hyperbolic conservation laws, incompressible flows, EM propagation) are low-dimensional and noise-free. The claim identifying regimes of improved parameter efficiency for hybrid quantum models rests on these; without discussion of how results extrapolate to noisy or higher-dimensional settings, the practical guidance for hardware use is weakened.
  2. Circuit complexity estimate (derived under parameter-shift differentiation): the formal estimate assumes ideal conditions without decoherence, gate errors, or hardware topology. This directly affects the weakest assumption about representativeness for real-device decisions; the paper should explicitly state the scope of applicability or provide sensitivity analysis.
minor comments (2)
  1. The abstract and introduction could more explicitly link the benchmark results to the complexity estimate to clarify how they jointly support the hybrid quantum claims.
  2. Ensure all implementation details for strict boundary condition enforcement and adaptive loss balancing are accompanied by pseudocode or clear references to the open-source repository for full reproducibility.
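On the second minor comment: the strict periodic case at least admits a compact statement, since exact periodicity can be built into the inputs rather than penalized in the loss (in the spirit of Dong and Ni [26]). A hedged sketch for one coordinate; this is an illustration, not the repository's implementation:

```python
import torch

class PeriodicEmbedding(torch.nn.Module):
    """Embed x -> (cos(2*pi*x/L), sin(2*pi*x/L)) so any network composed after
    this layer is exactly L-periodic in x by construction."""
    def __init__(self, period: float):
        super().__init__()
        self.omega = 2.0 * torch.pi / period

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat((torch.cos(self.omega * x),
                          torch.sin(self.omega * x)), dim=-1)
```

With this embedding in place, the periodic boundary condition holds identically and needs no boundary loss term for that direction.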

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments and positive recommendation. We address each major point below with targeted revisions to clarify scope and assumptions while preserving the manuscript's focus on the framework and benchmarks.

Point-by-point responses
  1. Referee: Benchmark section: the chosen problems (1D hyperbolic conservation laws, incompressible flows, EM propagation) are low-dimensional and noise-free. The claim identifying regimes of improved parameter efficiency for hybrid quantum models rests on these; without discussion of how results extrapolate to noisy or higher-dimensional settings, the practical guidance for hardware use is weakened.

    Authors: We agree the benchmarks use standard low-dimensional, noise-free problems to isolate effects of architecture and training choices, consistent with much of the PINN literature. The parameter-efficiency observations for hybrid models are tied to these controlled settings. In revision we will add a concise limitations paragraph in the discussion section that explicitly notes the benchmark scope, cautions against direct extrapolation to noisy or high-dimensional regimes, and cites relevant work on quantum noise in PINNs. This will strengthen practical guidance without altering the reported results. revision: yes

  2. Referee: Circuit complexity estimate (derived under parameter-shift differentiation): the formal estimate assumes ideal conditions without decoherence, gate errors, or hardware topology. This directly affects the weakest assumption about representativeness for real-device decisions; the paper should explicitly state the scope of applicability or provide sensitivity analysis.

    Authors: The estimate is a formal, ideal-case derivation under parameter-shift rules, as is conventional for theoretical quantum-circuit analyses. We will revise the relevant section to state the ideal assumptions explicitly and frame the result as a lower-bound reference for noise-free circuits. A full sensitivity analysis to decoherence or topology lies outside the paper's scope, but we will add a brief contextual note referencing noisy quantum-PINN literature to clarify applicability to hardware decisions. revision: partial

Circularity Check

0 steps flagged

No circularity in benchmark study or complexity estimate

Full rationale

The paper is a software framework release accompanied by empirical benchmarks on standard PDE problems and a formal derivation of circuit-evaluation complexity for hybrid quantum PINNs under the parameter-shift rule. No load-bearing claim reduces by construction to its own inputs: the complexity estimate follows directly from counting parameter-shift evaluations on a standard quantum circuit ansatz without fitting or self-referential redefinition, and the reported performance numbers are direct measurements from the implemented code on the chosen test cases. No self-citation is used to justify uniqueness or to smuggle an ansatz, and the work does not rename known empirical patterns as novel derivations. The central results remain independent empirical observations and a straightforward complexity calculation.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The paper is a software-framework contribution rather than a theoretical derivation, so the axiom ledger contains only standard background assumptions from neural-network training and quantum-circuit differentiation.

axioms (1)
  • standard math Standard assumptions of automatic differentiation and the parameter-shift rule for quantum gradients hold.
    Invoked when deriving the circuit-evaluation complexity estimate.
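For reference, the standard parameter-shift identity the ledger invokes, for a gate generated by a Pauli operator; the evaluation count below is our arithmetic, not quoted from the paper:

```latex
% Gradient of the expectation <H> with respect to circuit parameter theta_k:
\frac{\partial \langle \hat{H} \rangle}{\partial \theta_k}
  = \frac{1}{2}\left[
      \langle \hat{H} \rangle\!\left(\theta_k + \tfrac{\pi}{2}\right)
    - \langle \hat{H} \rangle\!\left(\theta_k - \tfrac{\pi}{2}\right)
    \right]
```

A circuit with P trainable parameters therefore needs 2P circuit evaluations per first-order gradient, and each further derivative taken through the circuit, as PDE residuals require, multiplies the number of shifted evaluations again; this is the counting behind the paper's circuit-evaluation complexity estimate.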

pith-pipeline@v0.9.0 · 5540 in / 1254 out tokens · 28222 ms · 2026-05-10T08:43:34.629193+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

77 extracted references · 10 canonical work pages · 3 internal anchors

[1] Isaac E. Lagaris, Aristidis Likas, and Dimitrios I. Fotiadis. Artificial neural networks for solving ordinary and partial differential equations. IEEE Transactions on Neural Networks, 9(5):987–1000, 1998.
[2] Maziar Raissi, Paris Perdikaris, and George E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686–707, 2019.
[3] Shengze Cai, Zhicheng Wang, Sifan Wang, Paris Perdikaris, and George Em Karniadakis. Physics-informed neural networks for heat transfer problems. Journal of Heat Transfer, 143(6):060801, 2021.
[4] Rui Wang and Rose Yu. Physics-guided deep learning for dynamical systems: A survey. ACM Computing Surveys, 58(5):1–31, 2025.
[5] Luning Sun, Han Gao, Shaowu Pan, and Jian-Xun Wang. Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data. Computer Methods in Applied Mechanics and Engineering, 361:112732, 2020.
[6] Tamer A. Zaki and Mengze Wang. Data assimilation and flow estimation. In Data Driven Analysis and Modeling of Turbulent Flows, pages 129–181. Elsevier, 2025.
[7] George Em Karniadakis, Ioannis G. Kevrekidis, Lu Lu, Paris Perdikaris, Sifan Wang, and Liu Yang. Physics-informed machine learning. Nature Reviews Physics, 3:422–440, 2021.
[8] Sifan Wang, Shyam Sankaran, Xiantao Fan, Panos Stinis, and Paris Perdikaris. Simulating three-dimensional turbulence with physics-informed neural networks. arXiv preprint arXiv:2507.08972, 2025.
[9] Aditi S. Krishnapriyan, Amir Gholami, Shandian Zhe, Robert M. Kirby, and Michael W. Mahoney. Characterizing possible failure modes in physics-informed neural networks. In Advances in Neural Information Processing Systems, volume 34, pages 26548–26560, 2021.
[10] Sifan Wang, Yujun Teng, and Paris Perdikaris. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM Journal on Scientific Computing, 43(5):A3055–A3081, 2021.
[11] Yifan Du, Mengze Wang, and Tamer A. Zaki. State estimation in minimal turbulent channel flow: A comparative study of 4DVar and PINN. International Journal of Heat and Fluid Flow, 99:109073, 2023.
[12] Sifan Wang, Xinling Yu, and Paris Perdikaris. When and why PINNs fail to train: A neural tangent kernel perspective. Journal of Computational Physics, 449:110768, 2022.
[13] Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of neural networks. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5301–5310. PMLR, 2019.
[14] Hemanth Chandravamsi, Dhanush V. Shenoy, Itay Zinn, Ziv Chen, Shimon Pisnoy, and Steven H. Frankel. Spectral bottleneck in sinusoidal representation networks: Noise is all you need, 2025.
[15] Sifan Wang, Shyam Sankaran, and Paris Perdikaris. Respecting causality for training physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 421:116813, 2024.
[16] Gal G. Shaviner, Hemanth Chandravamsi, Shimon Pisnoy, Ziv Chen, and Steven H. Frankel. PINNs for solving unsteady Maxwell's equations: convergence issues and comparative assessment with compact schemes. Neural Computing and Applications, 37(29):24103–24122, 2025.
[17] Andrea Bonfanti, Roberto Santana, Marco Ellero, and Babak Gholami. On the generalization of PINNs outside the training domain and the hyperparameters influencing it. Neural Computing and Applications, 36(36):22677–22696, 2024.
[18] Juan Diego Toscano, Vivek Oommen, Alan John Varghese, Zongren Zou, Nazanin Ahmadi Daryakenari, Chenxi Wu, and George Em Karniadakis. From PINNs to PIKANs: Recent advances in physics-informed machine learning. Machine Learning for Computational Science and Engineering, 1(1):1–43, 2025.
[19] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pages 1177–1184, 2007.
[20] Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains, 2020.
[21] Sifan Wang, Bowen Li, Yuhan Chen, and Paris Perdikaris. PirateNets: Physics-informed deep learning with residual adaptive networks. Journal of Machine Learning Research, 25(402):1–51, 2024.
[22] Ameya D. Jagtap and George Em Karniadakis. Extended physics-informed neural networks (XPINNs): A generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations. Communications in Computational Physics, 28(5):2002–2041, 2020.
[23] Sumanta Roy, Chandrasekhar Annavarapu, Pratanu Roy, and Antareep Kumar Sarma. Adaptive interface-PINNs (AdaI-PINNs): An efficient physics-informed neural networks framework for interface problems. arXiv preprint arXiv:2406.04626, 2024.
[24] Stefano Berrone, Claudio Canuto, Moreno Pintore, and Natarajan Sukumar. Enforcing Dirichlet boundary conditions in physics-informed neural networks and variational physics-informed neural networks. Heliyon, 9(8), 2023.
[25] Lu Lu, Raphael Pestourie, Wenjie Yao, Zhicheng Wang, Francesc Verdugo, and Steven G. Johnson. Physics-informed neural networks with hard constraints for inverse design. SIAM Journal on Scientific Computing, 43(6):B1105–B1132, 2021.
[26] S. Dong and N. Ni. A method for representing periodic functions and enforcing exactly periodic boundary conditions with deep neural networks. Journal of Computational Physics, 435:110242, 2021.
[27] Tim Salimans and Durk P. Kingma. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. Advances in Neural Information Processing Systems, 29, 2016.
[28] Levi D. McClenny and Ulisses M. Braga-Neto. Self-adaptive physics-informed neural networks. Journal of Computational Physics, 474:111722, 2023.
[29] Zhao Chen, Vijay Badrinarayanan, Chen-Yu Lee, and Andrew Rabinovich. GradNorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In International Conference on Machine Learning, pages 794–803. PMLR, 2018.
[30] Chenxi Wu, Min Zhu, Qinyang Tan, Yadhu Kartha, and Lu Lu. A comprehensive study of non-adaptive and residual-based adaptive sampling for physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 403:115671, 2023.
[31] Zhiping Mao and Xuhui Meng. Physics-informed neural networks with residual/gradient-based adaptive sampling methods for solving partial differential equations with sharp solutions. Applied Mathematics and Mechanics, 44(7):1069–1084, 2023.
[32] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41–48, 2009.
[33] Michael Penwarden, Ameya D. Jagtap, Shandian Zhe, George Em Karniadakis, and Robert M. Kirby. A unified scalable framework for causal sweeping strategies for physics-informed neural networks and their temporal decompositions. Journal of Computational Physics, 481:112001, 2023.
[34] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2014.
[35] Dong C. Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(1):503–528, 1989.
[36] Dinesh Singh, Hardik Tankaria, and Makoto Yamada. Nys-Newton: Nyström-approximated curvature for stochastic optimization, 2021.
[37] Pratik Rathore, Weimu Lei, Zachary Frangella, Lu Lu, and Madeleine Udell. Challenges in training PINNs: A loss landscape perspective. In Proceedings of the 41st International Conference on Machine Learning, volume 235 of Proceedings of Machine Learning Research, pages 42159–42191. PMLR, 2024.
[38] Ariel Norambuena, Marios Mattheakis, Francisco J. González, and Raúl Coto. Physics-informed neural networks for quantum control. Physical Review Letters, 132(1):010801, 2024.
[39] Corey Trahan, Mark Loveland, and Samuel Dent. Quantum physics-informed neural networks. Entropy, 26(8):649, 2024.
[40] Ziv Chen, Gal G. Shaviner, Hemanth Chandravamsi, Shimon Pisnoy, Steven H. Frankel, and Uzi Pereg. Quantum physics-informed neural networks for Maxwell's equations: Circuit design, "black hole" barren plateaus mitigation, and GPU acceleration. arXiv preprint arXiv:2506.23246, 2025.
[41] Kosuke Mitarai, Makoto Negoro, Masahiro Kitagawa, and Keisuke Fujii. Quantum circuit learning. Physical Review A, 98(3):032309, 2018.
[42] Maria Schuld, Ville Bergholm, Christian Gogolin, Josh Izaac, and Nathan Killoran. Evaluating analytic gradients on quantum hardware. Physical Review A, 99(3):032331, 2019.
[43] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019.
[44] Lu Lu, Xuhui Meng, Zhiping Mao, and George Em Karniadakis. DeepXDE: A deep learning library for solving differential equations. SIAM Review, 63(1):208–228, 2021.
[45] Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. Mixed precision training, 2018.
[46] Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, et al. Mixed precision training. arXiv preprint arXiv:1710.03740, 2017.
[47] Jinze Xue, Akshay Subramaniam, and Mark Hoemmen. A novel automatic mixed precision approach for physics informed training. Advances in Neural Information Processing Systems, 2022.
[48] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053, 2019.
[49] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, et al. GPipe: Efficient training of giant neural networks using pipeline parallelism. Advances in Neural Information Processing Systems, 32, 2019.
[50] Deepak Narayanan, Mohammad Shoeybi, Jared Casper, Patrick LeGresley, Mostofa Patwary, Vijay Korthikanti, Dmitri Vainbrand, Prethvi Kashinkunti, Julie Bernauer, Bryan Catanzaro, et al. Efficient large-scale language model training on GPU clusters using Megatron-LM. In Proceedings of the International Conference for High Performance Computing, Networking, St…
[51] William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research, 23(120):1–39, 2022.
[52] Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. GShard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668, 2020.
[53] Oliver Hennigh, Susheela Narasimhan, Mohammad Amin Nabian, Akshay Subramaniam, Kaustubh Tangsali, Zhiwei Fang, Max Rietmann, Wonmin Byeon, and Sanjay Choudhry. NVIDIA SimNet™: An AI-accelerated multi-physics simulation framework. In International Conference on Computational Science, pages 447–461. Springer, 2021.
[54] Ehsan Haghighat and Ruben Juanes. SciANN: A Keras/TensorFlow wrapper for scientific computations and physics-informed deep learning using artificial neural networks. Computer Methods in Applied Mechanics and Engineering, 373:113552, 2021.
[55] TorQ (Tensor Operations for Research of Quantum systems), the quantum simulation library created by the authors, available at https://github.com/zivchen9993/TorQ.git.
[56] Sifan Wang, Shyam Sankaran, Hanwen Wang, and Paris Perdikaris. An expert's guide to training physics-informed neural networks, 2023.
[57] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum algorithms for supervised and unsupervised machine learning. arXiv preprint arXiv:1307.0411, 2013.
[58] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2010.
[59] Daniel F. V. James, Paul G. Kwiat, William J. Munro, and Andrew G. White. Measurement of qubits. Physical Review A, 64(5):052312, 2001.
[60] Sifan Wang, Hanwen Wang, Jacob H. Seidman, and Paris Perdikaris. Random weight factorization improves the training of continuous neural representations. arXiv preprint arXiv:2210.01274, 2022.
[61] Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems, 33:7462–7473, 2020.
[62] Zhen Liu, Hao Zhu, Qi Zhang, Jingde Fu, Weibing Deng, Zhan Ma, Yanwen Guo, and Xun Cao. FINER: Flexible spectral-bias tuning in implicit neural representation by variable-periodic activation functions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2713–2722, 2024.
[63] Zixue Xiang, Wei Peng, Xu Liu, and Wen Yao. Self-adaptive loss balanced physics-informed neural networks. Neurocomputing, 496:11–34, 2022.
[64] Shaobo Gu, Xiao Wang, Ziyan Wang, and Jinchao Xu. NysNewton: Fast neural network training with Nyström approximation. arXiv preprint, 2023.
[65] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, 2006.
[66] Sifan Wang, Shyam Sankaran, and Paris Perdikaris. Respecting causality for training physics-informed neural networks. Computer Methods in Applied Mechanics and Engineering, 421:116762.
[67] Preprint originally released in 2022 as arXiv:2203.07404.
[68] Michael Penwarden, Shandian Zhe, Akil Narayan, and Robert M. Kirby. A unified scalable framework for causal sweeping strategies in physics-informed neural networks. Journal of Computational Physics, 493:112461, 2023.
[69] Yifan Du and Tamer A. Zaki. Evolutional deep neural network. Physical Review E, 104(4):045303, 2021.
[70] Yongji Wang, Mehdi Bennani, James Martens, Sébastien Racanière, Sam Blackwell, Alex Matthews, Stanislav Nikolov, Gonzalo Cao-Labora, Daniel S. Park, Martin Arjovsky, et al. Discovery of unstable singularities. arXiv preprint arXiv:2509.14185, 2025.
[71] U. Ghia, K. N. Ghia, and C. T. Shin. High-Re solutions for incompressible flow using the Navier-Stokes equations and a multigrid method. Journal of Computational Physics, 48(3):387–411, 1982.
[72] Amirhossein Arzani, Jian-Xun Wang, and Roshan M. D'Souza. Uncovering near-wall blood flow from sparse data with physics-informed neural networks. Physics of Fluids, 33(7):071905, 2021.
[73] Gary A. Sod. A survey of several finite difference methods for systems of nonlinear hyperbolic conservation laws. Journal of Computational Physics, 27(1):1–31, 1978.
[74] Eleuterio F. Toro. Riemann Solvers and Numerical Methods for Fluid Dynamics. Springer, 3rd edition, 2009.
[75] Carsten W. Schulz-Rinne. Classification of the Riemann problem for two-dimensional gas dynamics. SIAM Journal on Mathematical Analysis, 24(1):76–88, 1993.
[76] Alexander Kurganov, Sebastian Noelle, and Guergana Petrova. Semidiscrete central-upwind schemes for hyperbolic conservation laws and Hamilton–Jacobi equations. SIAM Journal on Scientific Computing, 23(3):707–740, 2001.
[77] Ambady Suresh and Hung T. Huynh. Accurate monotonicity-preserving schemes with Runge–Kutta time stepping. Journal of Computational Physics, 136(1):83–99, 1997.