Balance-Guided Sparse Identification of Multiscale Nonlinear PDEs with Small-coefficient Terms
Pith reviewed 2026-05-10 04:52 UTC · model grok-4.3
The pith
Balance-guided sparse regression identifies small-coefficient terms in nonlinear PDEs by their contribution to the equation balance.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Balance-Guided SINDy reformulates the sparse regression task as a term-level ℓ_{2,0}-regularized problem and solves it with a progressive pruning strategy. Terms are ranked according to their relative contributions to the governing equation balance rather than their absolute coefficient magnitudes. By alternating between least-squares regression and elimination of negligible terms, the method retains dynamically significant terms even when their coefficients are small. Experiments on the Korteweg-de Vries equation with a small dispersion coefficient, a modified Burgers equation with vanishing hyperviscosity, a modified Kuramoto-Sivashinsky equation with multiple small-coefficient terms, and a two-dimensional reaction-diffusion system support the claim that such terms can be recovered.
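The ℓ_{2,0} reformulation mentioned above can be written out explicitly. The notation below is a reconstruction from the abstract, not taken from the paper: Θ(U) denotes the candidate-term library, Ξ the coefficient matrix, and k the retained-term budget.

```latex
\min_{\Xi}\; \left\| U_t - \Theta(U)\,\Xi \right\|_F^2
\quad \text{s.t.} \quad \left\| \Xi \right\|_{2,0} \le k ,
```

where ||Ξ||_{2,0} counts the rows of Ξ (i.e., candidate terms) with nonzero ℓ₂ norm, so the constraint limits how many terms may appear in the recovered equation regardless of how small their coefficients are.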
What carries the argument
The progressive pruning strategy that ranks and eliminates terms according to their relative contribution to the instantaneous governing-equation balance.
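The alternating fit-and-prune loop can be sketched in a few lines of NumPy. This is a hypothetical reconstruction from the description above, not the authors' implementation: the function name `bg_sindy`, the column-norm-times-coefficient contribution metric, and the fixed term budget `n_terms` are all assumptions.

```python
import numpy as np

def bg_sindy(theta, dudt, n_terms):
    """Balance-guided pruning sketch (assumed form, not the paper's code).

    Alternates least-squares fits with elimination of the term whose
    *relative contribution to the equation balance* is smallest, rather
    than the term whose coefficient is smallest."""
    active = list(range(theta.shape[1]))
    while True:
        # Least-squares fit restricted to the currently active terms.
        xi, *_ = np.linalg.lstsq(theta[:, active], dudt, rcond=None)
        if len(active) <= n_terms:
            return active, xi
        # Balance contribution of each term: norm of its library column
        # scaled by its fitted coefficient, normalized over active terms.
        contrib = np.array([np.linalg.norm(theta[:, j] * xi[k])
                            for k, j in enumerate(active)])
        contrib /= contrib.sum()
        # Eliminate the term with the smallest relative contribution.
        active.pop(int(np.argmin(contrib)))
```

On synthetic data where a small coefficient multiplies a high-magnitude column (as a high-order derivative of oscillatory data would be), this ranking keeps the small-coefficient term, whereas a threshold on the coefficient alone would drop it.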
Where Pith is reading between the lines
- The same balance-ranking idea could be tested on ordinary differential equations or on real experimental time series from engineering systems.
- If the ranking proves stable, it might be combined with other sparsity penalties to handle even higher-dimensional or chaotic multiscale problems.
- A direct check would be to apply the method to data generated from a known PDE whose small term is known to control long-term behavior and verify recovery.
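A toy version of that direct check can contrast coefficient-magnitude ranking with balance ranking on synthetic library columns. The column scales and coefficients below are invented for illustration; the high-magnitude column stands in for a high-order derivative such as u_xxx.

```python
import numpy as np

# One column with a small true coefficient but large activity, one with
# coefficient 1 and modest activity.
rng = np.random.default_rng(1)
n = 500
u = rng.normal(size=n)            # stand-in for a low-order term
d3 = 200.0 * rng.normal(size=n)   # stand-in for a high-order derivative
theta = np.column_stack([u, d3])
coeffs = np.array([1.0, 0.02])    # the "small-coefficient" term is index 1
dudt = theta @ coeffs

# Ranking by coefficient magnitude would prune index 1 first...
mag_rank = np.argsort(np.abs(coeffs))
# ...while ranking by contribution to the balance prunes index 0 first,
# because the small coefficient multiplies a large column.
balance = np.array([np.linalg.norm(theta[:, j] * coeffs[j]) for j in range(2)])
bal_rank = np.argsort(balance)
print("magnitude ranking would prune term", mag_rank[0])
print("balance ranking would prune term", bal_rank[0])
```

The two rankings disagree exactly when a term's dynamical activity compensates for its small coefficient, which is the regime the review identifies as the method's target.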
Load-bearing premise
Ranking terms by relative contribution to the instantaneous governing-equation balance correctly identifies which small-coefficient terms are dynamically significant.
What would settle it
An experiment on the Korteweg-de Vries equation in which the dispersion coefficient is made progressively smaller while noise is added to the data, showing whether the small dispersion term is still recovered or is incorrectly pruned.
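A sketch of that sweep, under stated assumptions: the harness below uses synthetic stand-in columns rather than actual KdV solution data, and a single balance-ranked pruning pass in place of the full algorithm. The function name and column scales are invented for illustration.

```python
import numpy as np

def recovers_small_term(eps, noise, n=400, seed=0):
    """Hypothetical harness: does one balance-ranked pruning pass keep a
    term whose true coefficient is `eps` once noise is added? The columns
    stand in for KdV's u*u_x and u_xxx library terms."""
    rng = np.random.default_rng(seed)
    adv = rng.normal(size=n)            # stand-in for u*u_x
    disp = 30.0 * rng.normal(size=n)    # stand-in for u_xxx (large column)
    spur = rng.normal(size=n)           # spurious candidate term
    theta = np.column_stack([adv, disp, spur])
    dudt = -adv + eps * disp + noise * rng.normal(size=n)
    # One pruning pass: fit, then drop the lowest-balance term.
    xi, *_ = np.linalg.lstsq(theta, dudt, rcond=None)
    balance = np.linalg.norm(theta * xi, axis=0)
    keep = np.argsort(balance)[1:]      # prune exactly one term
    return 1 in keep                    # did the dispersion term survive?

for eps in (1e-1, 1e-2, 1e-3):
    for noise in (0.0, 0.01, 0.1):
        print(f"eps={eps:g} noise={noise:g} recovered={recovers_small_term(eps, noise)}")
```

The interesting question the experiment would answer is where in the (eps, noise) plane the recovery flips to False for the real method on real KdV data.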
Original abstract
Data-driven discovery of governing equations has advanced significantly in recent years; however, existing methods often struggle in multiscale systems where dynamically significant terms may have small coefficients. Therefore, we propose Balance-Guided SINDy (BG-SINDy) inspired by the principle of dominant balance, which reformulates $\ell_0$-constrained sparse regression as a term-level $\ell_{2,0}$-regularized problem and solves it using a progressive pruning strategy. Terms are ranked according to their relative contributions to the governing equation balance rather than their absolute coefficient magnitudes. Based on this criterion, BG-SINDy alternates between least-squares regression and elimination of negligible terms, thereby preserving dynamically significant terms even when their coefficients are small. Numerical experiments on the Korteweg--de Vries equation with a small dispersion coefficient, a modified Burgers equation with vanishing hyperviscosity, a modified Kuramoto--Sivashinsky equation with multiple small-coefficient terms, and a two-dimensional reaction--diffusion system demonstrate the validity of BG-SINDy in discovering small-coefficient terms. The proposed method thus provides an efficient approach for discovering governing equations that contain small-coefficient terms.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces Balance-Guided SINDy (BG-SINDy), which reformulates ℓ₀-constrained sparse regression for PDE discovery as a term-level ℓ_{2,0} problem solved by progressive pruning. Terms are ranked and eliminated according to their relative contribution to the instantaneous residual balance of the governing equation rather than absolute coefficient size. This is claimed to recover dynamically significant small-coefficient terms in multiscale systems. Validity is asserted via numerical experiments on the KdV equation with small dispersion, a modified Burgers equation, a modified Kuramoto–Sivashinsky equation, and a 2D reaction–diffusion system.
Significance. If the central claim holds, the work would supply a lightweight, balance-aware extension to the SINDy family that addresses a recurring practical failure mode—loss of small but dynamically essential terms—without adding free parameters beyond the pruning schedule. The procedural nature (alternating least-squares and balance-ranked elimination) is attractive for reproducibility, yet the absence of quantitative recovery metrics, ablation studies, and out-of-sample dynamical tests limits the strength of the significance assessment at present.
Major comments (3)
- §4 (numerical experiments): The four example systems are presented only with qualitative recovery statements; no coefficient error norms, trajectory prediction errors on held-out initial conditions, or long-time integration tests are reported. This leaves the central claim that small-coefficient terms are “preserved” without quantitative support and prevents direct comparison to standard SINDy or other balance-aware baselines.
- §3 (algorithm description): No derivation or invariance argument is supplied showing that the relative-balance ranking remains reliable when the small term is invisible in the training trajectory or becomes dominant only after long-time evolution. The skeptic’s concern is therefore not addressed: the method could still discard a term whose coefficient is orders of magnitude smaller yet essential outside the sampled regime.
- Table 1 / Figure 2 (KdV and modified Burgers results): The reported “recovered” equations are shown without the corresponding residual norms or comparison against a simple coefficient-threshold baseline run on the same data; it is therefore impossible to isolate the contribution of the balance-guided pruning step.
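For reference, the coefficient-threshold baseline the report asks for is a few lines. This is a sketch of standard sequentially thresholded least squares (the STLSQ step of SINDy); the threshold value is arbitrary and chosen here only to illustrate the failure mode.

```python
import numpy as np

def stlsq(theta, dudt, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares (standard SINDy baseline):
    small *coefficients* are zeroed outright, regardless of how much
    their terms contribute to the equation balance."""
    xi, *_ = np.linalg.lstsq(theta, dudt, rcond=None)
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            # Refit on the surviving terms only.
            xi[big], *_ = np.linalg.lstsq(theta[:, big], dudt, rcond=None)
    return xi
```

Run on data where a dynamically large term has a true coefficient of 0.01, a threshold of 0.05 zeroes that term on the first pass, which is exactly the failure mode BG-SINDy is designed to avoid; reporting this baseline on the same data would isolate the contribution of the balance-guided step.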
Minor comments (2)
- [§3] Notation for the balance residual and the pruning threshold schedule is introduced without a compact algorithmic pseudocode block; a single-line algorithm box would improve clarity.
- [Abstract] The abstract states that the method “demonstrates the validity” of BG-SINDy, yet the experiments section contains no statistical summary across multiple noise realizations or initial-condition ensembles.
Simulated Author's Rebuttal
We thank the referee for the constructive and insightful comments. We address each major point below, indicating the revisions we will incorporate to strengthen the manuscript.
Point-by-point responses
- Referee: §4 (numerical experiments): The four example systems are presented only with qualitative recovery statements; no coefficient error norms, trajectory prediction errors on held-out initial conditions, or long-time integration tests are reported. This leaves the central claim that small-coefficient terms are “preserved” without quantitative support and prevents direct comparison to standard SINDy or other balance-aware baselines.
Authors: We agree that quantitative metrics are necessary to rigorously support the central claims and enable comparisons. In the revised manuscript we will add relative coefficient error norms, out-of-sample trajectory prediction errors on held-out initial conditions, and long-time integration stability tests for all four example systems. These additions will also include direct comparisons against standard SINDy. revision: yes
- Referee: §3 (algorithm description): No derivation or invariance argument is supplied showing that the relative-balance ranking remains reliable when the small term is invisible in the training trajectory or becomes dominant only after long-time evolution. The skeptic’s concern is therefore not addressed: the method could still discard a term whose coefficient is orders of magnitude smaller yet essential outside the sampled regime.
Authors: This concern is valid. The balance-ranking criterion is computed from the training data; if a term contributes negligibly to the observed residual balance it may be pruned even if it becomes important later. We will expand the discussion in §3 to explicitly state this assumption and the requirement for sufficiently rich training trajectories, but a full invariance proof lies beyond the current scope. revision: partial
- Referee: Table 1 / Figure 2 (KdV and modified Burgers results): The reported “recovered” equations are shown without the corresponding residual norms or comparison against a simple coefficient-threshold baseline run on the same data; it is therefore impossible to isolate the contribution of the balance-guided pruning step.
Authors: We will augment Table 1 and Figure 2 with residual norms of the recovered equations and add side-by-side comparisons against a standard coefficient-threshold SINDy baseline applied to identical data. This will isolate the effect of the balance-guided pruning. revision: yes
- Not addressed in revision: a rigorous derivation or invariance argument guaranteeing that relative-balance ranking recovers terms invisible in the training data but dominant at long times.
Circularity Check
No significant circularity: BG-SINDy is a procedural heuristic built on standard sparse regression.
Full rationale
The paper introduces BG-SINDy as an algorithmic modification to SINDy that ranks candidate terms by their relative contribution to the instantaneous residual balance and prunes via progressive elimination before re-solving least squares. This construction does not reduce any claimed result to a fitted parameter that is later renamed a prediction, nor does it rely on a self-citation chain or uniqueness theorem imported from the authors' prior work. The governing equations recovered in the numerical examples are obtained directly from the data via the described procedure; they are not equivalent to the inputs by definition. The central modeling choice (balance-based ranking) is presented as an empirical heuristic inspired by dominant balance, not as a derived identity. Hence the derivation chain is self-contained and non-circular.
Axiom & Free-Parameter Ledger
Free parameters (1)
- pruning threshold schedule
Axioms (1)
- Domain assumption: The principle of dominant balance holds for the target multiscale systems.
Reference graph
Works this paper leans on
- [1] H. Voss, M. J. Bünner, M. Abel, Identification of continuous, spatiotemporal systems, Phys. Rev. E 57 (1998) 2820–2823. https://doi.org/10.1103/PhysRevE.57.2820
- [2] M. Bär, R. Hegger, H. Kantz, Fitting partial differential equations to space-time dynamics, Phys. Rev. E 59 (1999) 337–342. https://doi.org/10.1103/PhysRevE.59.337
- [3] M. Schmidt, H. Lipson, Distilling Free-Form Natural Laws from Experimental Data, Science 324 (5923) (2009) 81–85. https://doi.org/10.1126/science.1165893
- [4] J. Bongard, H. Lipson, Automated reverse engineering of nonlinear dynamical systems, Proc. Natl. Acad. Sci. U.S.A. 104 (24) (2007) 9943–9948. https://doi.org/10.1073/pnas.0609476104
- [6] S. L. Brunton, J. L. Proctor, J. N. Kutz, Discovering governing equations from data by sparse identification of nonlinear dynamical systems, Proc. Natl. Acad. Sci. U.S.A. 113 (15) (2016) 3932–3937. https://doi.org/10.1073/pnas.1517384113
- [7] S. H. Rudy, S. L. Brunton, J. L. Proctor, J. N. Kutz, Data-driven discovery of partial differential equations, Sci. Adv. 3 (4) (2017) e1602614. https://doi.org/10.1126/sciadv.1602614
- [8] K. Champion, B. Lusch, J. N. Kutz, S. L. Brunton, Data-driven discovery of coordinates and governing equations, Proc. Natl. Acad. Sci. U.S.A. 116 (45) (2019) 22445–22451. https://doi.org/10.1073/pnas.1906995116
- [9]
- [10] L. Prandtl, Über Flüssigkeitsbewegung bei sehr kleiner Reibung, in: Verhandlungen des III. Internationalen Mathematiker-Kongresses, Heidelberg 1904, B. G. Teubner, Leipzig, 1905, pp. 484–491.
- [11] H. Schlichting, K. Gersten, Boundary-Layer Theory, 9th Edition, Springer, Berlin, 2017. https://doi.org/10.1007/978-3-662-52919-5
- [12] C. D. Murray, S. F. Dermott, Solar System Dynamics, Cambridge University Press, Cambridge, 2000. https://doi.org/10.1017/CBO9781139174817
- [13] G.-J. Both, S. Choudhury, P. Sens, R. Kusters, DeepMoD: Deep learning for model discovery in noisy data, J. Comput. Phys. 428 (2021) 109985. https://doi.org/10.1016/j.jcp.2020.109985
- [14] C. M. Bender, S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers: Asymptotic Methods and Perturbation Theory, Springer, New York, 1999. https://doi.org/10.1007/978-1-4757-3069-2
- [15] A. H. Nayfeh, Perturbation Methods, John Wiley & Sons, Hoboken, 2024.
- [16] M. H. Holmes, Introduction to Perturbation Methods, Springer, New York, 2012.
- [17] J. K. Kevorkian, J. D. Cole, Multiple Scale and Singular Perturbation Methods, Vol. 114, Springer, New York, 2012.
- [18] D. A. Messenger, D. M. Bortz, Weak SINDy for partial differential equations, J. Comput. Phys. 443 (2021) 110525. https://doi.org/10.1016/j.jcp.2021.110525
- [19] U. Fasel, J. N. Kutz, B. W. Brunton, S. L. Brunton, Ensemble-SINDy: Robust sparse model discovery in the low-data, high-noise limit, with active learning and control, Proc. R. Soc. A, Math. Phys. Eng. Sci. 478 (2260) (2022) 20210904. https://doi.org/10.1098/rspa.2021.0904
- [20] B. M. de Silva, K. Champion, M. Quade, J.-C. Loiseau, J. N. Kutz, S. L. Brunton, PySINDy: A Python package for the sparse identification of nonlinear dynamical systems from data, J. Open Source Softw. 5 (49) (2020) 2104. https://doi.org/10.21105/joss.02104
- [21] L. Zhang, S. Tang, G. He, Learning chaotic systems from noisy data via multi-step optimization and adaptive training, Chaos 32 (12) (2022) 123134. https://doi.org/10.1063/5.0114542
- [22] M. Raissi, P. Perdikaris, G. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, J. Comput. Phys. 378 (2019) 686–707. https://doi.org/10.1016/j.jcp.2018.10.045
- [23] G. E. Karniadakis, I. G. Kevrekidis, L. Lu, P. Perdikaris, S. Wang, L. Yang, Physics-informed machine learning, Nat. Rev. Phys. 3 (6) (2021) 422–440. https://doi.org/10.1038/s42254-021-00314-5
- [24] Z. Chen, Y. Liu, H. Sun, Physics-informed learning of governing equations from scarce data, Nat. Commun. 12 (1) (2021) 6136. https://doi.org/10.1038/s41467-021-26434-1
- [25] L. Zhang, G. He, Scale-Decomposed Physics-Informed Neural Networks for singular perturbation problems, Commun. Comput. Phys. 39 (4) (2026) 1267–1298. https://doi.org/10.4208/cicp.OA-2024-0149
- [26] S.-M. Udrescu, M. Tegmark, AI Feynman: A physics-inspired method for symbolic regression, Sci. Adv. 6 (16) (2020) eaay2631. https://doi.org/10.1126/sciadv.aay2631
- [27] Y. Chen, Y. Luo, Q. Liu, H. Xu, D. Zhang, Symbolic genetic algorithm for discovering open-form partial differential equations (SGA-PDE), Phys. Rev. Res. 4 (2) (2022) 023174. https://doi.org/10.1103/PhysRevResearch.4.023174
- [28] M. Cranmer, Interpretable Machine Learning for Science with PySR and SymbolicRegression.jl (2023). arXiv:2305.01582
- [29] Z. Liu, Y. Wang, S. Vaidya, F. Ruehle, J. Halverson, M. Soljacic, T. Y. Hou, M. Tegmark, KAN: Kolmogorov–Arnold networks, in: The Thirteenth International Conference on Learning Representations, 2025.
- [30] Z. Zhou, Z. Xu, Y. Liu, S. Wang, asKAN: Active subspace embedded Kolmogorov–Arnold network, Neural Netw. 195 (2026) 108280. https://doi.org/10.1016/j.neunet.2025.108280
- [31] M. Du, Y. Chen, Z. Wang, L. Nie, D. Zhang, Large language models for automatic equation discovery of nonlinear dynamics, Phys. Fluids 36 (9) (2024) 097121. https://doi.org/10.1063/5.0224297
- [32] H. Xu, Y. Chen, R. Cao, T. Tang, M. Du, J. Li, A. H. Callaghan, D. Zhang, Generative discovery of partial differential equations by learning from math handbooks, Nat. Commun. 16 (1) (2025) 10255. https://doi.org/10.1038/s41467-025-65114-2
- [33] D. R. Gurevich, M. R. Golden, P. A. Reinbold, R. O. Grigoriev, Learning fluid physics from highly turbulent data using sparse physics-informed discovery of empirical relations (SPIDER), J. Fluid Mech. 996 (2024) A25. https://doi.org/10.1017/jfm.2024.813
- [34]
- [35] M. Golden, R. O. Grigoriev, J. Nambisan, A. Fernandez-Nieves, Physically informed data-driven modeling of active nematics, Sci. Adv. 9 (27) (2023) eabq6120. https://doi.org/10.1126/sciadv.abq6120
- [36]
- [37] M. Golden, Scalable Sparse Regression for Model Discovery: The Fast Lane to Insight (2024). arXiv:2405.09579
- [38] Z. Li, H. Yuan, W. Han, Y. Hou, H. Li, H. Ding, Z. Jiang, L. Yang, Bi-level identification of governing equations for nonlinear physical systems, Nat. Comput. Sci. 5 (6) (2025) 456–466. https://doi.org/10.1038/s43588-025-00804-x
- [39] E. J. Hinch, Perturbation Methods, Cambridge University Press, Cambridge, 1991. https://doi.org/10.1017/CBO9781139172189
- [40] J. Kevorkian, J. D. Cole, Multiple Scale and Singular Perturbation Methods, Springer, New York, 1996. https://doi.org/10.1007/978-1-4612-3968-0
- [41] J. L. Callaham, J. V. Koch, B. W. Brunton, J. N. Kutz, S. L. Brunton, Learning dominant physical processes with data-driven balance models, Nat. Commun. 12 (1) (2021) 1016. https://doi.org/10.1038/s41467-021-21331-z
- [42] K. Fukami, T. Murata, K. Zhang, K. Fukagata, Sparse identification of nonlinear dynamics with low-dimensionalized flow representations, J. Fluid Mech. 926 (2021) A10. https://doi.org/10.1017/jfm.2021.697
- [43] K. Kaheman, J. N. Kutz, S. L. Brunton, SINDy-PI: a robust algorithm for parallel implicit sparse identification of nonlinear dynamics, Proc. R. Soc. A, Math. Phys. Eng. Sci. 476 (2234) (2020) 20200279. https://doi.org/10.1098/rspa.2020.0279
- [44] A. G. Baydin, B. A. Pearlmutter, A. A. Radul, J. M. Siskind, Automatic Differentiation in Machine Learning: a Survey, J. Mach. Learn. Res. 18 (153) (2018) 1–43.
- [45] H. Xu, H. Chang, D. Zhang, DL-PDE: Deep-learning based data-driven discovery of partial differential equations from discrete and noisy data, Commun. Comput. Phys. 29 (3) (2021) 698–728. https://doi.org/10.4208/cicp.OA-2020-0142
- [46] K. Kaheman, S. L. Brunton, J. Nathan Kutz, Automatic differentiation to simultaneously identify nonlinear dynamics and extract noise probability distributions from data, Mach. Learn.: Sci. Technol. 3 (1) (2022) 015031. https://doi.org/10.1088/2632-2153/ac567a
- [47] B. K. Natarajan, Sparse Approximate Solutions to Linear Systems, SIAM J. Comput. 24 (2) (1995) 227–234. https://doi.org/10.1137/S0097539792240406
- [48] X. Cai, F. Nie, H. Huang, Exact top-k feature selection via ℓ2,0-norm constraint, in: Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, Vol. 13, 2013, pp. 1240–1246.
- [49] T. Pang, F. Nie, J. Han, X. Li, Efficient Feature Selection via ℓ2,0-norm Constrained Sparse Regression, IEEE Trans. Knowl. Data Eng. 31 (5) (2019) 880–893. https://doi.org/10.1109/TKDE.2018.2847685
- [50] P. Lemos, N. Jeffrey, M. Cranmer, S. Ho, P. Battaglia, Rediscovering orbital mechanics with machine learning, Mach. Learn.: Sci. Technol. 4 (4) (2022) 045021. https://doi.org/10.1088/2632-2153/acfa63
- [51] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, S. Chintala, PyTorch: An Imperative Style, High-Performance Deep Learning Library, in: Advances in Neural Information Processing Systems, 2019.
- [52] G. H. Golub, C. F. Van Loan, Matrix Computations, 4th Edition, Johns Hopkins University Press, Baltimore, MD, 2013.
- [53] D. J. Korteweg, G. de Vries, XLI. On the change of form of long waves advancing in a rectangular canal, and on a new type of long stationary waves, Phil. Mag. 39 (240) (1895) 422–443. https://doi.org/10.1080/14786449508620739
- [54] H. Xu, D. Zhang, Robust discovery of partial differential equations in complex situations, Phys. Rev. Res. 3 (3) (2021) 033270. https://doi.org/10.1103/PhysRevResearch.3.033270
- [55] R. Stephany, C. Earls, Weak-PDE-LEARN: A weak form-based approach to discovering PDEs from noisy, limited data, J. Comput. Phys. 506 (2024) 112950. https://doi.org/10.1016/j.jcp.2024.112950
- [56] İ. Dağ, Y. Dereli, Numerical solutions of KdV equation using radial basis functions, Appl. Math. Modell. 32 (4) (2008) 535–546. https://doi.org/10.1016/j.apm.2007.02.001
- [57] A.-K. Kassam, L. N. Trefethen, Fourth-Order Time-Stepping for Stiff PDEs, SIAM J. Sci. Comput. 26 (4) (2005) 1214–1233. https://doi.org/10.1137/S1064827502410633
- [58] M. D. McKay, R. J. Beckman, W. J. Conover, Comparison of Three Methods for Selecting Values of Input Variables in the Analysis of Output from a Computer Code, Technometrics 21 (2) (1979) 239–245. https://doi.org/10.1080/00401706.1979.10489755
- [59] E. Tadmor, Burgers’ equation with vanishing hyper-viscosity, Commun. Math. Sci. 2 (2) (2004) 317–324. https://doi.org/10.4310/CMS.2004.V2.N2.A9
- [60] U. Frisch, S. Kurien, R. Pandit, W. Pauls, S. S. Ray, A. Wirth, J.-Z. Zhu, Hyperviscosity, Galerkin truncation, and bottlenecks in turbulence, Phys. Rev. Lett. 101 (4) (2008) 144501. https://doi.org/10.1103/PhysRevLett.101.144501
- [61] J. C. Butcher, Implicit Runge-Kutta Processes, Math. Comp. 18 (85) (1964) 50–64. https://doi.org/10.1090/S0025-5718-1964-0159424-9
- [62]
- [63] J. D. Murray, Mathematical Biology II: Spatial Models and Biomedical Applications, 3rd Edition, Springer, New York, 2003.
- [64] J. R. Dormand, P. J. Prince, A Family of Embedded Runge-Kutta Formulae, J. Comput. Appl. Math. 6 (1) (1980) 19–26. https://doi.org/10.1016/0771-050X(80)90013-3
- [65] P. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey, İ. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen... (2020).
- [66] L. Zhao, Z. Li, B. Caswell, J. Ouyang, G. E. Karniadakis, Active learning of constitutive relation from mesoscopic dynamics for macroscopic modeling of non-Newtonian flows, J. Comput. Phys. 363 (2018) 116–127. https://doi.org/10.1016/j.jcp.2018.02.039
- [67] L. Zanna, T. Bolton, Data-Driven Equation Discovery of Ocean Mesoscale Closures, Geophys. Res. Lett. 47 (17) (2020) e2020GL088376. https://doi.org/10.1029/2020GL088376
- [68] B. Sanderse, P. Stinis, R. Maulik, S. E. Ahmed, Scientific machine learning for closure models in multiscale problems: A review, Found. Data Sci. 7 (1) (2025) 298–337. https://doi.org/10.3934/fods.2024043
- [69] B. Choi, M. Ugliotti, M. Reynoso, D. R. Gurevich, R. O. Grigoriev, Data-driven modeling of multiscale phenomena with applications to fluid turbulence, Phys. Rev. E (2026). https://doi.org/10.1103/rdj9-cjm9
- [70] K. Jakhar, Y. Guan, P. Hassanzadeh, Analytical and AI-Discovered Stable, Accurate, and Generalizable Subgrid-Scale Closure for Geophysical Turbulence, Phys. Rev. Lett. 136 (2026) 064201. https://doi.org/10.1103/v28b-5qmp