Reduced-order modeling of a viscoelastic turbulent jet with hybrid machine learning models
Pith reviewed 2026-05-07 12:49 UTC · model grok-4.3
The pith
Hybrid models using modal decomposition and neural networks predict the long-term statistics of viscoelastic turbulent jets.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The hybrid model combines proper orthogonal decomposition, which yields a compact representation of the data, with a neural network trained to predict the mode coefficients in the resulting low-dimensional space. Results show that the hybrid model effectively captures the long-term behavior of the viscoelastic jet, as demonstrated by computing relevant statistics of the jet. Small models predict large-scale dynamics more than one step ahead, while larger models with skip connections are required for the smaller-scale dynamics.
What carries the argument
The hybrid reduced-order model that uses proper orthogonal decomposition to create a low-dimensional representation of the jet data and trains a neural network to predict the time evolution of the resulting mode coefficients.
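The structure described above is standard enough to sketch end to end. In the minimal numpy sketch below, the snapshot data is synthetic, the mode count is arbitrary, and a linear least-squares map stands in for the paper's neural network; none of these choices are the authors'.

```python
import numpy as np

# Illustrative snapshot matrix: columns are flow-field snapshots in time.
# (Synthetic data stands in for the high-fidelity jet simulations.)
rng = np.random.default_rng(0)
n_points, n_snapshots = 200, 500
X = rng.standard_normal((n_points, n_snapshots))

# --- Step 1: POD via the singular value decomposition. ---
# Columns of U are the POD modes; A holds the time-dependent coefficients.
X_mean = X.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(X - X_mean, full_matrices=False)
r = 10                              # number of retained modes (a free parameter)
modes = U[:, :r]                    # spatial basis, n_points x r
A = modes.T @ (X - X_mean)          # mode coefficients, r x n_snapshots

# --- Step 2: fit a one-step predictor on the coefficients. ---
# A linear map a_{k+1} = W a_k stands in for the trained neural network.
A_now, A_next = A[:, :-1], A[:, 1:]
W = A_next @ np.linalg.pinv(A_now)

# --- Step 3: autonomous multi-step rollout from the first coefficient vector. ---
a = A[:, 0].copy()
rollout = [a]
for _ in range(n_snapshots - 1):
    a = W @ a
    rollout.append(a)
rollout = np.stack(rollout, axis=1)

# Reconstruct full fields from predicted coefficients when needed.
X_pred = X_mean + modes @ rollout
```

The speedup claim hinges on Step 3: once the predictor is trained, the rollout only touches the r-dimensional coefficient space, never the full grid.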
If this is right
- Smaller neural networks enable multi-step predictions of large-scale jet dynamics and thus greater simulation speedups.
- Larger networks become necessary to forecast smaller-scale features in the viscoelastic flow.
- Skip connections provide the most effective architecture for building deeper and more generalizable models.
- The overall approach demonstrates that hybrid reduced-order models can produce compact representations capable of matching jet statistics.
Where Pith is reading between the lines
- The same hybrid structure could be tested on other viscoelastic flows such as channel or pipe flows to check transferability beyond jets.
- Varying polymer concentration in the training data might reveal how model size requirements change with fluid elasticity.
- Coupling the neural network predictor with online adaptation during simulation could reduce long-term drift without full retraining.
Load-bearing premise
The training data from high-fidelity simulations is representative enough for the neural network to generalize to unseen long-time trajectories without retraining or performance drift.
What would settle it
Run the trained hybrid model on a new jet configuration or longer time series outside the training distribution and compare the predicted velocity and polymer stress statistics against a full high-fidelity simulation; significant divergence in those statistics would show the model fails to generalize.
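A test of this kind could be scripted along the following lines. The statistics compared (time-averaged coefficient means and variances), the divergence measure, and the synthetic trajectories are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def statistics_divergence(a_pred, a_ref):
    """Relative divergence between long-time statistics of two
    coefficient trajectories (modes x time arrays)."""
    mean_err = np.linalg.norm(a_pred.mean(axis=1) - a_ref.mean(axis=1))
    var_err = np.linalg.norm(a_pred.var(axis=1) - a_ref.var(axis=1))
    scale = np.linalg.norm(a_ref.mean(axis=1)) + np.linalg.norm(a_ref.var(axis=1))
    return (mean_err + var_err) / scale

rng = np.random.default_rng(1)
a_ref = rng.standard_normal((10, 2000))             # reference trajectory
a_good = a_ref + 0.01 * rng.standard_normal(a_ref.shape)  # faithful model
a_drifted = 1.5 * a_ref + 0.5                       # model with long-term drift

# A generalizing model should score far lower than a drifting one.
d_good = statistics_divergence(a_good, a_ref)
d_bad = statistics_divergence(a_drifted, a_ref)
```

In practice the reference trajectory would come from a full high-fidelity simulation of the unseen configuration, and the velocity and polymer-stress statistics would replace the raw coefficient moments used here.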
Figures
read the original abstract
Adding flexible polymers to a Newtonian solvent confers complex properties to the resulting solution. The additional complexity substantially increases the computational cost of numerical simulations, which often makes them prohibitively expensive. Here, we propose hybrid reduced-order models to accelerate simulations of viscoelastic turbulent jets. The model combines modal decompositions with deep networks: we use proper orthogonal decomposition to obtain a compact representation of the data, and a neural network is trained to predict the mode coefficients in the low-dimensional space. Results show that the hybrid model effectively captures the long-term behavior of the viscoelastic jet, that we demonstrate by computing relevant statistics of the jet. While small models are capable of predicting large-scale dynamics more than one-step at a time, thus facilitating greater accelerations, larger models are mandatory for forecasting smaller-scale dynamics, with skip connections the most effective strategy for deeper and generalizable models. The proposed methodology underpins the potential of hybrid approaches for compact and robust reduced-order models of viscoelastic turbulent jets.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes hybrid reduced-order models for viscoelastic turbulent jets that combine proper orthogonal decomposition (POD) to obtain a low-dimensional representation of the flow with a neural network trained to predict the time evolution of the POD mode coefficients. The central claim is that these hybrid models capture the long-term statistical behavior of the jet, as demonstrated by computing relevant flow statistics; small networks suffice for large-scale dynamics with multi-step prediction, while deeper networks with skip connections are needed for smaller scales.
Significance. If the long-term generalization claims hold under rigorous testing, the work would provide a practical route to accelerating high-fidelity simulations of viscoelastic turbulence, enabling longer-time or parametric studies that remain computationally prohibitive with direct numerical simulation. The hybrid POD-NN strategy is a standard but potentially effective approach for this class of flows.
major comments (3)
- [Abstract] Abstract: the claim that the hybrid model 'effectively captures the long-term behavior' is not accompanied by any quantitative error metrics (e.g., L2 norms on mode coefficients, kinetic-energy spectra, or Reynolds-stress profiles), held-out trajectory validation, or comparison against baseline ROMs such as Galerkin projection or linear autoregressive models; without these, it is impossible to judge whether the reported statistics reflect true attractor fidelity or training-data artifacts.
- [Results] Results (long-term statistics section): the demonstration relies on autonomous multi-step rollouts, yet no evidence is provided on error growth rates, Lyapunov-time horizons, or statistical convergence for trajectories substantially longer than the training window; the skeptic note correctly identifies that teacher-forced or short-horizon predictions would not suffice to support the central claim.
- [Methods] Methods (neural-network architecture): the statement that 'skip connections [are] the most effective strategy for deeper and generalizable models' lacks an ablation study quantifying the effect of skip connections on long-term stability versus plain feed-forward or residual networks; this is load-bearing because generalization without drift is the weakest assumption identified in the review.
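The quantitative diagnostics requested in the first two comments are simple to compute. The sketch below, on synthetic coefficient trajectories, shows one possible form of the per-step relative L2 error and a crude rollout horizon; the function name, the exponential error model, and the 50% threshold are illustrative choices.

```python
import numpy as np

def relative_l2_error(a_pred, a_ref):
    """Per-timestep relative L2 error on the mode coefficients."""
    num = np.linalg.norm(a_pred - a_ref, axis=0)
    den = np.linalg.norm(a_ref, axis=0)
    return num / den

# Synthetic rollout: error grows exponentially until it saturates,
# mimicking the divergence of nearby trajectories in a chaotic system.
rng = np.random.default_rng(2)
a_ref = rng.standard_normal((10, 400))
drift = np.minimum(1e-3 * np.exp(np.arange(400) / 50.0), 1.0)
a_pred = a_ref + drift * rng.standard_normal((10, 400))

err = relative_l2_error(a_pred, a_ref)

# A crude Lyapunov-style horizon: first step where the error exceeds 50%.
horizon = int(np.argmax(err > 0.5)) if np.any(err > 0.5) else len(err)
```

Reporting the slope of `err` on a log scale, together with the horizon, would directly answer the error-growth and Lyapunov-time questions raised above.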
minor comments (2)
- [Methods] The number of retained POD modes and the precise training loss (one-step versus multi-step) should be stated explicitly in the methods; these are free parameters that directly affect the reported acceleration and accuracy.
- [Figures] Figure captions and axis labels for the statistical comparisons should include the exact time horizon used for the autonomous rollout and the number of independent realizations averaged.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed report. The comments highlight important aspects for strengthening the validation of our hybrid POD-NN reduced-order models. We address each major comment below and have revised the manuscript to incorporate additional quantitative evidence and analyses.
read point-by-point responses
-
Referee: [Abstract] Abstract: the claim that the hybrid model 'effectively captures the long-term behavior' is not accompanied by any quantitative error metrics (e.g., L2 norms on mode coefficients, kinetic-energy spectra, or Reynolds-stress profiles), held-out trajectory validation, or comparison against baseline ROMs such as Galerkin projection or linear autoregressive models; without these, it is impossible to judge whether the reported statistics reflect true attractor fidelity or training-data artifacts.
Authors: We agree that the abstract claim benefits from explicit quantitative support. In the revised manuscript we have added L2-norm errors on the POD mode coefficients for multi-step predictions, direct comparisons of kinetic-energy spectra and Reynolds-stress profiles against the reference DNS, held-out trajectory tests on initial conditions excluded from training, and side-by-side evaluations versus baseline ROMs (Galerkin projection and linear autoregressive models). These additions confirm that the reported long-term statistics arise from faithful reproduction of the attractor rather than training artifacts. revision: yes
-
Referee: [Results] Results (long-term statistics section): the demonstration relies on autonomous multi-step rollouts, yet no evidence is provided on error growth rates, Lyapunov-time horizons, or statistical convergence for trajectories substantially longer than the training window; the skeptic note correctly identifies that teacher-forced or short-horizon predictions would not suffice to support the central claim.
Authors: We acknowledge that explicit quantification of error growth and long-horizon stability strengthens the central claim. The original demonstrations already used autonomous rollouts longer than the training window; we have now supplemented the results section with error-growth curves versus time, estimates of the Lyapunov time horizon derived from divergence of nearby trajectories, and convergence diagnostics for statistical quantities on rollouts up to twenty times the training length. These additions directly address the concern that short-horizon or teacher-forced predictions would be insufficient. revision: yes
-
Referee: [Methods] Methods (neural-network architecture): the statement that 'skip connections [are] the most effective strategy for deeper and generalizable models' lacks an ablation study quantifying the effect of skip connections on long-term stability versus plain feed-forward or residual networks; this is load-bearing because generalization without drift is the weakest assumption identified in the review.
Authors: We agree that an ablation study is required to substantiate the architectural choice. We have performed and now report in the revised methods section a controlled ablation comparing plain feed-forward networks, residual networks, and skip-connection networks of comparable depth. The study quantifies long-term stability via accumulated prediction error, drift in higher-order statistics, and generalization on held-out data, confirming that skip connections yield measurably lower drift and better preservation of small-scale features for deeper models. revision: yes
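The architectural distinction at issue in this exchange can be made concrete with minimal forward passes. The layer width, tanh activation, and identity skip form below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def plain_mlp(a, weights):
    """Plain feed-forward stack: h <- tanh(W h) at each layer."""
    h = a
    for W in weights:
        h = np.tanh(W @ h)
    return h

def skip_mlp(a, weights):
    """Same stack with identity skip connections: h <- h + tanh(W h).
    The skip path lets the signal bypass each layer, the property
    credited with stabilizing deeper models."""
    h = a
    for W in weights:
        h = h + np.tanh(W @ h)
    return h

rng = np.random.default_rng(3)
r, depth = 10, 20
weights = [0.05 * rng.standard_normal((r, r)) for _ in range(depth)]
a0 = rng.standard_normal(r)

# With small weights, the deep plain stack squashes the input toward zero,
# while the skip version preserves the signal through all 20 layers.
deep_plain = plain_mlp(a0, weights)
deep_skip = skip_mlp(a0, weights)
```

An ablation of the kind promised in the rebuttal would train all three variants (plain, residual, skip) at matched depth and compare accumulated rollout error and drift in higher-order statistics, rather than the raw signal propagation shown here.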
Circularity Check
No significant circularity detected
full rationale
The paper's core chain uses POD to obtain a low-dimensional representation of high-fidelity simulation snapshots and trains a neural network to map current mode coefficients to future ones. Long-term statistics are then obtained by iterating the trained network autonomously. This structure does not reduce any claimed prediction to its inputs by construction: the network weights are fitted on finite training windows, yet the multi-step rollout and derived statistics are a non-trivial test of whether the learned map reproduces the attractor. No self-definitional equations, fitted-input-renamed-as-prediction, or load-bearing self-citations appear in the abstract or described methodology. The result is therefore an empirical modeling claim whose validity rests on generalization performance rather than tautological equivalence to the training data.
Axiom & Free-Parameter Ledger
free parameters (2)
- Neural network weights and biases
- Number of retained POD modes
axioms (2)
- domain assumption The flow snapshots used for training are statistically representative of the long-time attractor.
- standard math Proper orthogonal decomposition yields an optimal linear basis for the data in the L2 sense.
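For reference, this axiom is the Eckart–Young optimality of POD: among all orthonormal rank-r bases, the POD modes minimize the mean-square reconstruction residual of the snapshots,

```latex
\min_{\substack{\{\phi_i\}_{i=1}^{r} \\ \langle \phi_i, \phi_j \rangle = \delta_{ij}}}
\;\sum_{k=1}^{N} \Big\| u_k - \sum_{i=1}^{r} \langle u_k, \phi_i \rangle \, \phi_i \Big\|_{L^2}^2 ,
```

with the minimum attained by the leading left singular vectors of the snapshot matrix; Sirovich's snapshot method [29] recovers them from the smaller temporal correlation matrix when snapshots are fewer than grid points.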
Reference graph
Works this paper leans on
- [1] F. Serafini, F. Battista, P. Gualtieri, and C. M. Casciola. Drag reduction in turbulent wall-bounded flows of realistic polymer solutions. Phys. Rev. Lett., 129:104502, 2022.
- [2] M. E. Rosti, P. Perlekar, and D. Mitra. Large is different: Nonmonotonic behavior of elastic range scaling in polymeric turbulence at large Reynolds and Deborah numbers. Sci. Adv., 9(11):eadd3831, 2023.
- [3] R. G. Larson. Instabilities in viscoelastic flows. Rheol. Acta, 31:213–263, 1992.
- [4] A. Groisman and V. Steinberg. Elastic turbulence in a polymer solution flow. Nature, 405:53–55, 2000.
- [5] R. K. Singh, P. Perlekar, D. Mitra, and M. E. Rosti. Intermittency in the not-so-smooth elastic turbulence. Nat. Commun., 15:4070, 2024.
- [6] M. C. Guimarães, N. Pimentel, F. T. Pinho, and C. B. da Silva. Direct numerical simulations of turbulent viscoelastic jets. J. Fluid Mech., 899, 2020.
- [7] S. Yamani, B. Keshavarz, Y. Raj, T. A. Zaki, G. H. McKinley, and I. Bischofberger. Spectral universality of elastoinertial turbulence. Phys. Rev. Lett., 127:074501, 2021.
- [8] S. Yamani, Y. Raj, T. A. Zaki, G. H. McKinley, and I. Bischofberger. Spatiotemporal signatures of elastoinertial turbulence in viscoelastic planar jets. Phys. Rev. Fluids, 8:064610, 2023.
- [9] G. Soligo and M. E. Rosti. Non-Newtonian turbulent jets at low-Reynolds number. Int. J. Multiphas. Flow, 129:104546, 2023.
- [10] V. Steinberg. Elastic turbulence: An experimental view on inertialess random flow. Annu. Rev. Fluid Mech., 53:27–58, 2021.
- [11] R. Keunings. On the high Weissenberg number problem. J. Non-Newton. Fluid Mech., 20:209–226, 1986.
- [12] S. L. Brunton, B. R. Noack, and P. Koumoutsakos. Machine learning for fluid mechanics. Annu. Rev. Fluid Mech., 52:477–508, 2020.
- [13] K. Taira, S. L. Brunton, S. T. M. Dawson, C. W. Rowley, T. Colonius, B. J. McKeon, O. T. Schmidt, S. Gordeyev, V. Theofilis, and L. S. Ukeiley. Modal analysis of fluid flows: An overview. AIAA J., 55:4013, 2017.
- [14] T. Murata, K. Fukami, and K. Fukagata. Nonlinear mode decomposition with convolutional neural networks for fluid dynamics. J. Fluid Mech., 882:A13, 2020.
- [15] H. Eivazi, S. Le Clainche, S. Hoyas, and R. Vinuesa. Towards extraction of orthogonal and parsimonious non-linear modes from turbulent flows. Exp. Sys. With Appl., 202:117038, 2022.
- [16] H. Eivazi, H. Veisi, M. H. Naderi, and V. Esfahanian. Deep neural networks for nonlinear model order reduction of unsteady flows. Phys. Fluids, 32:105104, 2020.
- [17] A. Solera-Rico, C. S. Vila, M. Gómez-López, A. Almashjary, S. T. M. Dawson, Y. Wang, and R. Vinuesa. β-Variational autoencoders and transformers for reduced-order modelling of fluid flows. Nat. Commun., 15:1361, 2024.
- [18] G. E. Karniadakis, I. G. Kevrekidis, L. Lu, P. Perdikaris, S. Wang, and L. Yang. Physics-informed machine learning. Nat. Rev. Phys., 3:422–440, 2021.
- [19] J. L. Lumley. Stochastic Tools in Turbulence. New York: Academic, 1970.
- [20] Y. Wang, H. Ma, W. Cai, H. Zhang, J. Cheng, and X. Zheng. A POD-Galerkin reduced-order model for two-dimensional Rayleigh-Bénard convection with viscoelastic fluid. Int. Commun. Heat Mass Transf., 117:104747, 2020.
- [21] C. M. Oishi, A. A. Kaptanoglu, J. N. Kutz, and S. L. Brunton. Nonlinear parametric models of viscoelastic fluid flows. R. Soc. Open Sci., 11:240995, 2024.
- [22] M. Kumar, R. Constante-Amores, and M. D. Graham. Elastoinertial turbulence: data-driven reduced-order model based on manifold dynamics. J. Fluid Mech., 1007:R1, 2025.
- [23] J. Kim and P. Moin. Application of a fractional-step method to incompressible Navier-Stokes equations. J. Comput. Phys., 59:308–323, 1985.
- [24] R. Fattal and R. Kupferman. Constitutive laws for the matrix-logarithm of the conformation tensor. J. Non-Newton. Fluid Mech., 123:281–285, 2004.
- [25] M. A. Hulsen, R. Fattal, and R. Kupferman. Flow of viscoelastic fluids past a cylinder at high Weissenberg number: stabilized simulations using matrix logarithms. J. Non-Newton. Fluid Mech., 127:27–39, 2005.
- [26] C. W. Shu. High order weighted essentially nonoscillatory schemes for convection dominated problems. SIAM Review, 51:82–126, 2009.
- [27] K. Sugiyama, S. Ii, S. Takeuchi, S. Takagi, and Y. Matsumoto. A full Eulerian finite difference approach for solving fluid–structure coupling problems. J. Comput. Phys., 230:596–627, 2011.
- [28] I. Orlanski. A simple boundary condition for unbounded hyperbolic flows. J. Comput. Phys., 21:251–269, 1976.
- [29] L. Sirovich. Turbulence and the dynamics of coherent structures. I. Coherent structures. Quart. Appl. Math., 45:561–571, 1987.
- [30] R. Abadía-Heredia, B. Carro, J. I. Arribas, J. M. Pérez, and S. Le Clainche. A predictive hybrid reduced order model based on proper orthogonal decomposition combined with deep learning architectures. Exp. Sys. With Appl., 187:115910, 2022.
- [31] P. A. Srinivasan, L. Guastoni, H. Azizpour, P. Schlatter, and R. Vinuesa. Predictions of turbulent shear flows using deep neural networks. Phys. Rev. Fluids, 4:054603, 2019.
- [32] T. Nakamura, K. Fukami, K. Hasegawa, Y. Nabae, and K. Fukagata. Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow. Phys. Fluids, 33:025116, 2021.
- [33] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
- [34] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv:1607.06450, 2016.
- [35] D. P. Kingma and J. Ba. ADAM: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, 2015.
- [36] K. Fukami, T. Nakamura, and K. Fukagata. Convolutional neural network based hierarchical autoencoder for nonlinear mode decomposition of fluid field data. Phys. Fluids, 32:095110, 2020.
- [37] R. Abadía-Heredia, A. Corrochano, M. López-Martín, and S. Le Clainche. Generalization capabilities and robustness of hybrid models grounded in physics compared to purely deep learning models. Phys. Fluids, 37:035149, 2025.
discussion (0)