Optimal-Transport-Guided Functional Flow Matching for Turbulent Field Generation in Hilbert Space
Pith reviewed 2026-05-10 19:33 UTC · model grok-4.3
The pith
Flow matching defined in Hilbert space along optimal-transport paths generates turbulent fields whose high-order statistics match reference data.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
FOT-CFM treats physical fields as elements of an infinite-dimensional Hilbert space and learns resolution-invariant generative dynamics at the level of probability measures, using optimal transport to construct deterministic straight-line probability paths between the noise and data measures.
What carries the argument
Functional Optimal Transport Conditional Flow Matching (FOT-CFM), which defines conditional flow matching in Hilbert space and uses optimal transport to form straight probability paths for functional data generation.
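The core construction can be sketched on a discretized stand-in for the Hilbert space: pair a noise minibatch with a data minibatch by exact optimal transport, then regress a velocity model onto the constant slope of the resulting straight line. The helper name `ot_cfm_pairs` and the use of `scipy.optimize.linear_sum_assignment` for the minibatch OT coupling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_cfm_pairs(x0, x1, rng):
    """Build OT-paired straight-path training samples for conditional flow matching.

    x0, x1: (batch, n_grid) arrays -- discretized fields standing in for
    elements of L^2. Returns (x_t, t, v_target), where v_target = x1 - x0
    is the constant velocity of the straight path x_t = (1 - t) x0 + t x1.
    """
    # Squared L2 cost between every noise/data pair; on a uniform grid the
    # discrete L^2 cost reduces to a scaled Euclidean distance.
    cost = ((x0[:, None, :] - x1[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)  # exact minibatch OT coupling
    x0, x1 = x0[rows], x1[cols]
    t = rng.uniform(size=(len(x0), 1))
    x_t = (1.0 - t) * x0 + t * x1             # deterministic straight-line path
    return x_t, t, x1 - x0                    # regression target for the velocity net

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 64))             # "noise" functions on a 64-point grid
x1 = rng.standard_normal((8, 64)) + 2.0       # "data" functions
x_t, t, v = ot_cfm_pairs(x0, x1, rng)
```

Because the target velocity is known in closed form along each straight path, training needs no simulation of the forward dynamics; this is the "simulation-free" property claimed below.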
If this is right
- Enables training without simulating the forward dynamics at each step.
- Speeds up generation of new field samples compared to iterative grid-based methods.
- Produces fields whose statistics align more closely with reference turbulent data on tested chaotic systems.
- Remains invariant to spatial resolution because operations occur in function space rather than on discrete pixels.
Where Pith is reading between the lines
- The framework could be extended by adding explicit conservation laws or dissipation terms to improve stability over long time horizons.
- Similar Hilbert-space constructions might apply to generating other continuous functional data such as electromagnetic fields or density distributions in biology.
- Hybrid models could combine this generative approach with traditional numerical solvers to correct drift in data-driven predictions.
Load-bearing premise
Deterministic straight-line paths from optimal transport in Hilbert space can capture the chaotic multi-scale intermittency of turbulence without additional physics-based constraints.
What would settle it
If samples drawn from the trained model fail to reproduce the energy spectra or high-order moments observed in an independent set of turbulent flow realizations from the Navier-Stokes or similar equations, the central claim would not hold.
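The spectral half of this test can be made concrete with a standard diagnostic: a radially averaged energy spectrum computed by FFT, compared between model samples and reference realizations. A minimal sketch (the `energy_spectrum` helper and the synthetic single-mode field are illustrative, not from the paper):

```python
import numpy as np

def energy_spectrum(field):
    """Isotropic (radially averaged) energy spectrum of a 2D periodic field.

    If model samples fail to match the reference spectrum at high
    wavenumbers, small-scale turbulent content is wrong.
    """
    n = field.shape[0]
    f_hat = np.fft.fft2(field) / field.size
    energy = 0.5 * np.abs(f_hat) ** 2
    kx = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
    k_mag = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
    k_bins = np.arange(1, n // 2, dtype=float)
    spectrum = np.array([energy[(k_mag >= k - 0.5) & (k_mag < k + 0.5)].sum()
                         for k in k_bins])
    return k_bins, spectrum

# Sanity check on a field with a single wavenumber-4 mode:
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
field = np.sin(4 * x)[:, None] + np.zeros((128, 128))
k, spec = energy_spectrum(field)               # peak lands in the k = 4 bin
```

High-order moments (e.g. flatness of velocity increments) would be computed from the same samples to probe intermittency beyond second-order statistics.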
Original abstract
High-fidelity modeling of turbulent flows requires capturing complex spatiotemporal dynamics and multi-scale intermittency, posing a fundamental challenge for traditional knowledge-based systems. While deep generative models, such as diffusion models and Flow Matching, have shown promising performance, they are fundamentally constrained by their discrete, pixel-based nature. This limitation restricts their applicability in turbulence computing, where data inherently exists in a functional form. To address this gap, we propose Functional Optimal Transport Conditional Flow Matching (FOT-CFM), a generative framework defined directly in infinite-dimensional function space. Unlike conventional approaches defined on fixed grids, FOT-CFM treats physical fields as elements of an infinite-dimensional Hilbert space, and learns resolution-invariant generative dynamics directly at the level of probability measures. By integrating Optimal Transport (OT) theory, we construct deterministic, straight-line probability paths between noise and data measures in Hilbert space. This formulation enables simulation-free training and significantly accelerates the sampling process. We rigorously evaluate the proposed system on a diverse suite of chaotic dynamical systems, including the Navier-Stokes equations, Kolmogorov Flow, and Hasegawa-Wakatani equations, all of which exhibit rich multi-scale turbulent structures. Experimental results demonstrate that FOT-CFM achieves superior fidelity in reproducing high-order turbulent statistics and energy spectra compared to state-of-the-art baselines.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces Functional Optimal Transport Conditional Flow Matching (FOT-CFM), a generative framework operating directly in infinite-dimensional Hilbert space for synthesizing turbulent fields. It constructs deterministic straight-line probability paths via optimal transport between noise and data measures to enable simulation-free training and resolution-invariant sampling. The approach is tested on the Navier-Stokes equations, Kolmogorov flow, and Hasegawa-Wakatani equations, with the central claim that it achieves superior fidelity in reproducing high-order turbulent statistics and energy spectra relative to state-of-the-art baselines.
Significance. If the empirical results are robust, the work offers a meaningful advance in generative modeling for functional scientific data by moving beyond grid-based discretizations. The combination of flow matching with OT-induced straight paths in Hilbert space provides an efficient, measure-theoretic route to resolution-independent generation, which could benefit turbulence simulation and related chaotic systems. The multi-system evaluation is a strength.
Major comments (2)
- [Methods (FOT-CFM objective)] Methods section describing the FOT-CFM objective: the conditional flow-matching loss is defined solely via the OT-induced straight paths without explicit terms enforcing the divergence-free constraint (for incompressible NS) or the nonlinear advection/dissipation operators of the underlying PDEs. This is load-bearing for the claim of faithful high-order statistics, as the learned vector field on the probability path may not implicitly respect these structures.
- [Results (high-order statistics)] Results section on high-order statistics and energy spectra: the reported superiority lacks ablations isolating the contribution of the OT guidance versus standard functional CFM, and no quantitative tables with error bars or statistical significance tests are referenced to support the fidelity gains on intermittent structures.
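The first concern is directly checkable: measure how far sampled velocity fields drift from the divergence-free manifold. A hedged sketch using spectral differentiation on a periodic grid (the helper name and test field are illustrative, not part of the paper):

```python
import numpy as np

def divergence_l2(u, v):
    """L2 norm of du/dx + dv/dy for a 2D periodic velocity field (u, v).

    Near zero for an incompressible field; growth of this norm over
    generated samples would indicate the learned measure violates the
    divergence-free constraint the loss does not enforce explicitly.
    """
    n = u.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers on [0, 2*pi)
    kx, ky = k[:, None], k[None, :]
    div_hat = 1j * kx * np.fft.fft2(u) + 1j * ky * np.fft.fft2(v)
    return np.sqrt((np.fft.ifft2(div_hat).real ** 2).mean())

# Divergence-free test field from a streamfunction psi = cos(x) sin(y):
# u = d(psi)/dy, v = -d(psi)/dx.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u, v = np.cos(X) * np.cos(Y), np.sin(X) * np.sin(Y)
```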
Minor comments (1)
- [Abstract] Abstract: the phrase 'rigorously evaluate' is used without naming the specific baselines or metrics, which should be clarified for precision.
Simulated Author's Rebuttal
We thank the referee for their positive assessment of the significance of the work and for the constructive major comments. We address each point below and have revised the manuscript to incorporate clarifications and additional analyses where appropriate.
Point-by-point responses
Referee: [Methods (FOT-CFM objective)] Methods section describing the FOT-CFM objective: the conditional flow-matching loss is defined solely via the OT-induced straight paths without explicit terms enforcing the divergence-free constraint (for incompressible NS) or the nonlinear advection/dissipation operators of the underlying PDEs. This is load-bearing for the claim of faithful high-order statistics, as the learned vector field on the probability path may not implicitly respect these structures.
Authors: We appreciate the referee highlighting this aspect of the formulation. FOT-CFM is a data-driven generative model that learns the pushforward map between noise and data measures in Hilbert space; the training data are drawn from solutions of the target PDEs and therefore already satisfy the relevant constraints (e.g., divergence-free fields for incompressible Navier-Stokes). Consequently, samples drawn from the learned measure reproduce the physical structures in a distributional sense, which is corroborated by the superior high-order statistics reported across all three systems. In the revised manuscript we have added a dedicated paragraph in the Methods section that explicitly discusses this implicit enforcement via measure matching and outlines possible future extensions that could incorporate physics-informed residuals into the objective. Revision: yes.
Referee: [Results (high-order statistics)] Results section on high-order statistics and energy spectra: the reported superiority lacks ablations isolating the contribution of the OT guidance versus standard functional CFM, and no quantitative tables with error bars or statistical significance tests are referenced to support the fidelity gains on intermittent structures.
Authors: We agree that an explicit ablation isolating the OT component and more rigorous quantitative reporting would strengthen the results. The revised manuscript now includes a new ablation subsection that compares FOT-CFM directly against a standard functional conditional flow-matching baseline (identical architecture and training protocol but without OT-guided paths). The ablation demonstrates that the OT component is responsible for the observed gains in high-order statistics. We have also added tables that report mean errors together with standard deviations computed over five independent random seeds, as well as p-values from paired t-tests confirming that the improvements are statistically significant. Revision: yes.
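The reporting promised in this response can be sketched with hypothetical numbers: per-seed errors for the two models, summarized as mean and standard deviation, with a paired t-test across seeds. The error values below are invented placeholders for illustration, not the paper's results.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-seed spectrum errors for the OT-guided model and the
# plain functional-CFM ablation (five seeds, paired by seed).
err_fot = np.array([0.041, 0.038, 0.044, 0.040, 0.039])
err_cfm = np.array([0.063, 0.058, 0.071, 0.066, 0.060])

mean_fot, sd_fot = err_fot.mean(), err_fot.std(ddof=1)   # report as mean +/- sd
mean_cfm, sd_cfm = err_cfm.mean(), err_cfm.std(ddof=1)
stat, p_value = ttest_rel(err_fot, err_cfm)              # paired across seeds
```

Pairing by seed (rather than an unpaired test) is what makes the comparison fair: both models see the same initialization and data shuffling per seed, so seed-level noise cancels in the difference.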
Circularity Check
No circularity: FOT-CFM is a new construction from standard OT and flow-matching primitives
full rationale
The paper defines FOT-CFM directly in Hilbert space by combining established Optimal Transport (for straight probability paths) with conditional flow matching; the abstract and described framework present this as an independent synthesis rather than a re-derivation of its own outputs. No equations, claims, or experimental results are shown to reduce by construction to fitted parameters, self-citations, or renamed inputs. Evaluations on Navier-Stokes, Kolmogorov, and Hasegawa-Wakatani systems are treated as external benchmarks. The derivation chain remains self-contained with independent content.