pith. machine review for the scientific record

arxiv: 2604.08586 · v1 · submitted 2026-03-30 · 💻 cs.LG · cs.AI · physics.flu-dyn

Recognition: no theorem link

FluidFlow: a flow-matching generative model for fluid dynamics surrogates on unstructured meshes

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 21:17 UTC · model grok-4.3

classification 💻 cs.LG · cs.AI · physics.flu-dyn

keywords flow-matching · generative model · fluid dynamics · surrogate modeling · unstructured mesh · computational fluid dynamics · transformer · airfoil

The pith

Conditional flow-matching builds generative surrogates for fluid dynamics that operate directly on unstructured meshes and beat multilayer perceptron baselines.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents FluidFlow as a generative model that learns to transport noise distributions to CFD solution distributions using conditional flow-matching. The approach conditions the transport on physical parameters such as operating conditions and applies the model directly to mesh data without any interpolation step. It evaluates two backbone networks, a U-Net and a diffusion transformer, on an airfoil pressure-coefficient task and a full three-dimensional aircraft geometry task. In both cases the flow-matching models record lower error metrics and stronger generalization across unseen operating points than standard multilayer perceptron regressors. The results indicate that deterministic flow-matching provides a workable route to scalable, mesh-faithful surrogate modeling for expensive many-query fluid problems.
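The noise-to-data transport described above amounts to integrating a learned ordinary differential equation. The sketch below is illustrative, not the authors' implementation: `velocity_field` stands in for the trained, parameter-conditioned backbone, and the Euler scheme and step count are assumptions.

```python
import numpy as np

def sample(velocity_field, x0, cond, n_steps=100):
    """Transport a noise sample x0 to a data sample by Euler-integrating
    dx/dt = v(x, t, cond) from t = 0 to t = 1."""
    x = np.array(x0, dtype=float)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * velocity_field(x, t, cond)  # one Euler step along the flow
    return x
```

In FluidFlow's setting, `velocity_field` would be the trained U-Net or DiT conditioned on the operating parameters, and `x0` a Gaussian noise field defined on the mesh nodes.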

Core claim

FluidFlow learns deterministic transport maps between noise and fluid-flow data distributions on both structured and unstructured meshes by conditioning a flow-matching objective on physically meaningful parameters; when trained on benchmark CFD data it produces pressure and friction coefficient fields whose errors are substantially smaller than those of multilayer perceptron baselines while preserving geometric fidelity and generalizing across operating conditions.

What carries the argument

Conditional flow-matching, which trains a neural network to predict a velocity field that carries noise samples to data samples in a deterministic, non-stochastic manner, applied directly to mesh-based CFD fields and conditioned on scalar operating parameters.
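A minimal sketch of the conditional flow-matching objective this paragraph describes, with a stand-in `model` in place of the U-Net/DiT backbones. The linear interpolation path and constant-velocity target are the standard choices in the flow-matching literature, assumed here rather than taken from the paper.

```python
import numpy as np

def cfm_loss(model, x0, x1, t, cond):
    """Conditional flow-matching loss for one batch.

    x0: noise samples, x1: data samples, both (batch, dim);
    t: times in [0, 1], shape (batch, 1); cond: conditioning parameters.
    """
    xt = (1.0 - t) * x0 + t * x1   # linear interpolation between noise and data
    target = x1 - x0               # constant velocity along that straight path
    pred = model(xt, t, cond)      # network's predicted velocity field
    return float(np.mean((pred - target) ** 2))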

If this is right

  • Pressure-coefficient predictions along airfoil boundaries exhibit significantly lower error metrics than multilayer perceptron baselines.
  • Friction and pressure fields on a full three-dimensional aircraft mesh are predicted with improved accuracy and better generalization across operating conditions.
  • A diffusion-transformer backbone scales to large unstructured datasets while retaining high predictive accuracy.
  • No mesh interpolation preprocessing is required, so geometric fidelity of the original CFD discretization is preserved.
  • The same framework supplies a flexible surrogate for any many-query CFD task once the flow-matching model has been trained on representative data.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same conditional flow-matching construction could be applied to other physics problems defined on irregular meshes, such as structural or electromagnetic simulations.
  • Adding explicit conservation constraints as auxiliary losses might further reduce violations of mass or momentum balance in the generated fields.
  • Because the model produces entire fields at once, it could accelerate design-optimization loops that require thousands of flow evaluations on fixed geometries.
  • Time-dependent or unsteady flows could be handled by extending the conditioning vector to include time or previous states.

Load-bearing premise

The assumption that training solely on existing benchmark CFD data is enough for the model to generalize accurately to unseen operating conditions and real-world geometries without adding explicit conservation laws.

What would settle it

Large prediction errors on pressure or friction coefficients for an aircraft geometry at a Mach number or angle of attack lying outside the training distribution.

Figures

Figures reproduced from arXiv: 2604.08586 by David Ramos, Eusebio Valero, Fermín Gutiérrez, Gonzalo Rubio, Lucas Lacasa.

Figure 1: U-Net architecture and its main components.
Figure 2: One-dimensional patchification used in the diffusion transformer.
Figure 3: Scatter plots of the true vs. predicted pressure coefficient, for all locations and operating conditions.
Figure 4: Examples of predicted and reference airfoil pressure distributions for different operating conditions.
Figure 5: Comparison between (ground-truth) CFD pressure/friction coefficient fields (top panels) and …
Figure 6: Strong scaling speed-up of the DiT model for the aircraft dataset as a function of the number …
read the original abstract

Computational fluid dynamics (CFD) provides high-fidelity simulations of fluid flows but remains computationally expensive for many-query applications. In recent years deep learning (DL) has been used to construct data-driven fluid-dynamic surrogate models. In this work we consider a different learning paradigm and embrace generative modelling as a framework for constructing scalable fluid-dynamics surrogate models. We introduce FluidFlow, a generative model based on conditional flow-matching, a recent alternative to diffusion models that learns deterministic transport maps between noise and data distributions. FluidFlow is specifically designed to operate directly on CFD data defined on both structured and unstructured meshes alike, without the needs to perform any mesh interpolation pre-processing and preserving geometric fidelity. We assess the capabilities of FluidFlow using two different core neural network architectures, a U-Net and diffusion transformer (DiT), and condition their learning on physically meaningful parameters. The methodology is validated on two benchmark problems of increasing complexity: prediction of pressure coefficients along an airfoil boundary across different operating conditions, and prediction of pressure and friction coefficients over a full three-dimensional aircraft geometry discretized on a large unstructured mesh. In both cases, FluidFlow outperform strong multilayer perceptron baselines, achieving significantly lower error metrics and improved generalisation across operating conditions. Notably, the transformer-based architecture enables scalable learning on large unstructured datasets while maintaining high predictive accuracy. These results demonstrate that flow-matching generative models provide an effective and flexible framework for surrogate modelling in fluid dynamics, with potential for realistic engineering and scientific applications.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 1 minor

Summary. The paper introduces FluidFlow, a conditional flow-matching generative model for fluid-dynamics surrogates that operates directly on CFD data defined on structured or unstructured meshes. It employs U-Net and diffusion-transformer (DiT) backbones conditioned on physical parameters, and validates the approach on two benchmarks of increasing complexity: prediction of pressure coefficients along an airfoil boundary across operating conditions, and prediction of pressure and friction coefficients over a full 3D aircraft geometry on a large unstructured mesh. The central claim is that FluidFlow outperforms strong multilayer-perceptron baselines with significantly lower error metrics and improved generalization, while the transformer variant scales to large unstructured datasets.

Significance. If the quantitative claims hold, the work supplies a scalable generative framework for CFD surrogates that avoids mesh-interpolation preprocessing and preserves geometric fidelity. The flow-matching formulation and DiT backbone on unstructured data constitute a concrete advance over standard MLP or CNN surrogates for many-query engineering tasks. The absence of explicit physics constraints, however, leaves open whether pointwise accuracy translates to physically consistent integrated quantities on unseen conditions.

major comments (3)
  1. [Abstract] The claim that FluidFlow 'outperform[s] strong multilayer perceptron baselines, achieving significantly lower error metrics' is unsupported by any numerical values, error bars, or validation-protocol details, preventing assessment of whether the reported improvement is load-bearing for the generalization statement.
  2. [§3] Method description (U-Net and DiT backbones conditioned on physical parameters): the training objective contains no physics-informed loss terms, divergence-free projections, or post-hoc conservation corrections. This directly affects the central generalization claim, because generative sampling can produce low pointwise L2 errors while violating integrated conservation laws (lift, drag) on complex unstructured meshes.
  3. [§4] Benchmark results: the evaluation on held-out operating conditions and the 3D aircraft case reports only pointwise coefficient errors; no comparison of integrated aerodynamic forces or mesh-convergence checks is described, leaving the physical fidelity of the learned transport maps unverified.
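One concrete form the missing physics term could take, sketched here on a uniform 2D grid for illustration only (the paper's meshes are unstructured, and no such term appears in the manuscript): a soft penalty on the discrete divergence of a predicted velocity field, to be added to the flow-matching loss with some weight.

```python
import numpy as np

def divergence_penalty(u, v, dx=1.0, dy=1.0):
    """Mean squared discrete divergence of a 2D velocity field (u, v)
    sampled on a uniform grid (rows = y, columns = x); zero for
    incompressible fields."""
    du_dx = np.gradient(u, dx, axis=1)
    dv_dy = np.gradient(v, dy, axis=0)
    return float(np.mean((du_dx + dv_dy) ** 2))
```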
minor comments (1)
  1. [Abstract] The phrase 'significantly lower error metrics' should be replaced by concrete metrics (e.g., mean L2 error, relative error) once the full results are presented.
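For reference, the relative error metric the comment asks for is conventionally computed as below; this is a common definition, not necessarily the paper's exact one.

```python
import numpy as np

def relative_l2_error(pred, true):
    """Relative L2 error: ||pred - true||_2 / ||true||_2."""
    return float(np.linalg.norm(pred - true) / np.linalg.norm(true))
```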

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their thorough review and constructive comments on our manuscript. We address each of the major comments point by point below, proposing revisions to the manuscript where appropriate to address the concerns raised.

read point-by-point responses
  1. Referee: [Abstract] The claim that FluidFlow 'outperform[s] strong multilayer perceptron baselines, achieving significantly lower error metrics' is unsupported by any numerical values, error bars, or validation-protocol details, preventing assessment of whether the reported improvement is load-bearing for the generalization statement.

    Authors: We agree with the referee that the abstract would be strengthened by including specific numerical results. The detailed error metrics, including relative L2 errors with standard deviations across multiple runs, are presented in Section 4. In the revised manuscript, we will incorporate key quantitative values into the abstract, such as the average error reductions compared to the MLP baseline on both the airfoil and 3D aircraft benchmarks. revision: yes

  2. Referee: [§3] Method description (U-Net and DiT backbones conditioned on physical parameters): the training objective contains no physics-informed loss terms, divergence-free projections, or post-hoc conservation corrections. This directly affects the central generalization claim, because generative sampling can produce low pointwise L2 errors while violating integrated conservation laws (lift, drag) on complex unstructured meshes.

    Authors: The referee is correct that the training objective relies exclusively on the conditional flow-matching loss without explicit physics constraints. This choice enables the model to learn the data distribution directly from CFD simulations on unstructured meshes. Our results demonstrate improved generalization in pointwise predictions on unseen conditions. To address the concern, we will expand the discussion section to explicitly note this limitation and suggest that future work could incorporate physics-informed regularizers to ensure conservation properties. revision: partial

  3. Referee: [§4] Benchmark results: the evaluation on held-out operating conditions and the 3D aircraft case reports only pointwise coefficient errors; no comparison of integrated aerodynamic forces or mesh-convergence checks is described, leaving the physical fidelity of the learned transport maps unverified.

    Authors: We acknowledge that the current evaluation focuses on pointwise errors as defined by the benchmark tasks. To better verify physical fidelity, in the revised version we will add computations of integrated quantities such as lift and drag coefficients derived from the predicted fields and compare them against the ground-truth CFD data and the MLP baseline. We will also include a brief analysis of mesh sensitivity based on the provided discretization. revision: yes
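The lift/drag check the authors propose can be sketched as a panel integration of the surface pressure coefficient. This is a generic construction, not the paper's code: the contour is assumed closed and traversed counterclockwise, Cp is taken at panel midpoints, and viscous (friction) contributions are ignored.

```python
import numpy as np

def force_coefficients(x, y, cp):
    """Pressure force coefficients (cfx, cfy) from a closed 2D contour:
    c_f = -sum over panels of Cp * n_hat * ds, where (dy, -dx) is the
    outward normal times panel length for a counterclockwise contour.
    cp holds one value per panel (at the panel midpoint)."""
    dx = np.diff(np.append(x, x[0]))
    dy = np.diff(np.append(y, y[0]))
    cfx = -float(np.sum(cp * dy))
    cfy = float(np.sum(cp * dx))
    return cfx, cfy
```

A quick sanity check is the d'Alembert result for potential flow past a cylinder, Cp = 1 - 4 sin²θ, whose integrated pressure force vanishes.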

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The paper introduces FluidFlow as a conditional flow-matching generative model trained directly on external CFD benchmark datasets (airfoil and 3D aircraft cases) and evaluated on held-out operating conditions. The core methodology applies standard flow-matching objectives with U-Net or DiT backbones conditioned on physical parameters, without any self-definitional reductions, fitted inputs renamed as predictions, or load-bearing self-citations in the derivation. Generalization performance is asserted via empirical error metrics on unseen data rather than tautological construction from the inputs themselves.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on standard assumptions from generative modeling literature that flow-matching learns accurate transport maps from noise to CFD data distributions, plus the availability of representative training data on meshes.

axioms (1)
  • domain assumption Conditional flow-matching learns deterministic transport maps between noise and data distributions
    Stated as the core learning paradigm in the abstract

pith-pipeline@v0.9.0 · 5584 in / 1056 out tokens · 36198 ms · 2026-05-14T21:17:20.354523+00:00 · methodology

discussion (0)

