pith. machine review for the scientific record.

arxiv: 2605.01542 · v1 · submitted 2026-05-02 · 💻 cs.LG · cs.AI · physics.comp-ph

Recognition: unknown

Mesh-Based Simulations with Spatial and Temporal Awareness

Authors on Pith: no claims yet

Pith reviewed 2026-05-09 14:57 UTC · model grok-4.3

classification 💻 cs.LG · cs.AI · physics.comp-ph
keywords machine learning surrogates · computational fluid dynamics · graph neural networks · mesh transformers · stencil prediction · temporal correction · rotary embeddings · long-horizon simulation

The pith

Mesh-based ML models for fluid simulation gain accuracy and long-term stability by predicting entire local stencils and applying temporal cross-attention corrections.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Standard machine learning surrogates for computational fluid dynamics predict values at single mesh nodes and advance time with explicit Euler steps. These choices fail to respect the local continuity and stiff dynamics that finite-element and finite-volume methods enforce. The paper replaces node-wise losses with a multi-node stencil objective that requires the model to output consistent values across a node's full neighborhood. It further substitutes the explicit time step with a predictor-corrector loop driven by temporal cross-attention and adds three-dimensional rotary embeddings to capture rotational symmetries on unstructured meshes. When tested on Graph Neural Networks and Transformers across several physics datasets, the combined changes produce lower error, slower error growth over long rollouts, and latent features that transfer to unseen quantities such as wall shear stress.

Core claim

By training on stencil-level multi-node targets instead of isolated node values, inserting a temporal cross-attention corrector in place of explicit stepping, and encoding mesh geometry with 3D rotary positional embeddings, the resulting models produce field predictions whose spatial derivatives remain consistent with the underlying PDE and whose temporal trajectories remain stable far beyond the training horizon while also supporting zero-shot prediction of additional physical fields.

What carries the argument

Stencil-level multi-node prediction objective together with a temporal cross-attention predictor-corrector and 3D rotary positional embeddings on unstructured meshes.
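
The stencil objective that carries the argument can be sketched in a few lines. A minimal illustration, assuming a plain MSE aggregated over each node's one-ring; the paper's exact loss, weighting, and field dimensionality may differ:

```python
def stencil_loss(pred, target, neighbors):
    """Stencil-level MSE: each node is supervised on its own predicted
    field value AND on the values it predicts for every node in its
    one-ring, rather than on its own value alone.

    pred[i][j]   -- value node i predicts for node j (j in i's stencil)
    target[j]    -- ground-truth field value at node j
    neighbors[i] -- node indices in node i's one-ring
    """
    total, count = 0.0, 0
    for i, ring in neighbors.items():
        for j in [i] + ring:                 # centre node plus its stencil
            total += (pred[i][j] - target[j]) ** 2
            count += 1
    return total / count

# Toy 1D chain 0-1-2: node 1's stencil is {0, 1, 2}.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
target = {0: 1.0, 1: 2.0, 2: 3.0}
pred = {0: {0: 1.0, 1: 2.0},
        1: {0: 1.0, 1: 2.0, 2: 3.0},
        2: {1: 2.0, 2: 3.0}}
print(stencil_loss(pred, target, neighbors))  # exact prediction -> 0.0
```

Because every node appears in several overlapping stencils, mutually inconsistent predictions across rings are penalized repeatedly, which is the mechanism claimed to encourage spatially coherent fields.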

If this is right

  • Long-horizon rollouts remain stable without rapid accumulation of local truncation errors.
  • The learned representations transfer directly to downstream tasks such as pressure or wall-shear-stress prediction without retraining.
  • The same training recipe improves performance across Graph Neural Networks, mesh Transformers, and related architectures.
  • Spatial consistency is achieved without adding explicit physics-informed loss terms.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The approach could be tested on adaptive mesh refinement by training only on coarse stencils and rolling out to finer resolutions.
  • Similar stencil objectives might reduce the need for separate physics-informed neural network regularizers in other PDE domains.
  • Because the method removes explicit Euler stepping, larger effective time steps become feasible once the corrector is trained.
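
To make the predictor-corrector idea concrete, here is a minimal scalar sketch: a raw next-state prediction cross-attends over a short history of past states and is blended with the attended context. The dot-product scoring, history window, and 50/50 blend are illustrative assumptions, not the paper's architecture:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def correct(prediction, history):
    """One corrector step: the raw prediction (query) attends over past
    states (keys/values) and is blended with the attention-weighted
    context. Scalar states for clarity; the paper works on per-node
    latent vectors, and the 50/50 blend here is an assumed rule."""
    scores = [prediction * h for h in history]       # dot-product scores
    weights = softmax(scores)
    context = sum(w * h for w, h in zip(weights, history))
    return 0.5 * (prediction + context)

history = [1.0, 1.1, 1.2]   # past states, oldest first
raw = 1.35                  # predictor's proposal for the next state
corrected = correct(raw, history)
```

The corrected state is pulled back toward the recent trajectory, damping one-step overshoots that explicit stepping would otherwise compound over a rollout.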

Load-bearing premise

That forcing the network to predict an entire local stencil at once will enforce spatial derivative consistency and that the temporal cross-attention corrector will stabilize stiff dynamics without introducing fresh instabilities or requiring dataset-specific retuning.

What would settle it

A controlled long-horizon rollout experiment on a stiff advection-dominated flow in which the stencil-plus-correction model accumulates larger error or diverges earlier than an otherwise identical node-wise explicit baseline would falsify the central claim.
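
Why explicit stepping is fragile on stiff dynamics, which is what this falsification test probes, can be seen on a toy linear ODE dy/dt = -λy. This is an illustration of the standard numerical-analysis point, not the paper's experiment:

```python
# dy/dt = -lam * y with lam * dt = 2.5: outside explicit Euler's
# stability region (lam * dt < 2), inside implicit Euler's.
lam, dt, steps = 50.0, 0.05, 100
y_exp = y_imp = 1.0
for _ in range(steps):
    y_exp = y_exp + dt * (-lam * y_exp)   # amplification factor 1 - lam*dt = -1.5
    y_imp = y_imp / (1.0 + lam * dt)      # amplification factor 1/3.5
# The true solution exp(-lam*t) has decayed to ~0 by t = 5;
# the explicit rollout has instead grown by a factor of 1.5**100.
print(abs(y_exp) > 1e6, abs(y_imp) < 1e-6)  # True True
```

A learned corrector that behaves even loosely like the implicit branch would explain slower error growth at large effective time steps; the proposed control experiment checks whether it actually does.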

Figures

Figures reproduced from arXiv: 2605.01542 by Elie Hachem, Paul Garnier, Vincent Lannelongue.

Figure 1. Multi-Node Prediction. We process each node with a given model. Before decoding each node, we construct rings that consist of a latent node and freshly encoded neighbors. We then train a small cross-attention layer to predict the fields of each neighbor, while the most relevant information lives in the latent central node.
Figure 2. Results on 1-step RMSE. We see improvements for all models and all datasets. Since our approach incurs a slight increase in the number of trainable parameters, we also compute the metric for the original architecture with the same number of parameters as ours (represented by the yellow lines).
Figure 3. Scaling with model size. All-rollout RMSE for a Transformer model on the three different datasets. Even when training larger and larger models, our approach keeps scaling similarly to the previous model.
Figure 4. Scaling with training time. All-rollout RMSE for a MeshGraphNet architecture on two different datasets, for different training schedules.
Figure 5. Ablation studies. We present our ablation studies for all three improvements and their alternatives. Ablations related to RoPE are on the left, to Multi-Node Prediction in the middle, and to Temporal Correction on the right.
Figure 6. Subtasks prediction. We compute a regression task on next-step fields: velocity, pressure, and WSS, using latent representations from different layers. Importantly, the default architecture is not always the same per ablation; for example, the default architecture for Multi-Node Prediction is an architecture using RoPE.
Figure 7. Details of our three primary datasets: DeformingPlate, Cylinder, and Coarse Aneurysm. For the first two, we also display the mesh used for the simulations, while we discard it from the Aneurysm visualisation for practicality.
Figure 8. Impact of each upgrade. We detail the improvement in terms of 1-step and all-rollout RMSE on the Aneurysm dataset for the Transformer architecture, along with the increase in trainable parameters.
Figure 9. Performance of a Transolver model under a fixed number of parameters. We train models with 600k, 4M, and 17M parameters, varying the number of layers and the hidden dimension accordingly to keep the parameter count constant.
Figure 10. Extra ablation studies. We present the impact of several variants of attention and of different loss functions.
Figure 11. Effect on latent representation. While training architectures with and without Multi-Node Prediction, we encode the next-step target to obtain a latent target and compute its difference with the current latent representation at each stage of the architecture. We display the differences after zero spatial processing steps and after L spatial processing steps.
Figure 12. The next-step velocity (target field) is presented in the top row. The latent representation without MNP is presented in the second row, and the latent representation with MNP in the bottom row.
Original abstract

Machine Learning surrogates for Computational Fluid Dynamics (CFD), particularly Graph Neural Networks (GNNs) and Transformers, have become a new important approach for accelerating physics simulations. However, we identify a critical bottleneck in the field: while architectures have advanced significantly, the common underlying training paradigms remain bound to naive assumptions, such as node-wise supervision and explicit Euler time-stepping. These legacy choices ignore the stiff dynamics and local flux continuity inherent to numerous partial differential equations resolution methods, such as Finite Element, Difference, or Volume (FEM). In this work, we propose a unified framework to bridge the gap between geometric deep learning and rigorous numerical analysis. We introduce three key innovations: (1) Multi Node Prediction, a stencil-level objective that predicts field values for a node's full local topology, enforcing spatial derivative consistency; (2) Temporal Correction, replacing unstable explicit schemes with a predictor-corrector via temporal Cross-Attention; and (3) Geometric Inductive Biases, leveraging 3D Rotary Positional Embeddings (RoPE) to robustly capture rotational symmetries in unstructured meshes. We evaluate this framework across three architectures (MeshGraphNet, Transolver, and a Transformer) on diverse physics datasets. Our approach yields consistent improvements in accuracy and stability, particularly in long-horizon rollouts, while producing latent representations that generalize to unseen subtasks such as Wall Shear Stress or Pressure prediction. Code is available at https://github.com/DonsetPG/graph-physics.
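
The 3D Rotary Positional Embeddings named in the abstract can be sketched by dealing feature-channel pairs across the three coordinate axes and rotating each pair by an angle proportional to the node's position along its axis. The round-robin pair allocation and frequency schedule below are assumptions for illustration:

```python
import math

def rope_3d(features, pos, base=10000.0):
    """Rotate feature-channel pairs by angles proportional to the node's
    (x, y, z) position. Pairs are dealt round-robin across the three
    axes, each with a frequency from the usual RoPE geometric schedule.
    The pair layout and schedule here are illustrative assumptions."""
    assert len(features) % 2 == 0
    out = list(features)
    n_pairs = len(features) // 2
    for p in range(n_pairs):
        axis = p % 3                                  # round-robin x, y, z
        freq = base ** (-2.0 * (p // 3) / n_pairs)
        theta = pos[axis] * freq
        c, s = math.cos(theta), math.sin(theta)
        a, b = features[2 * p], features[2 * p + 1]
        out[2 * p] = a * c - b * s                    # 2D rotation of the pair
        out[2 * p + 1] = a * s + b * c
    return out

# Rotations preserve norms, so attention logits built from rotated
# queries and keys depend on relative offsets along each axis rather
# than on absolute coordinates.
q = rope_3d([1.0, 0.0, 0.5, -0.5, 0.2, 0.3, 1.0, 1.0], (0.3, -1.2, 2.0))
```

On an unstructured mesh this is applied per node using its physical coordinates, which is what lets the same embedding scheme carry over from sequences to irregular geometry.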

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper introduces a framework for ML-based surrogates in CFD on unstructured meshes, identifying limitations in standard node-wise supervision and explicit Euler stepping. It proposes three innovations: (1) Multi-Node Prediction, a stencil-level loss that predicts values over a node's local topology to enforce spatial derivative consistency; (2) Temporal Correction, a predictor-corrector scheme using temporal cross-attention to replace explicit time-stepping; and (3) 3D Rotary Positional Embeddings to capture rotational symmetries. The framework is tested on MeshGraphNet, Transolver, and Transformer architectures across multiple physics datasets, claiming consistent gains in accuracy and long-horizon stability plus generalization of learned latents to unseen tasks such as wall-shear-stress and pressure prediction.

Significance. If the empirical gains are shown to stem from the proposed mechanisms rather than increased capacity or regularization, the work would usefully connect geometric deep learning with discrete numerical principles, potentially improving reliability of mesh-based simulators for stiff or long-time problems. The public code release and multi-architecture evaluation are positive features.

major comments (3)
  1. [Method (Multi Node Prediction subsection)] The central claim that Multi-Node Prediction enforces spatial derivative consistency (and thereby improves stability) is load-bearing yet unsupported by derivation. No section shows that minimizing the stencil-level objective implies matching of discrete gradients or fluxes at element interfaces, nor is there an analysis of the induced discrete operator.
  2. [Method (Temporal Correction subsection) and Experiments] The Temporal Correction mechanism is asserted to stabilize stiff dynamics without introducing new instabilities, but the manuscript provides no spectral-radius analysis, eigenvalue bounds, or ablation isolating the cross-attention corrector from the predictor. Long-horizon gains could arise from implicit regularization rather than the predictor-corrector structure.
  3. [Experiments and Results] The experimental section reports consistent improvements but does not supply quantitative metrics, error bars, dataset sizes, or fair ablations that isolate each innovation from baseline capacity increases. Without these, it is impossible to verify whether the architectural changes are the cause of the reported gains in accuracy, stability, or generalization.
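
The link the first major comment asks for can at least be stated at the level of a bound. A sketch with assumed notation, not the paper's derivation: if the stencil loss over node i's one-ring N(i) is small, then any discrete gradient that is a fixed linear combination of stencil values inherits an error bound by Cauchy-Schwarz.

```latex
% Stencil-level objective over nodes V (notation assumed):
\mathcal{L}_{\mathrm{stencil}}
  = \frac{1}{|V|} \sum_{i \in V} \sum_{j \in \{i\} \cup N(i)}
    \bigl\| \hat{u}_i(j) - u(j) \bigr\|^2

% A discrete gradient that is linear in the stencil values,
%   (D_h u)_i = \sum_{j \in \{i\} \cup N(i)} c_{ij} \, u(j),
% then satisfies, by Cauchy--Schwarz,
\bigl\| (D_h \hat{u})_i - (D_h u)_i \bigr\|
  \le \Bigl( \sum_{j} c_{ij}^2 \Bigr)^{1/2}
      \Bigl( \sum_{j \in \{i\} \cup N(i)} \bigl\| \hat{u}_i(j) - u(j) \bigr\|^2 \Bigr)^{1/2}
```

This bounds per-node gradient error by per-stencil prediction error; it does not by itself establish flux matching at element interfaces, which is the stronger statement the referee asks the authors to derive.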
minor comments (2)
  1. [Method] Notation for the stencil loss and the temporal attention mask should be defined explicitly with respect to the mesh connectivity.
  2. [Introduction] The abstract and introduction would benefit from a short table summarizing the three datasets and the exact baselines used for each architecture.

Simulated Authors' Rebuttal

3 responses · 0 unresolved

We thank the referee for their constructive and detailed feedback. We address each major comment below and describe the revisions we will make to improve the manuscript.

read point-by-point responses
  1. Referee: [Method (Multi Node Prediction subsection)] The central claim that Multi-Node Prediction enforces spatial derivative consistency (and thereby improves stability) is load-bearing yet unsupported by derivation. No section shows that minimizing the stencil-level objective implies matching of discrete gradients or fluxes at element interfaces, nor is there an analysis of the induced discrete operator.

    Authors: We agree that the current manuscript would benefit from a more explicit discussion of how the stencil-level objective relates to discrete gradient and flux consistency. The Multi-Node Prediction loss is constructed so that the model must produce coherent predictions across a node's local neighborhood, which by design encourages the learned update rule to respect local continuity properties similar to those enforced in finite-volume discretizations. We will revise the Multi-Node Prediction subsection to include a clearer explanation of the induced discrete operator and its consistency implications, together with any supporting analysis that can be derived. revision: yes

  2. Referee: [Method (Temporal Correction subsection) and Experiments] The Temporal Correction mechanism is asserted to stabilize stiff dynamics without introducing new instabilities, but the manuscript provides no spectral-radius analysis, eigenvalue bounds, or ablation isolating the cross-attention corrector from the predictor. Long-horizon gains could arise from implicit regularization rather than the predictor-corrector structure.

    Authors: We acknowledge that the manuscript currently lacks a spectral analysis of the learned time-stepping operator. The Temporal Correction module is intended to replace explicit Euler integration with a learned predictor-corrector that uses temporal cross-attention to produce a corrected state. We will add an ablation that isolates the contribution of the cross-attention corrector. While a complete eigenvalue analysis of the data-driven operator is difficult to obtain, we will expand the discussion to address possible regularization effects and the observed stability improvements in long-horizon rollouts. revision: partial

  3. Referee: [Experiments and Results] The experimental section reports consistent improvements but does not supply quantitative metrics, error bars, dataset sizes, or fair ablations that isolate each innovation from baseline capacity increases. Without these, it is impossible to verify whether the architectural changes are the cause of the reported gains in accuracy, stability, or generalization.

    Authors: We will substantially revise the Experiments and Results section to report quantitative metrics with error bars, explicit dataset sizes, and controlled ablations that match model capacity across variants. These additions will allow clearer isolation of the effects of Multi-Node Prediction, Temporal Correction, and 3D Rotary Positional Embeddings. revision: yes
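
The spectral analysis the authors call difficult to obtain in closed form can at least be probed empirically: linearize the learned one-step map with finite differences and estimate a spectral radius by power iteration. A sketch, with the finite-difference linearization and iteration budget as assumptions, where `step` stands in for any trained surrogate's update:

```python
import math
import random

def jvp(step, x0, v, eps=1e-6):
    """Finite-difference Jacobian-vector product of the update map at x0."""
    base = step(x0)
    pert = step([x + eps * d for x, d in zip(x0, v)])
    return [(p - b) / eps for p, b in zip(pert, base)]

def spectral_radius(step, x0, iters=100, seed=0):
    """Power iteration on the linearized one-step map: a value above 1
    means perturbations around x0 grow under repeated stepping."""
    rng = random.Random(seed)
    v = [rng.gauss(0.0, 1.0) for _ in x0]
    rho = 1.0
    for _ in range(iters):
        w = jvp(step, x0, v)
        rho = math.sqrt(sum(c * c for c in w))
        v = [c / rho for c in w]             # assumes a nonzero dominant mode
    return rho

# Sanity check on a known linear update with eigenvalues 0.9 and 0.5:
step = lambda x: [0.9 * x[0], 0.5 * x[1]]
print(abs(spectral_radius(step, [1.0, 1.0]) - 0.9) < 1e-3)  # True
```

Run around representative rollout states, this gives a cheap empirical stand-in for the eigenvalue bounds the referee requests, without claiming a proof.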

Circularity Check

0 steps flagged

No circularity: innovations are new objectives and biases, not reductions to fitted inputs or self-citations.

full rationale

The paper's core claims rest on introducing Multi Node Prediction (stencil-level objective), Temporal Correction (predictor-corrector via cross-attention), and Geometric Inductive Biases (3D RoPE) as architectural and training changes to address node-wise supervision and explicit Euler issues. These are presented as proposals evaluated empirically on datasets for accuracy/stability gains and generalization, without any quoted equations showing a derived quantity reducing by construction to a fitted parameter, self-citation chain, or renamed input. No self-definitional loops, fitted-input predictions, or load-bearing self-citations appear in the provided text; the derivation chain is self-contained as empirical validation of new paradigms rather than tautological re-expression of priors.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The framework rests on the domain assumption that stencil-level supervision and predictor-corrector attention will better capture PDE properties than node-wise explicit schemes; no new physical entities are postulated and no free parameters are explicitly fitted in the abstract description.

axioms (1)
  • domain assumption Unstructured meshes from CFD can be treated as graphs or sequences where local topology and temporal evolution obey the same continuity rules as finite-element or finite-volume discretizations.
    Invoked when claiming that multi-node prediction enforces spatial derivative consistency and that temporal correction replaces explicit Euler.

pith-pipeline@v0.9.0 · 5565 in / 1450 out tokens · 37638 ms · 2026-05-09T14:57:16.083013+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

178 extracted references · 53 canonical work pages · 7 internal anchors
