pith. machine review for the scientific record.

arxiv: 2605.04474 · v3 · submitted 2026-05-06 · 💻 cs.LG

Recognition: 2 theorem links · Lean Theorem

Geometry-Aware Neural Optimizer for Shape Optimization and Inversion

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 06:29 UTC · model grok-4.3

classification 💻 cs.LG
keywords: shape optimization · neural surrogate · differentiable optimization · latent space · geometry representation · PDE inversion · auto-decoder · denoising

The pith

GANO unifies auto-decoder shape encoding, denoising-based latent updates, and surrogate gradients into one differentiable loop for shape optimization and inversion.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes GANO to solve shape optimization and inversion problems in PDE-governed systems. It encodes geometry with an auto-decoder, stabilizes latent code changes through a denoising step, and routes objective gradients through a geometry-informed surrogate. This single loop replaces separate simulation and geometry processing stages while adding part-wise control via null-space projection. The authors prove that the denoising step creates an implicit Jacobian regularization that limits decoder sensitivity and produces controlled deformations. If the method works as described, it removes the need for hand-crafted parameterizations and expert intervention in design and inversion tasks.
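To make the loop concrete, here is a minimal PyTorch-style sketch of how such a latent-space loop could be wired. All module names (`sdf_decoder`, `surrogate`, `objective`, `denoise`, `null_space_project`) are hypothetical stand-ins for the components described above, not the authors' API.

```python
import torch

def latent_shape_optimization(z0, sdf_decoder, surrogate, objective,
                              denoise, null_space_project,
                              steps=100, lr=1e-2):
    """Sketch of a unified differentiable loop; every callable is a
    hypothetical stand-in for a component the paper describes."""
    z = z0.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        geom = sdf_decoder(z)        # implicit SDF geometry from the latent code
        field = surrogate(geom, z)   # geometry-informed forward prediction
        loss = objective(field)      # e.g. drag, or data misfit for inversion
        loss.backward()              # gradients flow to z through both modules
        with torch.no_grad():
            # part-wise control: strip gradient components that would move
            # protected parts (null-space projection)
            z.grad = null_space_project(z, z.grad)
        opt.step()
        with torch.no_grad():
            z.copy_(denoise(z))      # pull the code back toward the shape manifold
    return z.detach()
```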

Core claim

GANO encodes shapes with an auto-decoder and stabilizes latent updates via a denoising mechanism, while a geometry-informed surrogate provides a reliable gradient pathway for geometry updates. GANO further supports part-wise control through null-space projection and uses remeshing-free projection to accelerate geometry processing. The authors also prove that denoising induces an implicit Jacobian regularization that reduces decoder sensitivity, yielding controlled deformations.
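For the regularization claim, here is a standard first-order sketch consistent with the stated result, assuming on-surface points (sθ(x, z) = 0) and Gaussian latent perturbations; this is a reconstruction of the argument's shape, not the paper's proof.

```latex
% To first order, s_\theta(x, z+\delta) \approx \delta^\top \nabla_z s_\theta(x,z)
% for \delta \sim \mathcal{N}(0, \sigma^2 I), a scalar Gaussian with standard
% deviation \sigma \|\nabla_z s_\theta(x,z)\|_2. The half-normal mean
% \mathbb{E}|\mathcal{N}(0,\tau^2)| = \tau\sqrt{2/\pi} then gives the expected
% extra SDF penalty under denoising-style perturbation:
\[
\mathbb{E}_{\delta \sim \mathcal{N}(0,\sigma^2 I)}
  \bigl| s_\theta(x, z+\delta) \bigr|
\;\approx\;
\sigma \sqrt{\tfrac{2}{\pi}}\,
  \bigl\| \nabla_z s_\theta(x, z) \bigr\|_2 ,
\]
% i.e. training with perturbed latents implicitly penalizes the latent-Jacobian
% norm, which is the claimed reduction in decoder sensitivity.
```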

What carries the argument

The GANO framework, which integrates an auto-decoder for geometry representation, a denoising mechanism for latent stability, and a geometry-informed surrogate for gradient flow inside a unified latent-space optimization loop.

Load-bearing premise

The auto-decoder and geometry-informed surrogate together supply accurate, stable gradients from the objective back to the latent code for arbitrary unseen shapes.

What would settle it

Measure surrogate gradient error and optimization convergence on shapes whose latent codes lie far outside the training distribution; divergence or large gradient mismatch would falsify the stability claim.
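A hedged sketch of how that measurement could be run: compare the surrogate's latent gradient against a central finite-difference reference computed through a trusted solver. `surrogate_objective` and `solver_objective` are hypothetical callables standing in for the learned pipeline and a ground-truth simulation evaluated on the decoded geometry.

```python
import torch

def gradient_mismatch(z, surrogate_objective, solver_objective, eps=1e-3):
    """Relative error between the surrogate's gradient at z and a central
    finite-difference reference; both callables are hypothetical stand-ins."""
    z = z.detach().clone().requires_grad_(True)
    surrogate_objective(z).backward()       # autograd gradient through the surrogate
    g_surr = z.grad.detach()

    g_fd = torch.zeros_like(g_surr)         # finite differences through the solver
    with torch.no_grad():
        for i in range(z.numel()):
            e = torch.zeros_like(z)
            e.view(-1)[i] = eps
            g_fd.view(-1)[i] = (solver_objective(z + e)
                                - solver_objective(z - e)) / (2 * eps)
    return ((g_surr - g_fd).norm() / (g_fd.norm() + 1e-12)).item()

# Probe codes far outside the training prior (Figure 3 samples z ~ N(0, 0.05² I)):
# for scale in (1, 3, 10):
#     z_ood = scale * 0.05 * torch.randn(latent_dim)
#     print(scale, gradient_mismatch(z_ood, surrogate_objective, solver_objective))
```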

Figures

Figures reproduced from arXiv: 2605.04474 by Guoze Sun, Han Wan, Hao Sun, Haoyang Huang, Huaguan Chen, Rui Zhang, Tianya Miao.

Figure 1
Figure 1: Comparison between a classical geometry optimization loop (a) and the proposed GANO loop (b). view at source ↗
Figure 2
Figure 2: Pipeline of GANO. a. Geometry representation: GANO models Γ as an implicit SDF sθ(x) and uses denoising-style augmentation during training. b. Forward analysis: a geometry-informed module injects geometry information and creates a gradient pathway. c. Optimization: ∇zJ is backpropagated and a null-space projection P(z) is applied to achieve controllable geometry updates. view at source ↗
Figure 3
Figure 3: Latent space of STABLESDF. a. Linear interpolation between two latent codes z0 and z1 in the dataset. b. Sampling latent codes from z ∼ N(0, 0.05² I). view at source ↗
Figure 4
Figure 4: Experimental setups. a. 2D Helmholtz; b. 2D airfoil; c. 3D vehicle. view at source ↗
Figure 5
Figure 5: 2D Helmholtz: forward prediction and shape inversion. a. Comparison of the real part. b. Shape inversion from a sensor array. view at source ↗
Figure 6
Figure 6: 2D Airfoil: forward prediction and shape optimization. a. Comparison of predicted flow fields. b. Optimized airfoil shapes under a soft drag constraint (CD < 0.02); the airfoils shown are the best results from 100 iterations of each method. view at source ↗
Figure 7
Figure 7: 3D Vehicle: surface pressure prediction and optimization on DrivAerNet++. a. Comparison of surface pressure. b–c. Two representative optimization cases; for each, the initial shape and the GANO-optimized result are shown. view at source ↗
Figure 8
Figure 8: Comparison of GANO and PhysGen on vehicle optimization. Gray denotes the input vehicle; yellow highlights regions with large geometric changes after optimization. a. GANO achieves a better result than PhysGen; b. the two are comparable. PhysGen often reduces drag by shrinking or sweeping back the side mirrors, while GANO largely preserves them thanks to null-space projection. view at source ↗
Figure 9
Figure 9: Analysis of STABLESDF. a. Distribution of latent Jacobian norms ∥∇z sθ(x, z)∥ for DEEPSDF and STABLESDF. b. Decoded shapes under increasing Gaussian perturbations around a test latent code. c. Reconstruction comparison on a test vehicle. view at source ↗
Figure 10
Figure 10: Null-space projection (NSP) preserves the mirror during optimization. Mirror close-up for the initial design and GANO optimized with/without NSP. view at source ↗
Figure 11
Figure 11: Slice visualizations comparing GI-TRANSOLVER against the original TRANSOLVER. view at source ↗
Figure 12
Figure 12: Ablation of the number of sensors on Helmholtz inversion. view at source ↗
Figure 13
Figure 13: a. Adding a Gaussian perturbation to surface points yields a distance distribution (red) that deviates significantly from the surface; after applying the projection, the distances converge to zero (green), indicating the points lie back on the surface. b. The point cloud, initially colored by distance error, is corrected to align with the surface. view at source ↗
Figure 14
Figure 14: Distribution of the Jacobian norm of the StableSDF output with respect to x: a. before and b. after random perturbation. Statistics are computed from 100 randomly sampled test vehicles, with 10,000 points sampled per vehicle. view at source ↗
Figure 15
Figure 15: Full results on the Helmholtz dataset. view at source ↗
Figure 16
Figure 16: Full results on the Airfoil dataset. view at source ↗
Figure 17
Figure 17: Remeshing-free optimization process for an airfoil, showing how the sampling points change with the geometry. view at source ↗
Figure 18
Figure 18: 2D Helmholtz tasks: comparison of geometry-injected UNet and DeepONet, CORAL, and GANO. a. Forward task. view at source ↗
Figure 19
Figure 19: Comparison of results after optimizing the same Fastback and Estateback cars for 20 steps using StableSDF and DeepSDF under the same settings. a. After optimization with DeepSDF, the deformation of car details is severe: the side mirrors detach from the body, and the rear door handle and rear window also deform severely. view at source ↗
Figure 20
Figure 20: Comparison of the results of GANO and PhysGen in predicting the pressure field. view at source ↗
read the original abstract

Geometry is central to PDE-governed systems, motivating shape optimization and inversion. Classical pipelines conduct costly forward simulation with geometry processing, requiring substantial expert effort. Neural surrogates accelerate forward analysis but do not close the loop because gradients from objectives to geometry are often unavailable. Existing differentiable methods either rely on restrictive parameterizations or unstable latent optimization driven by scalar objectives, limiting interpretability and part-wise control. To address these challenges, we propose Geometry-Aware Neural Optimizer (GANO), an end-to-end differentiable framework that unifies geometry representation, field-level prediction, and automated optimization/inversion in a single latent-space loop. GANO encodes shapes with an auto-decoder and stabilizes latent updates via a denoising mechanism, and a geometry-informed surrogate provides a reliable gradient pathway for geometry updates. Moreover, GANO supports part-wise control through null-space projection and uses remeshing-free projection to accelerate geometry processing. We further prove that denoising induces an implicit Jacobian regularization that reduces decoder sensitivity, yielding controlled deformations. Experiments on three benchmarks spanning 2D Helmholtz, 2D airfoil, and 3D vehicles show state-of-the-art accuracy and stable, controllable updates, achieving up to +55.9% lift-to-drag improvement for airfoils and ~7% drag reduction for vehicles.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript introduces the Geometry-Aware Neural Optimizer (GANO), an end-to-end differentiable framework that encodes shapes via an auto-decoder, stabilizes latent-space updates with a denoising mechanism, employs a geometry-informed surrogate to supply gradients from PDE objectives back to the latent code, and enables part-wise control via null-space projection together with remeshing-free projection. It claims state-of-the-art accuracy on three benchmarks (2D Helmholtz, 2D airfoil, 3D vehicles) with reported gains of up to +55.9% lift-to-drag and ~7% drag reduction, and supplies a proof that denoising induces implicit Jacobian regularization that reduces decoder sensitivity and yields controlled deformations.

Significance. If the surrogate gradients remain accurate along optimization trajectories that may leave the training distribution, and if the implicit-regularization proof holds under the stated assumptions, the work would meaningfully advance automated shape optimization by closing the differentiable loop in latent space while adding interpretability and part-wise control. The theoretical contribution on denoising-induced regularization would be a clear strength.

major comments (3)
  1. Section 3.2 (geometry-informed surrogate): the central claim that the surrogate supplies reliable gradients for arbitrary unseen shapes is not supported by experiments that test latent codes driven outside the training manifold by the optimizer; without such validation the stability and part-wise control assertions rest on an unverified extrapolation assumption.
  2. Theorem 1 (implicit Jacobian regularization): the proof treats denoising strength as a fixed hyper-parameter yet the manuscript provides no sensitivity analysis or ablation showing how variation in this free parameter affects the claimed regularization and downstream optimization stability.
  3. Tables 2–4 (benchmark results): no error bars, statistical significance tests, or ablation isolating the denoising strength are reported, so the quantitative SOTA claims cannot be assessed for robustness.
minor comments (2)
  1. Abstract: the reported lift-to-drag improvement percentage lacks an explicit baseline reference, making the magnitude of the gain difficult to interpret without the full table.
  2. Notation section: the null-space projection operator is introduced without an accompanying equation; adding one would clarify how it interacts with the surrogate gradient pathway (a generic form is sketched after this list).
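For concreteness, here is the textbook form such an equation could take, assuming the protected parts are pinned by a constraint map C(z) with Jacobian J_c = ∂C/∂z; this is the generic null-space projector, not necessarily the paper's exact construction.

```latex
\[
P(z) \;=\; I \;-\; J_c^{\top}\bigl(J_c J_c^{\top}\bigr)^{-1} J_c ,
\qquad
z \;\leftarrow\; z \;-\; \eta\, P(z)\, \nabla_z \mathcal{J} ,
\]
% P(z) removes the components of the objective gradient \nabla_z \mathcal{J}
% that would change C(z) to first order, so the protected parts stay fixed
% while the remaining latent directions are free to move.
```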

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. We address each major comment point by point below and will incorporate revisions to improve the robustness and clarity of the manuscript.

read point-by-point responses
  1. Referee: Section 3.2 (geometry-informed surrogate): the central claim that the surrogate supplies reliable gradients for arbitrary unseen shapes is not supported by experiments that test latent codes driven outside the training manifold by the optimizer; without such validation the stability and part-wise control assertions rest on an unverified extrapolation assumption.

    Authors: We agree that explicit testing of surrogate gradient reliability for latent codes driven outside the training manifold would strengthen the stability and part-wise control claims. In the existing benchmarks the optimization trajectories converge to high-quality shapes without observed instability, indicating that the latent codes remain in regions where the surrogate remains accurate. To directly address the concern, we will add new experiments in the revised manuscript that intentionally initialize or perturb latent codes outside the training distribution (e.g., via large noise injection or out-of-distribution starting points) and quantify surrogate prediction error, gradient accuracy, and optimization stability along those trajectories. revision: yes

  2. Referee: Theorem 1 (implicit Jacobian regularization): the proof treats denoising strength as a fixed hyper-parameter yet the manuscript provides no sensitivity analysis or ablation showing how variation in this free parameter affects the claimed regularization and downstream optimization stability.

    Authors: Theorem 1 is stated for a general denoising strength parameter, but we acknowledge that the manuscript lacks empirical sensitivity analysis. In the revision we will add an ablation study that varies the denoising strength over a representative range, measures the resulting change in decoder Jacobian norm (to quantify the implicit regularization), and reports the impact on optimization stability and final benchmark performance (a minimal sketch of such an ablation appears after this list). revision: yes

  3. Referee: Tables 2–4 (benchmark results): no error bars, statistical significance tests, or ablation isolating the denoising strength are reported, so the quantitative SOTA claims cannot be assessed for robustness.

    Authors: We will revise the experimental section to report error bars computed from multiple independent runs with different random seeds for all results in Tables 2–4. We will also include statistical significance tests (e.g., paired t-tests against baselines) and add a dedicated ablation that isolates the contribution of the denoising mechanism to the reported performance gains (see the sketch after this list). revision: yes
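As a companion to responses 2 and 3, here is a minimal sketch of such an ablation: estimate the decoder's mean latent-Jacobian norm as a function of denoising strength, repeated over seeds to produce error bars. `train_decoder` and `sample_points` are hypothetical stand-ins for the training pipeline and evaluation point sampler.

```python
import torch

def mean_latent_jacobian_norm(decoder, z, xs):
    """Monte-Carlo estimate of E ||grad_z s_theta(x, z)|| over points xs."""
    norms = []
    for x in xs:
        z_req = z.detach().clone().requires_grad_(True)
        s = decoder(x, z_req)                 # scalar SDF value at x
        (g,) = torch.autograd.grad(s, z_req)  # latent gradient (Jacobian row) at x
        norms.append(g.norm().item())
    return sum(norms) / len(norms)

# Hypothetical sweep over denoising strength sigma, with seeds for error bars:
# for sigma in (0.0, 0.01, 0.05, 0.1):
#     vals = []
#     for seed in range(5):
#         torch.manual_seed(seed)
#         decoder, z = train_decoder(sigma=sigma, seed=seed)  # stand-in trainer
#         vals.append(mean_latent_jacobian_norm(decoder, z, sample_points()))
#     mean = sum(vals) / len(vals)
#     std = (sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)) ** 0.5
#     print(f"sigma={sigma}: {mean:.4f} ± {std:.4f}")
```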

Circularity Check

0 steps flagged

No circularity: derivation chain remains independent of fitted results

full rationale

The central framework (auto-decoder + denoising + geometry-informed surrogate + null-space projection) is introduced with a mathematical proof that denoising induces implicit Jacobian regularization; this proof is presented as a self-contained derivation rather than a fit or self-citation reduction. No equations equate a 'prediction' to a fitted parameter by construction, and no load-bearing uniqueness theorem or ansatz is imported solely via self-citation. Experimental results on benchmarks are reported separately from the derivation, leaving the derivation chain independent of the fitted results and open to external validation.

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 0 invented entities

The central claim rests on the existence of a well-behaved latent manifold for shapes, differentiability of the surrogate, and the mathematical effect of the denoising operator; no new physical constants or particles are introduced.

free parameters (2)
  • latent dimension
    Dimension of the auto-decoder latent code; chosen to balance expressivity and stability but not derived from first principles.
  • denoising strength
    Hyper-parameter controlling the strength of the latent denoising step; fitted or tuned on validation shapes.
axioms (2)
  • domain assumption The decoder mapping from latent code to geometry is differentiable almost everywhere.
    Invoked to guarantee that gradients from the surrogate objective reach the latent variables.
  • ad hoc to paper Denoising induces an implicit Jacobian regularization.
    The paper states it proves this property; the proof is part of the contribution rather than a standard background result.

pith-pipeline@v0.9.0 · 5548 in / 1564 out tokens · 29542 ms · 2026-05-15T06:29:44.784351+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

54 extracted references · 54 canonical work pages · 1 internal anchor

  1. [1] Survey on partial differential equations in differential geometry. Seminar on Differential Geometry.

  2. [2] A survey of geometric optimization for deep learning: from Euclidean space to Riemannian manifold. ACM Computing Surveys.

  3. [3] On geometric inverse problems and microlocal analysis. Microlocal Analysis and Inverse Problems in Tomography and Geometry.

  4. [4] Numerical sensitivity analysis for aerodynamic optimization: A survey of approaches. Computers & Fluids.

  5. [5] Transolver: A Fast Transformer Solver for PDEs on General Geometries. ICML.

  6. [6] Bocheng Zeng, Qi Wang, Mengtao Yan, Yang Liu, Ruizhi Chengze, Yi Zhang, Hongsheng Liu, Zidong Wang, Hao Sun. Phy…

  7. [7] Deep neural operators as accurate surrogates for shape optimization. Engineering Applications of Artificial Intelligence.

  8. [8] Surrogate-Based Differentiable Pipeline for Shape Optimization. arXiv preprint arXiv:2511.10761.

  9. [9] Fourier Neural Operator for Parametric Partial Differential Equations. ICLR.

  10. [10] DeepONet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. arXiv preprint arXiv:1910.03193.

  11. [11] Transolver++: An Accurate Neural Solver for PDEs on Million-Scale Geometries. ICML.

  12. [12] AeroGTO: An efficient graph-transformer operator for learning large-scale aerodynamics of 3D vehicle geometries. AAAI.

  13. [13] PointNet++: Deep hierarchical feature learning on point sets in a metric space. NeurIPS.

  14. [14] Airfoil Computational Fluid Dynamics - 9k shapes, 2 AoA's. 2023.

  15. [15] DrivAerNet++: A large-scale multimodal car dataset with computational fluid dynamics simulations and deep learning benchmarks. NeurIPS.

  16. [16] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, Steven Lovegrove. DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. CVPR.

  17. [17] Yuze Hao, Linchao Zhu, Yi Yang.

  18. [18] Aerodynamics-guided machine learning for design optimization of electric vehicles. Communications Engineering.

  19. [19] TripOptimizer: Generative three-dimensional shape optimization and drag prediction using triplane variational autoencoder networks. Physics of Fluids.

  20. [20] LAMP: Data-Efficient Linear Affine Weight-Space Models for Parameter-Controlled 3D Shape Generation and Extrapolation. arXiv preprint arXiv:2510.22491.

  21. [21] PhysGen: Physically Grounded 3D Shape Generation for Industrial Design. arXiv preprint arXiv:2512.00422.

  22. [22] VehicleSDF: A 3D generative model for constrained engineering design via surrogate modeling. NeurIPS 2024 Workshop on Data-driven and Differentiable Simulations, Surrogates, and Solvers (D3S3).

  23. [23] A review of the artificial neural network surrogate modeling in aerodynamic design. Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering.

  24. [24] Zhongkai Hao, Chang Su, Songming Liu, Julius Berner, Chengyang Ying, Hang Su, Anima Anandkumar, Jian Song, Jun Zhu. 2024.

  25. [25] Wavelet Diffusion Neural Operator. ICLR.

  26. [26] Diederik P. Kingma, Max Welling. Auto-Encoding Variational Bayes. ICLR.

  27. [27] Haixu Wu, Tengge Hu, Huakun Luo, Jianmin Wang, Mingsheng Long. 2023.

  28. [28] Honghui Wang, Shiji Song, Gao Huang. GridMix: Exploring Spatial Modulation for Neural Fields in …

  29. [29] Geometry-informed neural operator for large-scale 3D PDEs. NeurIPS.

  30. [30] GNOT: A general neural operator transformer for operator learning. ICML.

  31. [31] Physics-informed latent neural operator for real-time predictions of time-dependent parametric PDEs. Computer Methods in Applied Mechanics and Engineering.

  32. [32] Factorized Fourier Neural Operators. ICLR.

  33. [33] Geometry-informed neural operator transformer for partial differential equations on arbitrary geometries. Computer Methods in Applied Mechanics and Engineering.

  34. [34] Aerodynamic shape optimization using the adjoint method. Lectures at the Von Karman Institute, Brussels.

  35. [35] Inverse design for fluid-structure interactions using graph network simulators. NeurIPS.

  36. [36] TripNet: Learning Large-scale High-fidelity 3D Car Aerodynamics with Triplane Networks. arXiv preprint arXiv:2503.17400.

  37. [37] Compositional Generative Inverse Design. ICLR.

  38. [38] Inverse design of nonlinear mechanical metamaterials via video denoising diffusion models. Nature Machine Intelligence.

  39. [39] Automatic chemical design using a data-driven continuous representation of molecules. ACS Central Science.

  40. [40] Occupancy networks: Learning 3D reconstruction in function space. CVPR.

  41. [41] Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow. ICLR.

  42. [42] PhysiOpt: Physics-Driven Shape Optimization for 3D Generative Models. Proceedings of the SIGGRAPH Asia 2025 Conference Papers.

  43. [43] Aligning optimization trajectories with diffusion models for constrained design generation. NeurIPS.

  44. [44] Diffusion models beat GANs on topology optimization. AAAI.

  45. [45] Deciphering and integrating invariants for neural operator learning with various physical mechanisms. National Science Review, 2024.

  46. [46] JAX-FEM: A differentiable GPU-accelerated 3D finite element solver for automatic inverse design and mechanistic data science. Computer Physics Communications.

  47. [47] Physically compatible 3D object modeling from a single image. NeurIPS.

  48. [48] DragSolver: A Multi-Scale Transformer for Real-World Automotive Drag Coefficient Estimation. ICML.

  49. [49] GeoFormer: Mesh-free geometry-to-flow alignment framework for real-time aerodynamics on non-watertight vehicle geometries. Physics of Fluids.

  50. [50] Implicit Neural Representations with Periodic Activation Functions. NeurIPS.

  51. [51] Operator Learning with Neural Fields: Tackling PDEs on General Geometries. NeurIPS.

  52. [52] Optimization and Generation in Aerodynamics Inverse Design. arXiv preprint arXiv:2602.03582.

  53. [53] Evolutionary Computation for Expensive Optimization: A Survey. Machine Intelligence Research.

  54. [54] Accelerated Elliptical PDE Solver for Computational Fluid Dynamics Based on Configurable U-Net Architecture: Analogy to V-Cycle Multigrid. Machine Intelligence Research.