pith. machine review for the scientific record. sign in

arxiv: 2604.19355 · v1 · submitted 2026-04-21 · 💻 cs.LG · cs.AI· cs.CE

Recognition: unknown

LASER: Learning Active Sensing for Continuum Field Reconstruction

Authors on Pith no claims yet

Pith reviewed 2026-05-10 03:32 UTC · model grok-4.3

classification 💻 cs.LG cs.AIcs.CE
keywords active sensingcontinuum field reconstructionreinforcement learningPOMDPlatent world modelsparse measurementsadaptive sensor placementphysical dynamics
0
0 comments X

The pith

A reinforcement learning policy trained inside a latent model of physical dynamics can adapt sensor movements to reconstruct continuum fields from sparse measurements more accurately than fixed layouts.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces LASER as a closed-loop system that casts active sensing as a POMDP whose policy learns to choose next sensor locations by consulting predicted future states. A learned latent representation of the continuum field supplies the intrinsic rewards that guide this policy through imagined sensing actions without requiring real-world rollouts during training. Because the policy conditions movement on regions expected to reduce reconstruction uncertainty, it can follow evolving field features that static or pre-optimized sensor placements miss. If the approach holds, scientific and engineering applications that currently rely on dense fixed sensor grids could obtain comparable fidelity with far fewer measurements by letting the sensors move intelligently. The work therefore targets the practical problem of maintaining high-resolution maps of temperature, flow, or concentration fields when only a small number of mobile sensors are available.

Core claim

LASER formulates active sensing as a POMDP and employs a continuum field latent world model to capture the underlying physical dynamics and provide intrinsic reward feedback. This model enables a reinforcement learning policy to simulate what-if sensing scenarios within a latent imagination space. By conditioning sensor movements on predicted latent states, the policy navigates toward potentially high-information regions beyond current observations. Experiments show that the resulting adaptive strategy consistently outperforms both static sensor layouts and offline-optimized baselines across diverse continuum fields under sparsity constraints.

What carries the argument

A continuum field latent world model that encodes physical dynamics to supply intrinsic rewards and generate what-if predictions of future measurements for the reinforcement learning policy.

Load-bearing premise

The latent world model must produce sufficiently accurate predictions of how new sensor placements would change the field reconstruction to supply reliable training signals that transfer to real physical environments.

What would settle it

Deploy the trained LASER policy on a physical testbed with known ground-truth field evolution and measure whether its reconstruction error remains lower than both a static uniform grid and an offline-optimized fixed layout when the same number of measurements is used.

Figures

Figures reproduced from arXiv: 2604.19355 by Huayu Deng, Jinghui Zhong, Xiangming Zhu, Xiaokang Yang, Yunbo Wang.

Figure 1
Figure 1. Figure 1: Overview of the closed-loop LASER framework. The agent θ optimizes sensing actions at based on reconstruction rewards rt and next-step observations ot+1 from the environment ϕ. The side panel illustrates the temporal evolution of high-fidelity physical states and their corresponding low-dimensional latents. field from limited data (Koupa¨ı et al., 2025; Serrano et al., 2024; Alkin et al., 2024; Serrano et … view at source ↗
Figure 2
Figure 2. Figure 2: The LASER graphical model. (a) Latent world model with jointly trained encoder, dynamics, and decoder. (b) Active sensing as a POMDP, where the policy interacts with the world model and receives updated latent states and rewards. 4.2. Continuum Field Latent World Model We develop a continuum field reconstruction model ϕ that serves as a high-fidelity surrogate of the physical environ￾ment. Following the pa… view at source ↗
Figure 3
Figure 3. Figure 3: The LASER policy architecture. The network employs a cross-attention mechanism to fuse the predicted latent state zˆt+1 and current sparse observations ot, outputting continuous sensor displacements for the next time step. a noise-corrupted latent z˜t+1 over K steps. The resulting predictive latent zˆt+1 incorporates trend information be￾yond the immediate observation, providing the policy with a forward-l… view at source ↗
Figure 5
Figure 5. Figure 5: Qualitative showcases of active sensing. We evaluate different placement strategies under extreme sparsity (N = 64) [PITH_FULL_IMAGE:figures/full_fig_p007_5.png] view at source ↗
Figure 4
Figure 4. Figure 4: Long-term rollout performance under high sparsity (N = 64). We highlight the predictive errors of the LASER latent world model during extended temporal horizons. denoted as Out-t. The model is trained exclusively on the In-t segment, using a budget of 256 randomly sampled sen￾sor locations. At test time, the model autoregressively rolls out the learned dynamics starting from t = 0 over the entire Out-t hor… view at source ↗
Figure 6
Figure 6. Figure 6: Showcases of different inital placement pattern on NSν1e−3 with 256 sparse observations. Low Error High Error Random Sampling Uniform Sampling [PITH_FULL_IMAGE:figures/full_fig_p018_6.png] view at source ↗
Figure 7
Figure 7. Figure 7: Comparison of final sensor distributions (t = 39) under different initial conditions 18 [PITH_FULL_IMAGE:figures/full_fig_p018_7.png] view at source ↗
Figure 8
Figure 8. Figure 8: Showcases of rollout performance on NSν1e−3 across different sparse observations. Rows 1–3: 64 observations, Rows 4–6: 128 observations, Rows 7–9: 256 observations. The ground truth is presented in rows 1, 4 and 7, with the corresponding error maps shown directly below. 19 [PITH_FULL_IMAGE:figures/full_fig_p019_8.png] view at source ↗
Figure 9
Figure 9. Figure 9: Showcases of rollout performance on NSν1e−5 across different sparse observations. Rows 1–3: 64 observations, Rows 4–6: 128 observations, Rows 7–9: 256 observations. The ground truth is presented in rows 1, 4 and 7, with the corresponding error maps shown directly below. 20 [PITH_FULL_IMAGE:figures/full_fig_p020_9.png] view at source ↗
Figure 10
Figure 10. Figure 10: Showcases of rollout performance on Shallow-Water across different sparse observations. Rows 1–3: 64 observations, Rows 4–6: 128 observations, Rows 7–9: 256 observations. The ground truth is presented in rows 1, 4 and 7, with the corresponding error maps shown directly below. 21 [PITH_FULL_IMAGE:figures/full_fig_p021_10.png] view at source ↗
Figure 11
Figure 11. Figure 11: Qualitative showcases of active sensing methods on NSν1e−3, NSν1e−5 and Shallow-Water. We evaluate different placement strategies under extreme sparsity (N = 64). DiffusionPDE PhySense LASER Error Ground Truth [PITH_FULL_IMAGE:figures/full_fig_p022_11.png] view at source ↗
Figure 12
Figure 12. Figure 12: Qualitative showcases of active sensing methods on Sea Surface Temperature. We evaluate different placement strategies under extreme sparsity (N = 100). 22 [PITH_FULL_IMAGE:figures/full_fig_p022_12.png] view at source ↗
Figure 13
Figure 13. Figure 13: Showcasses of continuum field reconstruction on NSν1e−3 across different sparse observations by LASER. Low Error High Error Low Error High Error Low Error High Error 64 128 256 [PITH_FULL_IMAGE:figures/full_fig_p023_13.png] view at source ↗
Figure 14
Figure 14. Figure 14: Showcasses of continuum field reconstruction on NSν1e−5 across different sparse observations by LASER. 23 [PITH_FULL_IMAGE:figures/full_fig_p023_14.png] view at source ↗
Figure 15
Figure 15. Figure 15: Showcasses of continuum field reconstruction on Shallow-Water across different sparse observations by LASER. Low Error High Error 50 100 Low Error High Error [PITH_FULL_IMAGE:figures/full_fig_p024_15.png] view at source ↗
Figure 16
Figure 16. Figure 16: Showcasses of continuum field reconstruction on Sea Surface Temperature across different sparse observations by LASER. 24 [PITH_FULL_IMAGE:figures/full_fig_p024_16.png] view at source ↗
read the original abstract

High-fidelity measurements of continuum physical fields are essential for scientific discovery and engineering design but remain challenging under sparse and constrained sensing. Conventional reconstruction methods typically rely on fixed sensor layouts, which cannot adapt to evolving physical states. We propose LASER, a unified, closed-loop framework that formulates active sensing as a Partially Observable Markov Decision Process (POMDP). At its core, LASER employs a continuum field latent world model that captures the underlying physical dynamics and provides intrinsic reward feedback. This enables a reinforcement learning policy to simulate ''what-if'' sensing scenarios within a latent imagination space. By conditioning sensor movements on predicted latent states, LASER navigates toward potentially high-information regions beyond current observations. Our experiments demonstrate that LASER consistently outperforms static and offline-optimized strategies, achieving high-fidelity reconstruction under sparsity across diverse continuum fields.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper proposes LASER, a unified closed-loop framework for active sensing in continuum field reconstruction. It formulates the task as a POMDP, introduces a continuum field latent world model to capture physical dynamics and supply intrinsic rewards, and trains an RL policy that selects sensor movements via 'what-if' rollouts in latent imagination space. The central empirical claim is that LASER consistently outperforms static and offline-optimized sensing strategies, achieving high-fidelity reconstruction under sparsity across diverse continuum fields.

Significance. If the empirical results and transfer claims hold, the work offers a promising direction for adaptive, data-efficient sensing in scientific and engineering applications such as fluid dynamics or environmental monitoring. The combination of latent dynamics modeling with POMDP-based policy learning for active sensing is conceptually novel and addresses limitations of fixed layouts.

major comments (2)
  1. [Experiments] The abstract states that experiments demonstrate outperformance yet provides no quantitative results, error bars, dataset details, or ablation studies. The experimental section must supply these (including specific metrics such as reconstruction MSE or PSNR, number of trials, and statistical significance) to support the central claim of consistent gains over baselines.
  2. [Method (latent world model and RL policy)] The central claim requires that the learned continuum field latent world model supplies accurate intrinsic rewards and multi-step 'what-if' rollouts so the POMDP policy can be trained effectively. The manuscript does not report the model's one-step or multi-step prediction error on held-out dynamics (especially under the sparsity levels used at test time), which is load-bearing because high error would mean the policy optimizes against a distorted reward landscape and reported gains may not transfer.
minor comments (1)
  1. [Abstract] The abstract would benefit from a brief reference to the specific fields or datasets used in the experiments to allow immediate assessment of scope.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments, which highlight important aspects of experimental reporting and model validation. We address each major comment below and have revised the manuscript to incorporate the requested details.

read point-by-point responses
  1. Referee: [Experiments] The abstract states that experiments demonstrate outperformance yet provides no quantitative results, error bars, dataset details, or ablation studies. The experimental section must supply these (including specific metrics such as reconstruction MSE or PSNR, number of trials, and statistical significance) to support the central claim of consistent gains over baselines.

    Authors: We agree that the abstract and experimental section should include explicit quantitative support. The manuscript contains experimental evaluations across multiple continuum fields, but we will revise the abstract to report key metrics (e.g., average reconstruction MSE reductions relative to baselines) and expand the experimental section with error bars, dataset specifications, ablation studies, number of trials, and statistical significance tests to strengthen the empirical claims. revision: yes

  2. Referee: [Method (latent world model and RL policy)] The central claim requires that the learned continuum field latent world model supplies accurate intrinsic rewards and multi-step 'what-if' rollouts so the POMDP policy can be trained effectively. The manuscript does not report the model's one-step or multi-step prediction error on held-out dynamics (especially under the sparsity levels used at test time), which is load-bearing because high error would mean the policy optimizes against a distorted reward landscape and reported gains may not transfer.

    Authors: We concur that validating the latent world model's predictive accuracy is essential to substantiate the POMDP training and transfer of results. Although the model architecture and training are described, we did not include explicit held-out prediction metrics. We will add one-step and multi-step prediction error evaluations on held-out dynamics, reported specifically at the sparsity levels used in testing, to confirm the model's suitability for intrinsic rewards and latent rollouts. revision: yes

Circularity Check

0 steps flagged

No significant circularity in LASER derivation chain

full rationale

The paper formulates active sensing as a POMDP, introduces a learned continuum field latent world model to supply intrinsic rewards and enable imagination-based rollouts for RL policy training, then reports empirical outperformance versus static and offline baselines. No step reduces by construction to its inputs: the world model is trained separately on field data, the policy optimizes against predicted rewards in latent space, and final claims rest on held-out experimental comparisons rather than self-definition, fitted-input renaming, or self-citation chains. The method is self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axioms · 1 invented entities

Abstract-only review prevents exhaustive enumeration; the central claim rests on the existence of a learnable latent world model that supplies accurate intrinsic rewards and on the transferability of policies trained in latent imagination to real sensor movement.

axioms (1)
  • domain assumption A latent world model can be trained to capture underlying physical dynamics from sparse observations sufficiently well to generate useful intrinsic rewards.
    Invoked when the paper states the model 'provides intrinsic reward feedback' and enables 'what-if' simulation.
invented entities (1)
  • Continuum field latent world model no independent evidence
    purpose: Compresses field observations and predicts measurement outcomes for RL policy training in imagination space.
    New modeling component introduced to close the loop between sensing and reconstruction.

pith-pipeline@v0.9.0 · 5448 in / 1409 out tokens · 20355 ms · 2026-05-10T03:32:38.091215+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

89 extracted references · 19 canonical work pages · 9 internal anchors

  1. [1]

    Langley , title =

    P. Langley , title =. Proceedings of the 17th International Conference on Machine Learning (ICML 2000) , address =. 2000 , pages =

  2. [2]

    Advances in Neural Information Processing Systems , volume=

    Recurrent world models facilitate policy evolution , author=. Advances in Neural Information Processing Systems , volume=

  3. [3]

    Advances in Neural Information Processing Systems , volume=

    DiffusionPDE: Generative PDE-solving under partial observation , author=. Advances in Neural Information Processing Systems , volume=

  4. [4]

    Advances in Neural Information Processing Systems , volume=

    Pde-refiner: Achieving accurate long rollouts with neural pde solvers , author=. Advances in Neural Information Processing Systems , volume=

  5. [5]

    Frontiers in Earth Science , volume=

    The Copernicus global 1/12 oceanic and sea ice GLORYS12 reanalysis , author=. Frontiers in Earth Science , volume=. 2021 , publisher=

  6. [6]

    Proceedings of the IEEE/CVF international conference on computer vision , pages=

    Scalable diffusion models with transformers , author=. Proceedings of the IEEE/CVF international conference on computer vision , pages=

  7. [7]

    International conference on machine learning , pages=

    Deep unsupervised learning using nonequilibrium thermodynamics , author=. International conference on machine learning , pages=. 2015 , organization=

  8. [8]

    Advances in Neural Information Processing Systems , volume=

    Denoising diffusion probabilistic models , author=. Advances in Neural Information Processing Systems , volume=

  9. [9]

    International conference on machine learning , pages=

    Improved denoising diffusion probabilistic models , author=. International conference on machine learning , pages=. 2021 , organization=

  10. [10]

    Advances in Neural Information Processing Systems , volume=

    Pdebench: An extensive benchmark for scientific machine learning , author=. Advances in Neural Information Processing Systems , volume=

  11. [11]

    Group Sequence Policy Optimization

    Group sequence policy optimization , author=. arXiv preprint arXiv:2507.18071 , year=

  12. [12]

    GDPO: Group reward-Decoupled Normalization Policy Optimization for Multi-reward RL Optimization

    GDPO: Group reward-Decoupled Normalization Policy Optimization for Multi-reward RL Optimization , author=. arXiv preprint arXiv:2601.05242 , year=

  13. [13]

    Can Textual Reasoning Improve the Performance of MLLMs on Fine-grained Visual Classification?

    Can Textual Reasoning Improve the Performance of MLLMs on Fine-grained Visual Classification? , author=. arXiv preprint arXiv:2601.06993 , year=

  14. [14]

    Proximal Policy Optimization Algorithms

    Proximal policy optimization algorithms , author=. arXiv preprint arXiv:1707.06347 , year=

  15. [15]

    DAPO: An Open-Source LLM Reinforcement Learning System at Scale

    Dapo: An open-source llm reinforcement learning system at scale , author=. arXiv preprint arXiv:2503.14476 , year=

  16. [16]

    DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning

    Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning , author=. arXiv preprint arXiv:2501.12948 , year=

  17. [17]

    Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling

    Empirical evaluation of gated recurrent neural networks on sequence modeling , author=. arXiv preprint arXiv:1412.3555 , year=

  18. [18]

    2022 , publisher=

    Partial differential equations , author=. 2022 , publisher=

  19. [19]

    Advances in Neural Information Processing Systems , volume=

    Universal physics transformers: A framework for efficiently scaling neural operators , author=. Advances in Neural Information Processing Systems , volume=

  20. [20]

    The Thirty-ninth Annual Conference on Neural Information Processing Systems , year=

    Armand Kassa. The Thirty-ninth Annual Conference on Neural Information Processing Systems , year=

  21. [21]

    International Conference on Machine Learning , pages=

    Gnot: A general neural operator transformer for operator learning , author=. International Conference on Machine Learning , pages=. 2023 , organization=

  22. [22]

    2007 , publisher=

    Numerical computation of internal and external flows: The fundamentals of computational fluid dynamics , author=. 2007 , publisher=

  23. [23]

    2022 , publisher=

    Data-driven science and engineering: Machine learning, dynamical systems, and control , author=. 2022 , publisher=

  24. [24]

    Advances in Neural Information Processing Systems , volume=

    Attention is all you need , author=. Advances in Neural Information Processing Systems , volume=

  25. [25]

    Nature , pages=

    Mastering diverse control tasks through world models , author=. Nature , pages=. 2025 , publisher=

  26. [26]

    International Conference on Learning Representations , year=

    Mastering Atari with Discrete World Models , author=. International Conference on Learning Representations , year=

  27. [27]

    2020 , booktitle=

    Dream to Control: Learning Behaviors by Latent Imagination , author=. 2020 , booktitle=

  28. [28]

    Worrall and Max Welling , booktitle=

    Johannes Brandstetter and Daniel E. Worrall and Max Welling , booktitle=. Message Passing Neural. 2022 , url=

  29. [29]

    International Conference on Learning Representations , year=

    Fourier Neural Operator for Parametric Partial Differential Equations , author=. International Conference on Learning Representations , year=

  30. [30]

    1988 , publisher=

    Navier-stokes equations , author=. 1988 , publisher=

  31. [31]

    Fourier Neural Operator for Parametric Partial Differential Equations

    Fourier neural operator for parametric partial differential equations , author=. arXiv preprint arXiv:2010.08895 , year=

  32. [32]

    ICML 2022 2nd AI for Science Workshop , year=

    Transform Once: Efficient Operator Learning in Frequency Domain , author=. ICML 2022 2nd AI for Science Workshop , year=

  33. [33]

    International Conference on Learning Representations , year=

    Learning the Dynamics of Physical Systems from Sparse Observations with Finite Element Networks , author=. International Conference on Learning Representations , year=

  34. [34]

    Machine learning–accelerated computational fluid dynamics,

    Dmitrii Kochkov and Jamie A. Smith and Ayya Alieva and Qing Wang and Michael P. Brenner and Stephan Hoyer , doi =. Machine learning--accelerated computational fluid dynamics , url =. 2021 , Bdsk-Url-1 =. https://www.pnas.org/doi/pdf/10.1073/pnas.2101784118 , journal =

  35. [35]

    International Conference on Learning Representations , year=

    Learning Mesh-Based Simulation with Graph Networks , author=. International Conference on Learning Representations , year=

  36. [36]

    2020 , note =

    Learning latent field dynamics of PDEs , author =. 2020 , note =

  37. [37]

    International Conference on Learning Representations , year=

    Predicting Physics in Mesh-reduced Space with Temporal Attention , author=. International Conference on Learning Representations , year=

  38. [38]

    Advances in Neural Information Processing Systems , editor=

    Learning to Accelerate Partial Differential Equations via Latent Global Evolution , author=. Advances in Neural Information Processing Systems , editor=. 2022 , url=

  39. [39]

    Continuous

    Yuan Yin and Matthieu Kirchmeyer and Jean-Yves Franceschi and Alain Rakotomamonjy and patrick gallinari , booktitle=. Continuous. 2023 , url=

  40. [40]

    Modelling spatiotemporal dynamics from Earth observation data with neural differential equations , ty =

    Ayed, Ibrahim and de B. Modelling spatiotemporal dynamics from Earth observation data with neural differential equations , ty =. Machine Learning , number =. 2022 , Bdsk-Url-1 =. doi:10.1007/s10994-022-06139-2 , id =

  41. [41]

    ArXiv , year=

    Variational Deep Learning for the Identification and Reconstruction of Chaotic and Stochastic Dynamical Systems from Noisy and Partial Observations , author=. ArXiv , year=

  42. [42]

    Proceedings of the 36th International Conference on Machine Learning , pages =

    Graph Element Networks: adaptive, structured computation and memory , author =. Proceedings of the 36th International Conference on Machine Learning , pages =. 2019 , editor =

  43. [43]

    Generalized Slow Roll for Tensors

    Jiang, Chiyu lMaxr and Esmaeilzadeh, Soheil and Azizzadenesheli, Kamyar and Kashinath, Karthik and Mustafa, Mustafa and Tchelepi, Hamdi A. and Marcus, Philip and Prabhat, Mr and Anandkumar, Anima , year=. MESHFREEFLOWNET: A Physics-Constrained Deep Continuous Space-Time Super-Resolution Framework , url=. doi:10.1109/sc41405.2020.00013 , journal=

  44. [44]

    ICML 2022 2nd AI for Science Workshop , year=

    Efficient Continuous Spatio-Temporal Simulation with Graph Spline Networks , author=. ICML 2022 2nd AI for Science Workshop , year=

  45. [45]

    2022 , url=

    Oussama Boussif and Yoshua Bengio and Loubna Benabbou and Dan Assouline , booktitle=. 2022 , url=

  46. [46]

    Brenner , doi =

    Yohai Bar-Sinai and Stephan Hoyer and Jason Hickey and Michael P. Brenner , doi =. Learning data-driven discretizations for partial differential equations , url =. 2019 , Bdsk-Url-1 =. https://www.pnas.org/doi/pdf/10.1073/pnas.1814058116 , journal =

  47. [47]

    2018 , editor =

    Long, Zichao and Lu, Yiping and Ma, Xianzhong and Dong, Bin , booktitle =. 2018 , editor =

  48. [48]

    , booktitle =

    Cheng, Alexander H.-D. , booktitle =. Radial Basis Function Collocation Method , year =

  49. [49]

    Kansa , doi =

    E.J. Kansa , doi =. Multiquadrics---A scattered data approximation scheme with applications to computational fluid-dynamics---II solutions to parabolic, hyperbolic and elliptic partial differential equations , url =. Computers and Mathematics with Applications , number =. 1990 , Bdsk-Url-1 =

  50. [50]

    The Numerical Method of Lines: Integration of Partial Differential Equations , author=

  51. [51]

    and Schiesser, W

    Hamdi, S. and Schiesser, W. E. and Griffiths, G. W , TITLE =. 2007 , JOURNAL =. doi:10.4249/scholarpedia.2859 , NOTE =

  52. [52]

    2013 , eprint=

    Auto-Encoding Variational Bayes , author=. 2013 , eprint=

  53. [53]

    M., KUCUKELBIR, A., MCAULIFFE, J

    Blei, David M. and Kucukelbir, Alp and McAuliffe, Jon D. , year=. Variational Inference: A Review for Statisticians , volume=. Journal of the American Statistical Association , publisher=. doi:10.1080/01621459.2017.1285773 , number=

  54. [54]

    Chen, Ricky T. Q. , title=. 2018 , url=

  55. [55]

    2018 , eprint=

    Neural Ordinary Differential Equations , author=. 2018 , eprint=

  56. [56]

    2021 , eprint=

    Neural Stochastic PDEs: Resolution-Invariant Learning of Continuous Spatiotemporal Dynamics , author=. 2021 , eprint=

  57. [57]

    On the smoothness of nonlinear system identification , url =

    Ant. On the smoothness of nonlinear system identification , url =. Automatica , keywords =. 2020 , Bdsk-Url-1 =. doi:https://doi.org/10.1016/j.automatica.2020.109158 , issn =

  58. [58]

    2021 , eprint=

    Gradients are Not All You Need , author=. 2021 , eprint=

  59. [59]

    Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers

    Um, Kiwon and Brand, Robert and Fei, Yun and Holl, Philipp and Thuerey, Nils , journal=. Solver-in-the-Loop: Learning from Differentiable Physics to Interact with Iterative PDE-Solvers

  60. [60]

    ArXiv , year=

    ODE2VAE: Deep generative second order ODEs with Bayesian neural networks , author=. ArXiv , year=

  61. [61]

    Bock and K.J

    H.G. Bock and K.J. Plitt , doi =. A Multiple Shooting Algorithm for Direct Solution of Optimal Control Problems* , url =. IFAC Proceedings Volumes , keywords =. 1984 , Bdsk-Url-1 =

  62. [62]

    and Kurths, Juergen , year =

    Voss, Henning and Timmer, J. and Kurths, Juergen , year =. Nonlinear dynamical system identification from uncertain and indirect measurements , volume =

  63. [63]

    Variational multiple shooting for Bayesian

    Pashupati Hegde and Cagatay Yildiz and Harri L. Variational multiple shooting for Bayesian. The 38th Conference on Uncertainty in Artificial Intelligence , year=

  64. [64]

    2021 , eprint=

    Learning Dynamical Systems from Noisy Sensor Measurements using Multiple Shooting , author=. 2021 , eprint=

  65. [65]

    Latent Neural

    Valerii Iakovlev and Cagatay Yildiz and Markus Heinonen and Harri L. Latent Neural. The Eleventh International Conference on Learning Representations , year=

  66. [66]

    ACM Transactions on Graphics , volume=

    Eckert, Marie-Lenat and Um, Kiwon and Thuerey, Nils , title =. ACM Transactions on Graphics , volume=

  67. [67]

    2019 , url=

    Cellier, Nicolas , title=. 2019 , url=

  68. [68]

    NeurIPS Workshop on Differentiable vision, graphics, and physics applied to machine learning , year=

    phiflow: A Differentiable PDE Solving Framework for Deep Learning via Physical Simulations , author=. NeurIPS Workshop on Differentiable vision, graphics, and physics applied to machine learning , year=

  69. [69]

    International Conference on Learning Representations , year=

    Physics-aware Difference Graph Networks for Sparsely-Observed Dynamics , author=. International Conference on Learning Representations , year=

  70. [70]

    Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics , pages =

    Understanding the difficulty of training deep feedforward neural networks , author =. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics , pages =. 2010 , editor =

  71. [71]

    Advances in Neural Information Processing Systems , volume=

    AROMA: Preserving spatial structure for latent PDE modeling with local neural fields , author=. Advances in Neural Information Processing Systems , volume=

  72. [72]

    Advances in Neural Information Processing Systems , volume=

    Operator learning with neural fields: Tackling pdes on general geometries , author=. Advances in Neural Information Processing Systems , volume=

  73. [73]

    Continuous field reconstruction from sparse observations with implicit neural networks.arXiv preprint arXiv:2401.11611, 2024

    Continuous field reconstruction from sparse observations with implicit neural networks , author=. arXiv preprint arXiv:2401.11611 , year=

  74. [74]

    The Thirty-ninth Annual Conference on Neural Information Processing Systems , year=

    PhySense: Sensor Placement Optimization for Accurate Physics Sensing , author=. The Thirty-ninth Annual Conference on Neural Information Processing Systems , year=

  75. [75]

    2017 , eprint=

    Adam: A Method for Stochastic Optimization , author=. 2017 , eprint=

  76. [76]

    and Wikle, C

    Cressie, N. and Wikle, C. K. , date-added =. Statistics for Spatio-Temporal Data , title1 =. 2011 , Bdsk-Url-1 =

  77. [77]

    Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators.Nature Machine Intelligence, 3(3), 218–229 (2021)

    Lu, Lu and Jin, Pengzhan and Pang, Guofei and Zhang, Zhongqiang and Karniadakis, George Em , year=. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators , volume=. Nature Machine Intelligence , publisher=. doi:10.1038/s42256-021-00302-5 , number=

  78. [78]

    Forty-second International Conference on Machine Learning , year=

    EvoMesh: Adaptive Physical Simulation with Hierarchical Graph Evolutions , author=. Forty-second International Conference on Machine Learning , year=

  79. [79]

    The Twelfth International Conference on Learning Representations , year=

    Latent Intuitive Physics: Learning to Transfer Hidden Physics from A 3D Video , author=. The Twelfth International Conference on Learning Representations , year=

  80. [80]

    International conference on machine learning , pages=

    Neurofluid: Fluid dynamics grounding with particle-driven neural radiance fields , author=. International conference on machine learning , pages=. 2022 , organization=

Showing first 80 references.