Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning
Deep learning has achieved astonishing results on many tasks with large amounts of data, generalizing well in the proximity of the training data. For many important real-world applications, these requirements are infeasible, and additional prior knowledge on the task domain is required to overcome the resulting problems. In particular, learning physics models for model-based control requires robust extrapolation from fewer samples - often collected online in real-time - and model errors may lead to severe damage to the system. Directly incorporating physical insight has enabled us to obtain a novel deep model learning approach that extrapolates well while requiring fewer samples. As a first example, we propose Deep Lagrangian Networks (DeLaN), a deep network structure upon which Lagrangian mechanics have been imposed. DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility. The resulting DeLaN network performs very well at robot tracking control. The proposed method not only outperforms previous model learning approaches in learning speed but also exhibits substantially improved and more robust extrapolation to novel trajectories, and it learns online in real-time.
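The construction described in the abstract can be sketched as follows. This is a hypothetical PyTorch illustration of the idea, not the authors' implementation: a network predicts the Cholesky factor of the mass matrix H(q) and the potential energy V(q), and the torque follows from the Euler-Lagrange equations, tau = H(q) qdd + (dH/dt) qd - (1/2) d(qd^T H qd)/dq + dV/dq, with all derivatives obtained by automatic differentiation. The network sizes and the regularization constant are illustrative assumptions.

```python
import torch
from torch.autograd.functional import jacobian

class DeLaNSketch(torch.nn.Module):
    """Hypothetical sketch of a DeLaN-style inverse-dynamics model."""

    def __init__(self, n_dof, hidden=32):
        super().__init__()
        self.n = n_dof
        self.trunk = torch.nn.Sequential(
            torch.nn.Linear(n_dof, hidden), torch.nn.Softplus())
        # heads for the lower-triangular entries of L and for V(q)
        self.chol_head = torch.nn.Linear(hidden, n_dof * (n_dof + 1) // 2)
        self.pot_head = torch.nn.Linear(hidden, 1)

    def mass_matrix(self, q):
        """H(q) = L L^T + eps*I, positive definite by construction."""
        entries = self.chol_head(self.trunk(q))
        idx = torch.tril_indices(self.n, self.n)
        on_diag = idx[0] == idx[1]
        # softplus keeps the diagonal of L strictly positive
        vals = torch.where(on_diag,
                           torch.nn.functional.softplus(entries), entries)
        L = torch.zeros(self.n, self.n).index_put((idx[0], idx[1]), vals)
        return L @ L.T + 1e-4 * torch.eye(self.n)

    def forward(self, q, qd, qdd):
        q = q.detach().requires_grad_(True)
        H = self.mass_matrix(q)
        # gravity term g(q) = dV/dq
        V = self.pot_head(self.trunk(q)).squeeze()
        g = torch.autograd.grad(V, q, create_graph=True)[0]
        # dT/dq with kinetic energy T = 1/2 qd^T H(q) qd
        T = 0.5 * qd @ H @ qd
        dT_dq = torch.autograd.grad(T, q, create_graph=True)[0]
        # (dH/dt) qd = (d(H qd)/dq) qd along the trajectory
        dHqd_dq = jacobian(lambda x: self.mass_matrix(x) @ qd, q,
                           create_graph=True)
        return H @ qdd + dHqd_dq @ qd - dT_dq + g
```

Because H(q) is built from a Cholesky factor with a strictly positive diagonal, positive definiteness (and hence physical plausibility of the kinetic energy) holds for any network weights, which is the structural prior the abstract refers to.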
Forward citations
Cited by 5 Pith papers
- Detecting Deepfakes via Hamiltonian Dynamics
  HAAD detects deepfakes by modeling latent manifolds as potential energy surfaces and quantifying instability via Hamiltonian trajectory statistics such as action and energy dissipation.
- Neural Co-state Policies: Structuring Hidden States in Recurrent Reinforcement Learning
  Hidden states in recurrent RL policies correspond to PMP co-states, so a derived co-state loss structures the dynamics and yields robust performance on partially observable continuous control tasks.
- QuietWalk: Physics-Informed Reinforcement Learning for Ground Reaction Force-Aware Humanoid Locomotion Under Diverse Footwear
  QuietWalk combines an inverse-dynamics-constrained PINN for GRF estimation with RL to produce low-impact humanoid locomotion policies that generalize across footwear, cutting mean noise by 7.17 dB on hardware.
- Dissipative Latent Residual Physics-Informed Neural Networks for Modeling and Identification of Electromechanical Systems
  DiLaR-PINN learns dissipative effects in electromechanical systems via a skew-dissipative latent residual PINN that guarantees non-increasing energy and uses recurrent curriculum training for partial observations.
- Neural Co-state Policies: Structuring Hidden States in Recurrent Reinforcement Learning
  Recurrent RL policies can have their hidden states aligned with PMP co-states through a derived loss, yielding robust performance on partially observable control tasks.
discussion (0)