Recognition: 2 theorem links
PhySPRING: Structure-Preserving Reduction of Physics-Informed Twins via GNN
Pith reviewed 2026-05-11 03:17 UTC · model grok-4.3
The pith
PhySPRING uses a graph neural network to merge similar nodes in spring-mass digital twins, creating lighter explicit models that run faster while retaining physical accuracy.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
PhySPRING is a fully differentiable GNN-based method that jointly learns a hierarchy of coarsened graph topologies and their mechanical parameters from observations. At each reduction level it merges nodes with similar learned dynamic responses to optimize the topology, while maintaining every reduced layer as an explicit spring-mass system. On the PhysTwin benchmark the resulting models improve dense reconstruction and prediction accuracy over the baseline, deliver up to a 2.30 times speed-up, and retain stable physical and visual fidelity. When substituted zero-shot into ACT and π0 robot policy evaluations, they preserve comparable manipulation success rates and improve action-sampling efficiency.
What carries the argument
Node merging guided by learned dynamic-response similarity inside a GNN, which produces a hierarchy of explicit spring-mass graphs at successive coarsening levels.
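The merging step can be pictured with a minimal sketch. This is a hypothetical illustration, not the paper's method: the actual similarity metric, threshold, and GNN embeddings are unspecified here. The idea shown is to cluster nodes whose learned response embeddings are nearly parallel, summing masses so the coarse system conserves total mass.

```python
import numpy as np

def merge_similar_nodes(embeddings, masses, threshold=0.95):
    """Greedily merge node pairs whose learned response embeddings have
    cosine similarity above `threshold`. Returns, for each original node,
    the index of its coarse cluster, plus aggregated cluster masses
    (mass is conserved by summation)."""
    n = embeddings.shape[0]
    # Normalize embeddings so a dot product is cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    parent = list(range(n))  # union-find forest over nodes

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if normed[i] @ normed[j] > threshold:
                parent[find(i)] = find(j)

    roots = sorted({find(i) for i in range(n)})
    cluster_of = {r: k for k, r in enumerate(roots)}
    assignment = np.array([cluster_of[find(i)] for i in range(n)])
    coarse_masses = np.zeros(len(roots))
    np.add.at(coarse_masses, assignment, masses)  # sum fine masses per cluster
    return assignment, coarse_masses
```

In the paper's setting the merge decision would additionally be made differentiable (e.g., via soft assignments); the hard greedy merge above only conveys the coarsening idea.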
If this is right
- Reduced models run up to 2.30 times faster than the original while keeping stable physical and visual fidelity.
- Dense reconstruction and forward-prediction accuracy improve over the unreduced PhysTwin baseline on the benchmark.
- Reduced models can be dropped zero-shot into ACT and π0 policy evaluations with no drop in manipulation success rates.
- Action sampling becomes more effective because each rollout is cheaper.
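Why cheaper rollouts follow from fewer nodes is visible in the structure of an explicit spring-mass step. The sketch below is a generic symplectic-Euler integrator, not the paper's implementation: per-step cost is linear in the number of springs and nodes, so coarsening the graph directly cuts rollout time.

```python
import numpy as np

def spring_mass_step(x, v, masses, springs, k, rest, dt=1e-3):
    """One explicit (symplectic Euler) step of a spring-mass system.
    x, v: (n, d) positions and velocities; springs: (m, 2) index pairs;
    k, rest: (m,) stiffnesses and rest lengths. Assumes nonzero spring
    lengths. Per-step cost is O(m + n)."""
    i, j = springs[:, 0], springs[:, 1]
    d = x[j] - x[i]                                  # spring vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    # Hooke force on node i: pulls toward j when stretched past rest length.
    f = k[:, None] * (length - rest[:, None]) * d / length
    forces = np.zeros_like(x)
    np.add.at(forces, i, f)    # accumulate (indices may repeat)
    np.add.at(forces, j, -f)   # equal and opposite reaction
    v = v + dt * forces / masses[:, None]
    return x + dt * v, v
```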
Where Pith is reading between the lines
- The same dynamic-response merging idea could be tried on other particle or mesh-based physical simulators beyond springs.
- Because every level stays an explicit mechanical model, users could still inspect or hand-tune parameters after reduction.
- The efficiency gain might let researchers test policies over longer horizons or larger environments than before.
Load-bearing premise
Nodes merged because they show similar learned dynamic responses will preserve task-relevant forward dynamics and physical fidelity at every reduction level.
What would settle it
A direct comparison in which a reduced model at a given downsampling level produces measurably higher prediction error or lower manipulation success rate than the unreduced twin on held-out PhysTwin interactions or robot tasks.
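One way to operationalize such a test, sketched under assumed interfaces (the rollout arrays and coarse-to-fine assignment map are hypothetical names, not from the paper): lift the reduced twin's rollout back to full resolution and measure the per-step gap against the unreduced twin.

```python
import numpy as np

def prediction_gap(rollout_full, rollout_reduced, assignment):
    """Mean per-step L2 gap between a full twin's rollout (T, n_fine, d)
    and a reduced twin's rollout (T, n_coarse, d), lifted to full
    resolution via `assignment` (fine node -> coarse cluster index).
    A gap that grows with downsampling level would falsify the premise."""
    lifted = rollout_reduced[:, assignment, :]  # copy each cluster state to its fine nodes
    return float(np.mean(np.linalg.norm(rollout_full - lifted, axis=-1)))
```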
Original abstract
Physics-based digital twins aim to predict the dynamics of real-world objects under interaction, enabling real-to-sim-to-real applications in robotics. Current approaches reconstruct such twins as explicit physical models (such as spring-mass systems) to predict the dynamics, but the resulting models often inherit the resolution of the visual reconstruction rather than being reduced to the physical complexity required to reproduce task-relevant dynamics. This mismatch introduces redundant topology, making repeated forward-dynamics rollouts unnecessarily expensive. To address this challenge, we present PhySPRING, a fully differentiable GNN-based method to reduce complexity in spring-mass digital twins. PhySPRING jointly learns a hierarchy of coarsened graph topologies and their mechanical parameters from observations. At each reduction level, PhySPRING merges nodes with similar learned dynamic responses to optimize the topology, while maintaining every reduced layer as an explicit spring-mass system. On the PhysTwin benchmark, PhySPRING improves dense reconstruction and prediction accuracy over PhysTwin, while reduced models retain stable physical and visual fidelity with up to a 2.30 times speed-up. We further demonstrate the effectiveness of PhySPRING in a Real2Sim robot policy-evaluation pipeline, where the reduced models are substituted zero-shot into ACT and π₀ evaluations, maintaining comparable manipulation success rates across downsampling levels while improving action-sampling effectiveness. Together, PhySPRING enables efficient and structure-preserving spring-mass reduction without sacrificing fidelity or robotic utility.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces PhySPRING, a fully differentiable GNN-based method for structure-preserving reduction of spring-mass physics-informed digital twins. It jointly learns a hierarchy of coarsened graph topologies and mechanical parameters from observations by merging nodes with similar learned dynamic responses at each level, while retaining explicit spring-mass form. On the PhysTwin benchmark it claims improved dense reconstruction and prediction accuracy over PhysTwin baselines, stable physical/visual fidelity, and up to 2.30× speedup; it further shows zero-shot substitution of reduced models into ACT and π₀ policy evaluations with comparable manipulation success rates and improved action sampling.
Significance. If the empirical claims hold under rigorous validation, the work could meaningfully advance efficient real-to-sim-to-real pipelines in robotics by enabling reduced yet faithful physics models that integrate directly with learned policies. The explicit mechanical parameterization and differentiability are clear strengths that support reproducibility and downstream optimization; the zero-shot policy results, if substantiated with detailed metrics, would demonstrate practical utility beyond pure reconstruction tasks.
Major comments (2)
- [Abstract] Abstract and Experiments section: the central claim that node merging by learned dynamic-response similarity preserves task-relevant forward dynamics (including localized contact properties) for downstream policy success is load-bearing, yet the provided description supplies no quantitative tables, error bars, ablation on the similarity metric, or explicit checks (e.g., contact-force error or stiffness preservation) that would separate the merging criterion from the reported reconstruction/prediction metrics. This leaves open the possibility that global metrics remain acceptable while contact-rich regimes degrade.
- The zero-shot ACT/π₀ substitution results (Abstract) report comparable aggregate success rates across downsampling levels, but do not address whether action-sampling effectiveness improvements mask accumulated local errors in contact geometry or mass distribution; a targeted stress test on contact-critical manipulation subtasks would be required to substantiate that reduced models remain faithful for policy evaluation.
Minor comments (1)
- [Abstract] The abstract would benefit from a brief statement of the exact protocol used to measure dynamic-response similarity and the number of reduction levels evaluated, to improve immediate clarity for readers.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. The comments highlight the need for stronger empirical validation of how our node-merging criterion preserves localized contact dynamics and task-relevant behavior in reduced models. We have revised the manuscript to incorporate additional quantitative analyses, ablations, and targeted evaluations as detailed below.
Point-by-point responses
-
Referee: [Abstract] Abstract and Experiments section: the central claim that node merging by learned dynamic-response similarity preserves task-relevant forward dynamics (including localized contact properties) for downstream policy success is load-bearing, yet the provided description supplies no quantitative tables, error bars, ablation on the similarity metric, or explicit checks (e.g., contact-force error or stiffness preservation) that would separate the merging criterion from the reported reconstruction/prediction metrics. This leaves open the possibility that global metrics remain acceptable while contact-rich regimes degrade.
Authors: We agree that the central claim requires more granular empirical support to demonstrate preservation of localized contact properties. In the revised manuscript, we have added quantitative tables in the Experiments section that report mean and standard deviation (error bars) for reconstruction and prediction errors across all downsampling levels. We further include explicit contact-force error and stiffness preservation metrics computed on contact-rich trajectories. An ablation study on the dynamic-response similarity metric (comparing it against geometric proximity and random merging baselines) has been added to isolate its contribution. These results show that global metrics do not mask degradation in contact regimes; the learned similarity criterion yields lower contact-force errors than alternatives while retaining comparable overall accuracy. Revision: yes.
-
Referee: The zero-shot ACT/π₀ substitution results (Abstract) report comparable aggregate success rates across downsampling levels, but do not address whether action-sampling effectiveness improvements mask accumulated local errors in contact geometry or mass distribution; a targeted stress test on contact-critical manipulation subtasks would be required to substantiate that reduced models remain faithful for policy evaluation.
Authors: We concur that aggregate success rates alone are insufficient and that targeted validation on contact-critical subtasks is warranted. The revised manuscript now includes a breakdown of policy success rates specifically for contact-intensive subtasks (grasping, pushing, and precise placement) across reduction levels, along with auxiliary metrics on contact geometry error and mass distribution deviation between full and reduced models. These targeted results confirm that the observed improvements in action sampling do not mask local errors; success rates on contact-critical subtasks remain statistically comparable to the full model, with no significant increase in contact geometry or mass errors. Revision: yes.
Circularity Check
No significant circularity; the method and claims are empirically grounded rather than self-referential.
Full rationale
The paper describes a differentiable GNN procedure that jointly optimizes coarsened spring-mass topologies and parameters directly from observation data, with node merging driven by learned response similarity. Fidelity claims are supported by explicit comparisons to the PhysTwin baseline on reconstruction/prediction metrics, speed-up measurements, and zero-shot substitution into external ACT/π0 policy evaluations. No equations or steps reduce the output metrics to the inputs by algebraic identity or by a self-citation chain whose own justification is internal. The explicit mechanical form and external benchmarks supply independent grounding, so the claims do not rest on a self-referential derivation.
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · tag: unclear
  The relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "PhySPRING merges nodes with similar learned dynamic responses to optimize the topology, while maintaining every reduced layer as an explicit spring-mass system."
- IndisputableMonolith/Foundation/AbsoluteFloorClosure.lean · absolute_floor_iff_bare_distinguishability · tag: unclear
  The relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "Galerkin projection induces coarse spring-mass representation... decoder refines mechanical parameters"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.