pith. machine review for the scientific record.

arxiv: 2604.08636 · v1 · submitted 2026-04-09 · 💻 cs.RO · cs.AI

LEGO: Latent-space Exploration for Geometry-aware Optimization of Humanoid Kinematic Design

Pith reviewed 2026-05-10 17:40 UTC · model grok-4.3

classification 💻 cs.RO cs.AI
keywords: data-driven robot design · kinematic optimization · latent space · manifold learning · humanoid robots · motion retargeting · screw theory

The pith

A compact latent space learned from existing humanoid designs and human motion data enables automated discovery of new kinematic structures via gradient-free optimization.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows how to reduce human intuition in robot design by learning the search space directly from existing mechanical designs rather than defining it by hand. Joint axes are represented with screw theory, then an isometric manifold is learned to create a low-dimensional latent space that keeps geometric relationships intact. Human motion capture is turned into an objective by retargeting motions onto candidate robots and scoring alignment with Procrustes analysis. Gradient-free search inside this latent space then produces new upper-body designs that are kinematically valid and effective at mimicking the reference motions.
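The screw-theory representation mentioned here can be made concrete. Below is a minimal numpy sketch (illustrative, not the authors' code): a revolute joint is a unit axis w through a point q, its twist has linear part v = -w × q, and a serial chain is composed with the product-of-exponentials formula.

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix [w] of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_twist(w, q, theta):
    """SE(3) exponential of a revolute twist: unit axis w through point q,
    rotated by joint angle theta (Rodrigues' formula for the rotation)."""
    w, q = np.asarray(w, float), np.asarray(q, float)
    W = skew(w)
    R = np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)
    v = -np.cross(w, q)                       # linear part of the screw
    p = (np.eye(3) - R) @ np.cross(w, v) + np.outer(w, w) @ v * theta
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

def fk_poe(axes, points, thetas, M):
    """Product-of-exponentials forward kinematics; M is the home pose."""
    T = np.eye(4)
    for w, q, th in zip(axes, points, thetas):
        T = T @ exp_twist(w, q, th)
    return T @ M
```

For a planar two-joint arm with both axes along z and end effector at home position (2, 0, 0), rotating the base joint by 90° swings the end effector to (0, 2, 0), as expected.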

Core claim

By representing joint axes with screw theory and learning an isometric manifold from existing humanoid upper-body designs, the authors obtain a compact geometry-preserving latent space. Gradient-free optimization performed in this space, using an objective derived from motion retargeting and Procrustes analysis of human motion data, yields novel kinematic designs that remain valid and effective for the target motions.

What carries the argument

The isometric manifold learned from screw-theory joint-axis representations of existing humanoid designs, which supplies a tractable latent space for gradient-free optimization driven by motion-derived losses.
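The isometric-manifold step can be illustrated with a toy penalty. Assuming the usual formulation from isometric representation learning — the decoder's pullback metric JᵀJ should stay close to the identity so latent distances track data-space distances — a finite-difference sketch (function names are hypothetical, not the paper's API):

```python
import numpy as np

def decoder_jacobian(f, z, eps=1e-5):
    """Finite-difference Jacobian of decoder f at latent point z."""
    z = np.asarray(z, float)
    f0 = np.asarray(f(z), float)
    J = np.zeros((f0.size, z.size))
    for i in range(z.size):
        dz = np.zeros_like(z)
        dz[i] = eps
        J[:, i] = (np.asarray(f(z + dz), float) - f0) / eps
    return J

def isometry_penalty(f, z):
    """|| J^T J - I ||_F^2: zero iff the decoder is a local isometry at z."""
    J = decoder_jacobian(f, z)
    G = J.T @ J                      # pullback metric on the latent space
    return float(np.sum((G - np.eye(z.size)) ** 2))
```

A decoder with orthonormal columns incurs zero penalty, while a uniform scaling by 2 in two latent dimensions is penalized by ||4I - I||_F² = 18.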

If this is right

  • The design space for new robots can be constrained automatically by data from existing mechanisms instead of manual parameterization.
  • Human motion data supplies a direct, task-specific loss without needing hand-crafted reward functions.
  • Optimization stays inside kinematically feasible regions because new points remain on the learned manifold.
  • Novel designs can be found that match human motion better than the original training examples.
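As a sketch of how gradient-free search in such a latent space proceeds, here is a simple (1+λ) evolution strategy (standing in for whatever optimizer the paper actually uses — an assumption on our part). The objective would decode a latent point into a design, retarget the reference motions, and return the alignment error; only function evaluations are needed, no gradients.

```python
import numpy as np

def latent_search(objective, z0, sigma=0.3, n_iters=200, pop=16, seed=0):
    """(1+lambda) evolution strategy in latent space: sample Gaussian
    perturbations around the incumbent and keep the best improvement."""
    rng = np.random.default_rng(seed)
    z_best = np.asarray(z0, float)
    f_best = objective(z_best)
    for _ in range(n_iters):
        cand = z_best + sigma * rng.standard_normal((pop, z_best.size))
        vals = np.array([objective(c) for c in cand])
        i = int(vals.argmin())
        if vals[i] < f_best:                 # greedy acceptance
            z_best, f_best = cand[i], vals[i]
    return z_best, f_best
```

On a toy quadratic objective this converges close to the optimum; in the paper's setting each evaluation would be far more expensive, which is exactly why a compact latent space matters.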

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same data-driven manifold construction could be applied to lower-body or full-body designs if comparable collections of existing mechanisms and motion data are assembled.
  • Adding dynamic simulation feedback into the latent-space objective might extend the method beyond pure kinematic retargeting to tasks involving balance or contact.
  • Increasing the diversity of the training set of existing designs would enlarge the reachable space of new robots without changing the optimization procedure.

Load-bearing premise

The learned isometric manifold creates a latent space in which gradient-free optimization reliably finds kinematically valid and task-effective designs that generalize beyond the training set of existing mechanisms.

What would settle it

Generate candidate designs via the latent-space optimization and then check whether they produce invalid joint configurations or achieve higher retargeting error than hand-crafted baselines on held-out human motion sequences.

Figures

Figures reproduced from arXiv: 2604.08636 by Chanwoo Kim, Jaewoon Kwon, Jeongeun Park, Jihwan Yoon, Kyungjae Lee, Sungjoon Choi, Taemoon Jeong, Yonghyeon Lee.

Figure 1. Total pipeline for humanoid kinematic structure optimization. First, a dataset of robots is converted to a unified … [image]
Figure 2. Mean best-so-far objective across ten shared … [image]
Figure 3. Latent map of 30 robots (2D IsoAE). Colors show … [image]
Figure 4. Hardware prototypes generated by our design frame… [image]
Figure 5. Demonstration of the three motion-optimized robots from Fig. 4 successfully executing Waving Hello, Chicken … [image]
Original abstract

Designing robot morphologies and kinematics has traditionally relied on human intuition, with little systematic foundation. Motion-design co-optimization offers a promising path toward automation, but two major challenges remain: (i) the vast, unstructured design space and (ii) the difficulty of constructing task-specific loss functions. We propose a new paradigm that minimizes human involvement by (i) learning the design search space from existing mechanical designs, rather than hand-crafting it, and (ii) defining the loss directly from human motion data via motion retargeting and Procrustes analysis. Using screw-theory-based joint axis representation and isometric manifold learning, we construct a compact, geometry-preserving latent space of humanoid upper body designs in which optimization is tractable. We then solve design optimization in this latent space using gradient-free optimization. Our approach establishes a principled framework for data-driven robot design and demonstrates that leveraging existing designs and human motion can effectively guide the automated discovery of novel robot design.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper proposes LEGO, a data-driven framework for humanoid upper-body kinematic design. It represents existing designs via screw-theory joint axes, learns a compact latent space through isometric manifold learning to preserve geometry, and performs gradient-free optimization in this space. The objective is defined directly from human motion data using retargeting and Procrustes analysis, with the goal of discovering novel, task-effective designs while minimizing hand-crafted design spaces and loss functions.

Significance. If the central claims hold, the work offers a principled alternative to intuition-driven robot design by grounding the search space in real mechanical designs and the objective in human motion data. This could reduce the effective dimensionality of morphology optimization and enable more systematic exploration of kinematically plausible humanoid variants.

major comments (2)
  1. [Abstract (and implied Section 3 on latent-space construction)] The core assumption that points reached by gradient-free optimization in the isometric latent space decode to kinematically valid, assemblable mechanisms (with feasible joint limits and no self-collisions) is load-bearing for the claim of automated discovery of novel designs, yet the abstract provides no evidence or enforcement mechanism for these global constraints; isometry preserves local distances but does not automatically guarantee global validity outside the training distribution.
  2. [Abstract] No quantitative results, ablation studies, or baseline comparisons are referenced in the provided abstract, making it impossible to assess whether the optimized designs outperform hand-designed or randomly sampled mechanisms on the retargeting task; this undermines the demonstration that the framework 'effectively guide[s] the automated discovery of novel robot design.'
minor comments (1)
  1. [Abstract] The abstract would benefit from explicit mention of the size and diversity of the existing-design corpus used for manifold learning and the specific quantitative metrics (e.g., Procrustes distance, kinematic error) used to evaluate success.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the detailed and constructive review. We address each major comment below and outline revisions to improve clarity and completeness.

Point-by-point responses
  1. Referee: [Abstract (and implied Section 3 on latent-space construction)] The core assumption that points reached by gradient-free optimization in the isometric latent space decode to kinematically valid, assemblable mechanisms (with feasible joint limits and no self-collisions) is load-bearing for the claim of automated discovery of novel designs, yet the abstract provides no evidence or enforcement mechanism for these global constraints; isometry preserves local distances but does not automatically guarantee global validity outside the training distribution.

    Authors: We agree the abstract does not explicitly describe enforcement of global constraints. The full manuscript constructs the latent space exclusively from a curated dataset of existing, valid humanoid designs via isometric manifold learning on screw-theory joint-axis representations; this ensures decoded points remain kinematically consistent with the training distribution. Gradient-free optimization is performed strictly within the learned manifold to limit extrapolation. We will revise the abstract to note that validity is preserved by the data-driven construction and add a short subsection in Section 3 clarifying post-decoding checks for joint limits and collisions. revision: yes
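The post-decoding checks the rebuttal promises could look like the following minimal sketch (hypothetical helper names; links approximated by bounding spheres, a deliberately conservative stand-in for a real collision checker):

```python
import numpy as np

def within_joint_limits(thetas, lower, upper):
    """Reject configurations that violate the decoded joint limits."""
    thetas = np.asarray(thetas, float)
    return bool(np.all(thetas >= lower) and np.all(thetas <= upper))

def self_collision_free(centers, radii, skip_adjacent=True):
    """Conservative self-collision check: approximate each link by a
    bounding sphere and test every non-adjacent pair for overlap."""
    n = len(centers)
    for i in range(n):
        for j in range(i + 1, n):
            if skip_adjacent and j == i + 1:
                continue  # consecutive links share a joint; skip the pair
            d = np.linalg.norm(np.asarray(centers[i], float)
                               - np.asarray(centers[j], float))
            if d < radii[i] + radii[j]:
                return False
    return True
```

A decoded candidate would pass both checks before being scored; failing either check flags exactly the kind of out-of-distribution decode the referee worries about.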

  2. Referee: [Abstract] No quantitative results, ablation studies, or baseline comparisons are referenced in the provided abstract, making it impossible to assess whether the optimized designs outperform hand-designed or randomly sampled mechanisms on the retargeting task; this undermines the demonstration that the framework 'effectively guide[s] the automated discovery of novel robot design.'

    Authors: We acknowledge that the abstract omits references to quantitative results. Sections 4 and 5 of the manuscript contain the requested evaluations, including retargeting error metrics, ablation studies on the isometric embedding, and direct comparisons to hand-designed mechanisms and random latent-space samples. We will revise the abstract to include concise statements of these key quantitative outcomes and the observed performance gains. revision: yes

Circularity Check

0 steps flagged

No significant circularity detected; the derivation uses independent external data sources.

full rationale

The paper constructs its latent space via isometric manifold learning on screw-theory representations drawn from a separate corpus of existing humanoid designs, and defines its optimization loss directly from independent human motion data through retargeting and Procrustes analysis. Neither step reduces to a self-definition, fitted parameter renamed as prediction, or self-citation chain; both are grounded in external inputs that do not presuppose the novel designs being discovered. The gradient-free optimization in the resulting latent space therefore remains non-circular with respect to the claimed outputs.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

Abstract-only review; the following entries are inferred from the high-level description and would need verification against the full manuscript.

axioms (2)
  • domain assumption Existing mechanical designs form a representative sample from which a geometry-preserving latent space can be learned.
    Invoked when the authors state they learn the design search space from existing designs rather than hand-crafting it.
  • domain assumption Motion retargeting followed by Procrustes analysis yields a task-specific loss that is meaningful for kinematic design optimization.
    Central to the claim that the loss can be defined directly from human motion data.

pith-pipeline@v0.9.0 · 5490 in / 1313 out tokens · 57488 ms · 2026-05-10T17:40:07.606815+00:00 · methodology


Reference graph

Works this paper leans on

59 extracted references · 2 canonical work pages
