pith. machine review for the scientific record.

arxiv: 2605.09216 · v1 · submitted 2026-05-09 · 💻 cs.RO

Recognition: no theorem link

Continuum Robot Modeling with Action Conditioned Flow Matching

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 02:13 UTC · model grok-4.3

classification 💻 cs.RO
keywords continuum robots · tendon-driven continuum robots · flow matching · point cloud modeling · self-modeling · shape prediction · robot kinematics

The pith

A flow matching model maps motor actuation states to the settled 3D geometry of tendon-driven continuum robots.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper tackles the problem of predicting the final settled shape of tendon-driven continuum robots from their motor commands, a task made difficult by continuous deformation, friction, and fabrication differences. It introduces a data-driven method that trains a model on point clouds collected from randomly sampled quasi-static configurations using a custom multi-camera RGB-D setup on 3D-printed hardware. The central technique learns a conditional flow that transforms noise into the target robot geometry given the current motor states. Tests in simulation across 2-, 3-, and 5-module designs and on physical 2- and 3-module robots show lower errors than earlier deformable-object and self-modeling baselines when measured by Chamfer Distance and Earth Mover's Distance. The same conditional structure also accepts tip payload as an extra input in simulation, broadening the prediction task without changing the core architecture.

Core claim

We learn a point cloud flow matching model that maps motor actuation states to the robot's settled 3D geometry. The model is trained from randomly sampled quasi static configurations and evaluated on test motor commands within the same TDCR design family and actuation range, showing improved shape prediction accuracy under CD and EMD metrics compared to prior 3D deformable object and robot self modeling approaches.

What carries the argument

The action-conditioned point cloud flow matching model, which learns a probability flow from a base noise distribution to the target settled geometry distribution while taking motor states as conditioning input.
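The conditioning pattern is simple enough to sketch. Below is a minimal, illustrative construction of one training example, assuming the straight-line probability path common in flow matching; the shapes, the `fm_training_pair` helper, and the sizes are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
N, A = 1024, 6  # points per cloud, motor channels (illustrative sizes)

def fm_training_pair(x1, action, rng):
    """Build one conditional flow matching training example.

    x1     : (N, 3) settled robot point cloud (a data sample)
    action : (A,)   motor actuation state used as conditioning
    Returns the interpolated cloud x_t, the time t, the conditioning
    vector, and the regression target velocity x1 - x0.
    """
    x0 = rng.standard_normal(x1.shape)   # base noise sample
    t = rng.uniform()                    # random time in [0, 1]
    xt = (1.0 - t) * x0 + t * x1         # straight-line probability path
    v_target = x1 - x0                   # constant target velocity along the path
    return xt, t, action, v_target

# A velocity network v_theta(x_t, t, action) would be trained with
# MSE against v_target; at inference, integrating dx/dt = v_theta
# from t = 0 noise to t = 1 yields the predicted settled geometry.
x1 = rng.standard_normal((N, 3))
action = rng.uniform(-1.0, 1.0, size=A)
xt, t, cond, v_target = fm_training_pair(x1, action, rng)
```

On this reading, the paper's payload-conditioned variant amounts to appending the tip payload value to `action` without touching the rest of the pipeline.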

If this is right

  • The model delivers higher shape prediction accuracy for simulated 2-, 3-, and 5-module TDCRs than prior methods.
  • It also improves accuracy on real 2- and 3-module physical robots under the same metrics.
  • The conditional formulation extends directly to tip payload as an additional input, enabling payload-aware steady-state predictions in simulation.
  • The framework supplies a complete data-driven self-modeling pipeline for quasi-static TDCR geometry that requires only motor commands at inference time.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Accurate shape prediction from actions alone could allow closed-loop controllers to compensate for unmodeled compliance without extra sensors.
  • The same conditioning pattern may transfer to other soft or continuum robots by swapping the input variables while keeping the flow matching backbone.
  • Collecting data only from quasi-static poses leaves open the question of whether the model can be fine-tuned online to handle slow drifts in tendon tension or temperature.

Load-bearing premise

Randomly sampled quasi-static configurations provide sufficient coverage for the model to generalize to unseen test motor commands within the same TDCR design family and actuation range.
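One cheap probe of this premise, sketched with synthetic commands (the sample counts, ranges, and uniform distribution are assumptions, not the paper's protocol): measure how far each held-out test command sits from its nearest training command in normalized actuation space.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 6  # e.g. one motor channel per tendon pair (illustrative)
train = rng.uniform(-1.0, 1.0, size=(5000, dim))  # sampled training commands
test = rng.uniform(-1.0, 1.0, size=(200, dim))    # held-out test commands

# Coverage proxy: nearest-neighbor distance from each test command
# to the training set. A heavy tail flags under-sampled pockets where
# reported CD/EMD gains would rest on extrapolation, not interpolation.
d = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=-1)
nearest = d.min(axis=1)
coverage = {
    "median_nn_dist": float(np.median(nearest)),
    "p95_nn_dist": float(np.quantile(nearest, 0.95)),
}
```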

What would settle it

Evaluate the trained model on motor commands drawn from a TDCR with a different module count or actuation values outside the original sampling range, then measure whether the predicted point clouds still achieve the reported CD and EMD accuracy on the actual settled geometry.
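The two metrics themselves are easy to pin down. A reference sketch, assuming equal-size clouds (exact EMD via optimal assignment is only feasible for small clouds; the perturbation magnitude is illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between clouds p (N, 3) and q (M, 3)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def earth_movers_distance(p, q):
    """Exact EMD via optimal one-to-one matching (equal-size clouds only)."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(d)  # Hungarian assignment
    return d[rows, cols].mean()

rng = np.random.default_rng(2)
gt = rng.standard_normal((256, 3))                # ground-truth settled shape
pred = gt + 0.01 * rng.standard_normal((256, 3))  # near-perfect prediction
cd = chamfer_distance(pred, gt)
emd = earth_movers_distance(pred, gt)
```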

Figures

Figures reproduced from arXiv: 2605.09216 by Hod Lipson, Jinchen Ruan, Jiong Lin.

Figure 1. Overview of our action conditioned point flow matching framework for a tendon driven continuum robot (TDCR) shape [PITH_FULL_IMAGE:figures/full_fig_p001_1.png]

Figure 2. Action conditioned flow matching framework. (A) [PITH_FULL_IMAGE:figures/full_fig_p004_2.png]

Figure 3. TDCR hardware platform and real hardware data capture. (a) Three module CAD rendering. (b) Support design for [PITH_FULL_IMAGE:figures/full_fig_p005_3.png]

Figure 4. MuJoCo renderings of our tendon driven continuum [PITH_FULL_IMAGE:figures/full_fig_p006_4.png]

Figure 5. Qualitative comparison on simulated and real TDCR datasets. Rows show simulated 2-/3-/5-module with base settings [PITH_FULL_IMAGE:figures/full_fig_p007_5.png]

Figure 6. Payload conditioned prediction in simulation. Blue/red [PITH_FULL_IMAGE:figures/full_fig_p009_6.png]

Figure 7. Qualitative comparison on simulated no base datasets (2/3/5 modules). Columns show CNF (PointFlow), VSM, NeRF (FFKSM), 3DGS, our method, and the ground truth (GT). For our method, we visualize both MLP and Hybrid velocity networks. [PITH_FULL_IMAGE:figures/full_fig_p013_7.png]

Figure 8. Tendon routing overview. (A) Assembled 3-module [PITH_FULL_IMAGE:figures/full_fig_p014_8.png]

Figure 9. YZ-plane workspace comparison for the simulated 2-, [PITH_FULL_IMAGE:figures/full_fig_p014_9.png]
read the original abstract

Predicting the shape of tendon driven continuum robots (TDCRs) at steady state from actuation remains challenging due to continuous deformation, complex tendon routing, compliance, friction, and fabrication variability. In this paper, we address this problem as kinematic self modeling conditioned on action. We present a lightweight 3D printed TDCR hardware platform and an RGB-D data collection pipeline with multiple cameras, and we learn a point cloud flow matching model that maps motor actuation states to the robot's settled 3D geometry. The model is trained from randomly sampled quasi static configurations and evaluated on test motor commands within the same TDCR design family and actuation range. We compare against prior 3D deformable object and robot self modeling approaches in both MuJoCo simulation and real hardware experiments. Experiments on simulated 2-, 3-, and 5-module TDCRs and real 2- and 3-module robots show improved shape prediction accuracy under CD and EMD metrics. We further show in simulation that the same conditional formulation generalizes to tip payload as a conditioning input, enabling payload conditioned steady-state shape prediction. These results demonstrate a data driven self modeling framework for quasi static TDCR geometry prediction.

Editorial analysis

A structured set of objections, weighed in public.

Referee report, simulated author's rebuttal, circularity audit, and an axiom & free-parameter ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript presents a data-driven self-modeling approach for tendon-driven continuum robots (TDCRs) using action-conditioned flow matching to predict steady-state 3D point cloud geometry from motor actuation states. It describes a 3D-printed hardware platform and RGB-D multi-camera data collection for randomly sampled quasi-static configurations. The conditional flow matching model is trained and evaluated on held-out commands in simulation for 2-, 3-, and 5-module TDCRs and on real 2- and 3-module robots, claiming improved Chamfer Distance and Earth Mover's Distance metrics compared to prior methods. It also demonstrates generalization to tip payload conditioning in simulation.

Significance. If the empirical results hold, the work offers a practical alternative to physics-based modeling for TDCRs, which suffer from complex nonlinear behaviors. The lightweight hardware and data pipeline are valuable contributions for reproducible experiments. The flow matching formulation for point clouds conditioned on actions is a timely application of generative models to robotics. Explicit credit is due for extending the model to payload-conditioned prediction and for providing both simulation and hardware validation across multiple module counts. This could influence self-modeling techniques in soft and continuum robotics.

major comments (2)
  1. [Data Collection] Data Collection section: The central generalization claim requires that random quasi-static sampling densely covers the motor actuation manifold (typically 4–10+ dimensions). No coverage metric, density analysis, or extrapolation test is provided, so the reported CD/EMD gains on the test split could reflect interpolation within well-sampled pockets rather than robust mapping. This assumption is load-bearing for the entire data-driven pipeline.
  2. [§5 (Results)] §5 (Results) and abstract: The claim of 'improved shape prediction accuracy under CD and EMD metrics' is not supported by any numerical values, error bars, or statistical tests in the evaluation summary. Without these, the strength of the comparison to prior 3D deformable object and robot self-modeling baselines cannot be assessed.

minor comments (2)
  1. [§3 (Method)] The point-cloud flow matching architecture diagram would benefit from explicit notation for the conditioning input (motor states) and the time-step embedding.
  2. [Tables] Table captions should include the exact number of training and test samples per module count to allow reproducibility assessment.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the positive assessment of our contributions and for the constructive feedback on the data collection and evaluation aspects. We address each major comment below and are prepared to revise the manuscript accordingly to strengthen the presentation of our results.

read point-by-point responses
  1. Referee: [Data Collection] Data Collection section: The central generalization claim requires that random quasi-static sampling densely covers the motor actuation manifold (typically 4–10+ dimensions). No coverage metric, density analysis, or extrapolation test is provided, so the reported CD/EMD gains on the test split could reflect interpolation within well-sampled pockets rather than robust mapping. This assumption is load-bearing for the entire data-driven pipeline.

    Authors: We agree that the density of coverage in the actuation space is important for supporting generalization claims in a data-driven model. Our pipeline collects a large number of randomly sampled quasi-static configurations within the full operational range of each TDCR (2-, 3-, and 5-module designs), with the test commands drawn from held-out samples in the identical range. While an explicit coverage metric or density analysis was not included in the original submission, the consistent outperformance on held-out test sets across simulation and real hardware provides empirical support that the sampling captures the relevant manifold sufficiently for the reported task. In revision, we will add a description of the total sample counts, the sampling procedure, and a qualitative visualization of actuation-space coverage (e.g., projected histograms or pairwise scatter plots) to make this assumption more transparent. Full quantitative density estimation in 4–10+ dimensions remains challenging without additional experiments, but we believe the current held-out evaluation already addresses the core concern. revision: partial

  2. Referee: [§5 (Results)] §5 (Results) and abstract: The claim of 'improved shape prediction accuracy under CD and EMD metrics' is not supported by any numerical values, error bars, or statistical tests in the evaluation summary. Without these, the strength of the comparison to prior 3D deformable object and robot self-modeling baselines cannot be assessed.

    Authors: We thank the referee for highlighting this presentational issue. The quantitative CD and EMD values for our method versus the baselines, together with error bars derived from multiple independent trials, are already reported in the tables and figures of Section 5 for all simulated and real-robot experiments. To address the concern directly, we will revise both the abstract and the opening paragraph of Section 5 to explicitly quote the key numerical improvements (including the magnitude of gains) and to reference the supporting tables/figures. We can additionally incorporate basic statistical significance tests (e.g., paired t-tests) on the metric differences if the referee considers them necessary for the final version. revision: yes
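The significance test the authors offer is straightforward to run once per-sample metrics exist. A sketch on synthetic per-sample Chamfer Distances (the numbers are invented, not the paper's results):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n_test = 100  # held-out test commands (illustrative)

# Per-sample CD for a baseline and for the proposed model on the
# SAME test commands; the pairing is what ttest_rel exploits.
cd_baseline = rng.normal(0.050, 0.010, size=n_test)
cd_ours = cd_baseline - rng.normal(0.008, 0.003, size=n_test)

t_stat, p_value = ttest_rel(cd_ours, cd_baseline)
improved = (cd_ours.mean() < cd_baseline.mean()) and (p_value < 0.05)
```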

Circularity Check

0 steps flagged

No significant circularity detected

full rationale

The paper presents a standard data-driven pipeline: collect random quasi-static motor commands and corresponding 3D point clouds via RGB-D, train a conditional flow-matching model to map actuation states to geometry, and report CD/EMD metrics on a held-out test set of motor commands drawn from the same range. No equations or claims reduce a prediction to a fitted parameter by construction, no self-citations are invoked as load-bearing uniqueness theorems, and the central result (improved test-set accuracy versus baselines) is independently falsifiable on separate data. The approach is self-contained empirical modeling with no self-definitional or renaming steps.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the assumption that a neural flow-matching network trained on quasi-static point-cloud data can capture the steady-state mapping without explicit physics terms. No new physical constants or entities are introduced.

axioms (1)
  • domain assumption Quasi-static configurations are representative of the settled steady-state geometry under the tested actuation ranges
    Invoked in the data-collection and evaluation protocol described in the abstract.

pith-pipeline@v0.9.0 · 5506 in / 1342 out tokens · 69342 ms · 2026-05-12T02:13:19.156611+00:00 · methodology

discussion (0)

