pith. machine review for the scientific record.

arxiv: 2605.12735 · v1 · submitted 2026-05-12 · 💻 cs.RO

Recognition: unknown

The Unified Autonomy Stack: Toward a Blueprint for Generalizable Robot Autonomy

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 19:38 UTC · model grok-4.3

classification 💻 cs.RO
keywords autonomy stack · robot navigation · multi-modal perception · safe navigation · GNSS-denied · aerial and ground robots · open source software · field testing

The pith

The Unified Autonomy Stack delivers resilient autonomy for diverse aerial and ground robots via integrated perception, planning, and navigation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces the Unified Autonomy Stack as an open-source system for generalizable robot autonomy. It combines three modules that fuse sensor data for localization, plan motions adaptively, and ensure safe navigation even without GPS. This matters for creating robots that can operate reliably in unpredictable real-world settings like disaster zones or cluttered spaces without needing separate software for each robot type or environment. The approach was tested on flying and walking robots in smoke-filled and complex areas, showing it supports exploration and inspection tasks. By making the code public, the work aims to help others build on this foundation for broader robot applications.

Core claim

The Unified Autonomy Stack is a system-level solution centered on three synergistic modules: multi-modal perception for robust localization and semantic understanding through sensor fusion, multi-behavior planning using sampling-based techniques, and multi-layered safe navigation combining map-based planning with learning-driven policies and safety filters. This architecture enables behaviors like safe GNSS-denied navigation in unknown environments, complex exploration, object discovery, and efficient inspection, as demonstrated in field tests on rotorcraft and legged robots in self-similar, smoke-filled, and high-clutter settings.
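The multi-layered navigation idea above (a nominal planner or learned policy, backed by a last-resort safety filter) can be sketched in miniature. This is a minimal illustration under simplifying assumptions (2D single-integrator, one point obstacle, hypothetical function names), not the paper's implementation, which uses control barrier functions over full robot dynamics:

```python
# Hedged sketch of a last-resort safety filter: a nominal velocity command
# (from a planner or learned policy) is minimally corrected so that a
# control-barrier-function condition dh/dt >= -alpha * h(x) holds, with
# h(x) = ||p - p_obs||^2 - eps^2. All names, the 2D single-integrator model,
# and the single point obstacle are illustrative assumptions.

def cbf_filter(pos, vel_cmd, obstacle, eps=1.0, alpha=2.0):
    """Return vel_cmd if it already satisfies the barrier condition,
    otherwise the closest command (in Euclidean norm) that does."""
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    h = dx * dx + dy * dy - eps * eps            # barrier value (>= 0 is safe)
    grad = (2.0 * dx, 2.0 * dy)                  # dh/dp
    hdot = grad[0] * vel_cmd[0] + grad[1] * vel_cmd[1]
    if hdot >= -alpha * h:                       # nominal command already safe
        return vel_cmd
    # A one-constraint QP has a closed form: minimal correction along dh/dp.
    lam = (-alpha * h - hdot) / (grad[0] ** 2 + grad[1] ** 2)
    return (vel_cmd[0] + lam * grad[0], vel_cmd[1] + lam * grad[1])
```

At the safety boundary (h = 0), a command pointing straight at the obstacle is zeroed out, while commands pointing away pass through unchanged; this "intervene only when needed" behavior is what makes such filters usable as a last resort behind a nominal policy.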

What carries the argument

The Unified Autonomy Stack architecture itself, whose three modules of multi-modal perception, multi-behavior planning, and multi-layered safe navigation work together for mission autonomy.

Load-bearing premise

That the three modules continue to integrate and perform synergistically when used on new robot types or in environments different from the tested ones.

What would settle it

Observing whether the stack fails to maintain safe operation or requires major modifications when deployed on a new robot morphology, such as a wheeled vehicle, in a previously untested environment like a dense forest or urban area with moving obstacles.

Figures

Figures reproduced from arXiv: 2605.12735 by Albert Gassol Puigjaner, Angelos Zacharia, Kostas Alexis, Martin Jacquet, Marvin Harms, Mihir Dharmadhikari, Mihir Kulkarni, Morten Nissov, Nikhil Khedekar, Philipp Weiss.

Figure 1
Figure 1. Indicative subset of the evaluation studies conducted to validate and assess the performance of the Unified Autonomy Stack. The tests involve both aerial and ground robots operating in diverse GNSS-denied and, at instances, perceptually-degraded environments including (a) snow-covered forests, (b) underground mines, (c) road tunnels, (d) a frozen lake, (e) ship cargo holds, as well as (f) the university camp…
Figure 2
Figure 2. The architecture of the UAstack. The stack involves three core modules (perception, planning, and navigation) that operate in a synergistic fashion. Aiming for operational resilience in diverse GNSS-denied, perceptually-degraded environments, the UAstack emphasizes multi-modal sensor fusion merging data from LiDAR, radar, and camera sensing, alongside IMU cues. VLM-based reasoning builds upon the geometri…
Figure 3
Figure 3.
Figure 4
Figure 4. Illustration of the current VLM-based functionality of the UAstack for semantic scene mapping and visual Q&A. Open-Vocabulary Object Detection and Semantic 3D Mapping: 3D object detection is formulated as a semantic mapping problem. Objects are detected on the camera image using an open-vocabulary detector (YOLOe) or a VLM-based detector (GPT-5) initialized with a set of labels. These models produce labeled…
Figure 5
Figure 5. Planning module architecture. The planning module is facilitated through OmniPlanner, designed to work universally across aerial, ground, and underwater robot morphologies. At the core, the planner integrates a domain- and morphology-agnostic planning kernel that utilizes the dual map representation to build local and global graphs, both satisfying the robot motion constraints set by the Robot Abstraction L…
Figure 6
Figure 6. Navigation module architecture. Two swappable local navigation modalities are offered: (top) SDF-NMPC, which encodes depth images into a latent SDF representation online and embeds it as a constraint in an NMPC controller; and (bottom) ExRL, which combines inverted range images with proprioceptive state through a policy trained using PPO in simulation to directly output acceleration commands for waypoint-d…
Figure 7
Figure 7. Visualization of the used kappa function with nominal values λ = 1, p = 0.5, σ = 1. The accompanying dynamics read ẋ = f(x) + g(x)u, with f(x) = [0₃ I₃; 0₃ 0₃]x and g(x) = [0₃; I₃] (Eq. 18), where a^W_WB is the linear acceleration and f(x) and g(x)u denote the state system dynamics vectors. The method then represents the "free-space" set as C_x = ∩_{i=1}^{N} {x : ‖p^W_WB − p^W_WOi‖² ≥ ε}, with ε > 0 representing the safety radius …
Figure 8
Figure 8. Ablation study comparing ExRL and SDF-NMPC navigation policies, with and without the C-CBF last-resort safety filter, across obstacle densities (8a) and a sweep of the time constant τd (8b). τd ≤ 0.10 s approximately corresponds to the realistic operating regime; τd = 0.25 s is unrealistic. Since success and stagnation both correspond to collision-free behaviour, the crash rate is the primary safety metric…
Figure 9
Figure 9. Overview of the SLAM performance in the two tunnel environments. The Fyllingsdal tunnel presents an environment with geometric self-similarity, thus causing divergence in LiDAR-geometry-based methods. Multi-modal fusion demonstrates increased performance by replacing the missing observability of the LiDAR measurements with information from either radar or vision. The Runehamar tunnel contains sections of …
Figure 10
Figure 10. Overview of the SLAM performance in the Frozen Lake environment, comparing different uni-modal approaches with the multi-modal LRI configuration. This environment presents difficulty for methods relying only on LiDAR or radar. For the former, the planar geometry of the environment can result in lacking observability in lateral position and yaw. For the radar, the limited number of returns results in incre…
Figure 11
Figure 11. Overview of the SLAM performance of the experiment in the campus environment with the handheld module. The experiment has the trajectory passing through a room filled with dense fog, causing a large increase in the noise present in the LiDAR and vision measurements, whereas the radar remains largely unaffected. The multi-modal fusion retains the accuracy of LiDAR-based methods in nominal conditions alongside th…
Figure 12
Figure 12. Qualitative results of the proposed VLM reasoning system. Left: semantic 3D mapping with open-vocabulary object detections fused into a voxel grid, showing labeled objects and the robot trajectory over time. Right: binary visual question-answering examples, where the model provides "Yes/No" answers with confidence scores and explanations for high-level scene understanding tasks. The figure demonstrates th…
Figure 13
Figure 13. Navigation module evaluation is performed in the forest using AR-2 with SDF-NMPC + C-CBF, ExRL + C-CBF, and C-CBF paired with an unsafe policy (SDF-NMPC but with its collision-avoidance constraints disabled) respectively. The robot was tasked to navigate to a waypoint 38 m in front of it, with a reference path going through trees and branches. Individual experiments are shown, with specific instances high…
Figure 14
Figure 14. Navigation module evaluation is performed in the campus using AR-2 with SDF-NMPC + C-CBF. The robot was tasked to explore the basement autonomously and return home. At two instances, an obstacle was moved into the robot's planned path during the exploration and the return phases respectively, forcing the SDF-NMPC + C-CBF to avoid these previously-absent obstacles in the robot's path. The two specific inst…
Figure 15
Figure 15. Navigation module evaluation is performed in the campus using AR-2 with ExRL + C-CBF. The robot was tasked to explore the basement autonomously and return home. At two instances, an obstacle was moved into the robot's planned path during the exploration and the return phases respectively, forcing the ExRL + C-CBF to avoid these previously-absent obstacles in the robot's path. The two specific instances in…
Figure 16
Figure 16. Full-stack evaluation in a multi-branched section of an underground mine using AR-2 with SDF-NMPC as the navigation method. The robot started in one branch and explored all three branches, repositioning when needed. The SDF-NMPC tracks the planned path closely, resulting in next to zero interventions from the SDF-NMPC or C-CBF. The figure shows the full map of the environment along with key instances in t…
Figure 17
Figure 17. Full-stack evaluation in a multi-branched section of an underground mine using AR-2 with ExRL as the navigation method. The robot started in one branch and explored all three branches, repositioning when needed. The ExRL policy only aims to reach the end of each planned path, presenting larger local deviations, but remaining safe at all times. The figure shows the full map of the environment along with ke…
Figure 18
Figure 18. Full-stack evaluation in the forest using AR-2 with SDF-NMPC as the navigation method. The robot was tasked to explore a given area autonomously and return home. The figure shows the full map and planning instances in the mission. As the SDF-NMPC follows the planned path closely, it needs to intervene infrequently. One such instance where the SDF-NMPC deviates from the path planned by the planning module …
Figure 19
Figure 19. Full-stack evaluation in the forest using AR-2 with ExRL as the navigation method. The robot was tasked to explore a given area autonomously and return home. The figure shows the full map and planning instances in the mission. As can be seen, the ExRL is not formulated to follow the planned path strictly, but successfully navigates towards the end of the path, avoiding obstacles. An image of the robot in the…
Figure 20
Figure 20. Deployment of AR-2 in the ship cargo hold. The robot started with no prior information of the environment. It first performed exploration to map the tank. Upon completion, it switched to the inspection behavior, where it viewed the mapped surfaces with the camera sensor at the desired viewing distance.
Figure 21
Figure 21. Full-stack evaluation in the Løkken underground mine using the GR-1 legged robot. The mission was conducted in one of the mine shafts, presenting a narrow cross-section at times, and gaps on the side. Due to the dual map representation (volumetric and elevation maps), the robot successfully handled these challenges, completing the mission.
Figure 22
Figure 22. Full-stack evaluation was conducted in a university building using the GR-1 legged robot. The robot was tasked with exploring the entire ground floor, which featured both open spaces and narrow corridors. The figure shows the complete map along with the planning instances from the mission.
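The free-space notion from Figure 7 (the intersection of per-obstacle distance constraints, C_x = ∩_i {x : ‖p − p_Oi‖² ≥ ε}) and the distance-field idea behind the SDF-NMPC modality of Figure 6 can be illustrated with a toy query. Point obstacles and a plain Euclidean distance field are deliberate simplifications of the paper's learned latent SDF; function names are hypothetical:

```python
# Hedged sketch: free-space membership and a distance-field query. The real
# stack encodes depth images into a latent SDF online; here the "map" is just
# a list of point obstacles, which is an illustrative assumption.
import math

def sdf(p, obstacles):
    """Distance to the nearest (point) obstacle; larger means more clearance."""
    return min(math.dist(p, o) for o in obstacles)

def in_free_space(p, obstacles, eps=0.25):
    """Membership in C_x = intersection over i of { ||p - p_Oi||^2 >= eps }."""
    return all(math.dist(p, o) ** 2 >= eps for o in obstacles)
```

An NMPC in this spirit would impose `sdf(p_k, obstacles) >= margin` at every predicted state p_k along the horizon, which is how a distance field becomes a collision-avoidance constraint.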
read the original abstract

We introduce and open-source the Unified Autonomy Stack, a system-level solution that enables resilient autonomy across diverse aerial and ground robot morphologies. The architecture centers on three synergistic modules -- multi-modal perception, multi-behavior planning, and multi-layered safe navigation -- that together deliver comprehensive mission autonomy. The stack fuses data from LiDAR, radar, vision, and inertial sensing, enabling (a) robust localization and mapping through factor graph-based fusion, (b) semantic scene understanding, (c) motion and informative path planning through sampling-based techniques adaptive across spatial scales, as well as (d) multi-layered safe navigation both through planning on the online reconstructed map and deep learning-driven exteroceptive policies alongside last-resort safety filters using control barrier functions. The resulting behaviors include safe GNSS-denied navigation into unknown and perceptually-degraded regions, exploration of complex environments, object discovery, and efficient inspection planning. The stack has been field-tested and validated on both aerial (rotorcraft) and ground (legged) robots operating in a host of demanding environments, including self-similar and smoke-filled settings, with complex geometries and high obstacle clutter. These tests demonstrate resilient performance in challenging conditions. To facilitate ease of adoption, we open-source the implementation alongside supporting documentation, validation, and evaluation datasets https://github.com/ntnu-arl/unified_autonomy_stack. A video giving the overview of the paper and the field experiments is available at https://youtu.be/l8Su8OXsM-E.
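The factor graph-based fusion the abstract mentions can be previewed on a toy 1D pose chain: a prior anchors the first pose, odometry factors link consecutive poses, and an absolute measurement (standing in for, e.g., a loop closure) corrects accumulated drift. The stack itself fuses LiDAR/radar/vision/inertial factors over full poses; the scalar states, weights, and hand-rolled solver below are illustrative assumptions only:

```python
# Hedged sketch of factor-graph fusion as weighted least squares on a 1D chain.
def solve_pose_chain(odom, meas, odom_w=1.0, meas_w=1.0, prior_w=100.0):
    """Minimize prior_w*x0^2 + sum_i odom_w*(x[i+1]-x[i]-d_i)^2
    + sum_j meas_w*(x[j]-z_j)^2 by solving the normal equations H x = b."""
    n = len(odom) + 1
    H = [[0.0] * n for _ in range(n)]   # information matrix
    b = [0.0] * n                       # information vector
    H[0][0] += prior_w                  # prior factor anchoring x0 near 0
    for i, d in enumerate(odom):        # relative (odometry) factors
        H[i][i] += odom_w; H[i + 1][i + 1] += odom_w
        H[i][i + 1] -= odom_w; H[i + 1][i] -= odom_w
        b[i] -= odom_w * d; b[i + 1] += odom_w * d
    for j, z in meas:                   # absolute measurement factors
        H[j][j] += meas_w
        b[j] += meas_w * z
    # Gaussian elimination; H is symmetric positive definite for this toy.
    for c in range(n):
        for r in range(c + 1, n):
            f = H[r][c] / H[c][c]
            for k in range(c, n):
                H[r][k] -= f * H[c][k]
            b[r] -= f * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(H[r][k] * x[k] for k in range(r + 1, n))) / H[r][r]
    return x

# Two unit odometry steps dead-reckon to x2 = 2.0; an absolute fix at 1.8
# pulls the fused estimate in between, the characteristic fusion behavior.
poses = solve_pose_chain([1.0, 1.0], [(2, 1.8)])
```

Production systems (e.g., GTSAM, which implements this machinery for full SE(3) states and nonlinear factors) replace the dense elimination with sparse incremental solvers, but the structure of the problem is the same.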

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper introduces the Unified Autonomy Stack, an open-source modular architecture for resilient robot autonomy across aerial (rotorcraft) and ground (legged) morphologies. It centers on three synergistic modules: multi-modal perception fusing LiDAR/radar/vision/inertial data via factor graphs for localization/mapping and semantic understanding; multi-behavior planning using adaptive sampling-based methods across scales; and multi-layered safe navigation combining online map planning, deep-learning exteroceptive policies, and control-barrier-function safety filters. The central claim is that this stack delivers comprehensive mission autonomy—including GNSS-denied navigation, exploration, object discovery, and inspection—in perceptually degraded, cluttered environments, validated through field tests on multiple platforms.

Significance. If the empirical claims hold, the work provides a practical, integrated blueprint for generalizable autonomy that addresses real-world challenges in adverse conditions. The open-source release, supporting documentation, and datasets are explicit strengths that promote reproducibility and adoption. Field validation on distinct morphologies under GNSS-denied and smoke-filled conditions lends concrete support to the resilience narrative, distinguishing the contribution from purely simulation-based or single-platform studies.

major comments (1)
  1. [Abstract / Field Validation] Abstract and validation narrative: the resilience claims are grounded in field tests, yet only qualitative evidence of successful operation is described; quantitative metrics (e.g., localization error, path efficiency, success rates across trials) and systematic failure-case analysis are absent, which is load-bearing for rigorously substantiating cross-morphology performance.
minor comments (2)
  1. [Abstract] The GitHub repository and video links are provided but should be explicitly cross-referenced in the main text with version tags or DOIs to ensure long-term accessibility.
  2. Notation for the three modules is introduced clearly in the abstract; ensure consistent naming and acronym usage throughout the module descriptions to avoid reader confusion.

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for the constructive feedback and positive recommendation for minor revision. We address the single major comment point-by-point below.

read point-by-point responses
  1. Referee: [Abstract / Field Validation] Abstract and validation narrative: the resilience claims are grounded in field tests, yet only qualitative evidence of successful operation is described; quantitative metrics (e.g., localization error, path efficiency, success rates across trials) and systematic failure-case analysis are absent, which is load-bearing for rigorously substantiating cross-morphology performance.

    Authors: We agree that the addition of quantitative metrics and a systematic failure-case analysis would strengthen the substantiation of the cross-morphology resilience claims. The current manuscript and supplementary video emphasize qualitative demonstrations of successful operation in GNSS-denied, smoke-filled, and cluttered environments across rotorcraft and legged platforms. In the revised version we will add a dedicated validation subsection that reports concrete metrics drawn from the field datasets, including localization RMSE from the factor-graph fusion, navigation success rates over repeated trials, path-efficiency comparisons, and a concise analysis of observed failure modes together with the mitigations provided by the multi-layered safety filters. These numbers will be supported by the already-released evaluation datasets. revision: yes
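The metrics the rebuttal promises are straightforward to define; a minimal sketch (in the spirit of trajectory-evaluation tools such as evo, with hypothetical function names and the assumption that trajectories are already time-associated and frame-aligned) might look like:

```python
# Hedged sketch of two of the promised quantitative metrics: localization
# RMSE over a trajectory and a per-trial navigation success rate.
import math

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE) between equal-length lists of
    estimated and ground-truth positions, assumed associated and aligned."""
    assert len(est) == len(gt) and est
    se = sum(sum((a - b) ** 2 for a, b in zip(p, q)) for p, q in zip(est, gt))
    return math.sqrt(se / len(est))

def success_rate(outcomes):
    """Fraction of trials flagged successful (e.g., collision-free arrival)."""
    return sum(outcomes) / len(outcomes)
```

Reporting these per environment and per morphology, alongside the failure-case narrative, is what would make the cross-morphology resilience claim quantitatively checkable.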

Circularity Check

0 steps flagged

No significant circularity; architecture claims rest on empirical validation

full rationale

The paper introduces a modular autonomy architecture consisting of multi-modal perception, multi-behavior planning, and multi-layered safe navigation. Its central claims of resilient cross-morphology performance are supported directly by descriptions of field experiments on rotorcraft and legged platforms in GNSS-denied and perceptually degraded environments, along with open-sourced code and datasets. No equations, parameter-fitting steps, or derivations are present that reduce by construction to the inputs. Self-citations, if any, are not load-bearing for the core claims, which remain externally falsifiable via the reported tests and released artifacts.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The paper introduces no free parameters, invented entities, or ad-hoc axioms; it relies on standard robotics domain assumptions such as reliable sensor models and the existence of feasible control inputs.

axioms (1)
  • domain assumption Standard robotics assumptions including accurate sensor models, reliable low-level control, and the existence of feasible trajectories in the environments tested.
    These background assumptions underpin the perception, planning, and safety-filter modules described in the abstract.

pith-pipeline@v0.9.0 · 5610 in / 1212 out tokens · 43523 ms · 2026-05-14T19:38:46.761019+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

300 extracted references · 172 canonical work pages · 2 internal anchors


  69. [69]

    Curobo: Parallelized collision-free robot motion generation

    Ortiz-Haro, Joaquim and Ha, Jung-Su and Driess, Danny and Karpas, Erez and Toussaint, Marc , month = may, year =. Learning. 2023. doi:10.1109/ICRA48891.2023.10160887 , abstract =

  70. [70]

    , year =

    Lee, John M. , year =. Riemannian manifolds: an introduction to curvature , isbn =

  71. [71]

    , month = may, year =

    LaValle, Steven M. , month = may, year =. Planning. doi:10.1017/CBO9780511546877 , file =

  72. [72]

    2009 , doi =

    Nonlinear. 2009 , doi =

  73. [73]

    Learning

    Peters, Lasse and Fridovich-Keil, David and Ferranti, Laura and Stachniss, Cyrill and Alonso-Mora, Javier and Laine, Forrest , month = may, year =. Learning

  74. [74]

    Nature , author =

    Aerial additive manufacturing with multiple autonomous robots , volume =. Nature , author =. 2022 , pages =. doi:10.1038/s41586-022-04988-4 , language =

  75. [75]

    Shi, Guanya and Hönig, Wolfgang and Yue, Yisong and Chung, Soon-Jo , month = mar, year =. Neural-

  76. [76]

    Advances in Applied Clifford Algebras , author =

    The integration of angular velocity , volume =. Advances in Applied Clifford Algebras , author =. 2017 , keywords =. doi:10.1007/s00006-017-0793-z , abstract =

  77. [77]

    Construction of

    Lindsey, Quentin and Mellinger, Daniel and Kumar, Vijay , file =. Construction of

  78. [78]

    Complementary

    Uzakov, Timur and Nascimento, Tiago P. and Saska, Martin , month = sep, year =. 2020. doi:10.1109/ICUAS48674.2020.9213967 , abstract =

  79. [79]

    Introduction to

    Quan, Quan , year =. Introduction to. doi:10.1007/978-981-10-3382-7 , file =

  80. [80]

    Quaternions,

    Dam, Erik B and Koch, Martin and Lillholm, Martin , file =. Quaternions,

Showing first 80 references.