pith. machine review for the scientific record.

arxiv: 2602.10032 · v2 · submitted 2026-02-10 · 💻 cs.CV · cs.RO

Recognition: 2 theorem links

· Lean Theorem

Perception with Guarantees: Certified Pose Estimation via Reachability Analysis

Authors on Pith · no claims yet

Pith reviewed 2026-05-16 02:33 UTC · model grok-4.3

classification 💻 cs.CV cs.RO
keywords certified pose estimation · reachability analysis · formal verification · neural network verification · 3D localization · camera-based perception · safety-critical systems · cyber-physical systems

The pith

Formal reachability analysis certifies bounds on 3D poses estimated from camera images of known targets.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes a method for certified 3D pose estimation in safety-critical cyber-physical systems, where agents must localize reliably to guarantee safety even in worst cases. It computes formal bounds on the pose using only a camera image and a known target geometry, without depending on external services like GPS. Reachability analysis combined with formal neural network verification produces these guaranteed bounds. A sympathetic reader would care because the method enables provable safety for subsequent actions while remaining efficient for real-time use. Experiments confirm accurate performance on both synthetic data and real-world images.

Core claim

By leveraging reachability analysis and formal neural network verification, the authors compute bounds that are guaranteed to contain the true 3D pose estimated from a single camera image of a perfectly known target geometry. This certified estimation supports formal safety verification for agent actions without reliance on untrustworthy external localization inputs. The approach is demonstrated to run efficiently and accurately in both synthetic and physical experiments.

What carries the argument

Reachability analysis over-approximates the output set of a neural network pose estimator to produce formal bounds on the 3D pose parameters.
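The over-approximation step can be sketched with simple interval arithmetic: an axis-aligned box of inputs is pushed through a small ReLU network, yielding bounds guaranteed to contain every reachable output. This is a minimal illustration of the idea only; the paper's tool chain relies on tighter set representations (e.g., polynomial zonotopes), and the toy network below is an assumption, not the authors' model.

```python
import numpy as np

def interval_bound_propagation(weights, biases, lo, hi):
    """Propagate an input box [lo, hi] through a ReLU network,
    returning guaranteed element-wise bounds on the output."""
    for k, (W, b) in enumerate(zip(weights, biases)):
        center = (lo + hi) / 2.0
        radius = (hi - lo) / 2.0
        c = W @ center + b
        r = np.abs(W) @ radius          # worst-case deviation per output
        lo, hi = c - r, c + r
        if k < len(weights) - 1:        # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Hypothetical 2-layer "pose head": 3 image features -> (x, y, yaw)
W1 = np.array([[0.5, -0.2, 0.1],
               [0.3,  0.8, -0.4]])
b1 = np.array([0.1, -0.1])
W2 = np.array([[1.0, -0.5],
               [0.2,  0.7],
               [-0.3, 0.4]])
b2 = np.zeros(3)

lo, hi = interval_bound_propagation([W1, W2], [b1, b2],
                                    lo=np.zeros(3),
                                    hi=np.full(3, 0.1))
assert np.all(lo <= hi)   # sound box: lower bound never exceeds upper
```

Because each affine step and the monotone ReLU are bounded conservatively, every concrete input in the box maps to an output inside `[lo, hi]`; the price is that the box may be much looser than the true reachable set.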

Load-bearing premise

The target geometry is modeled perfectly and the verification produces bounds tight enough to support practical safety decisions.

What would settle it

A ground-truth measurement showing the actual pose lies outside the computed bounds in a controlled scene with known geometry would disprove the certification.
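The falsification test is a simple containment check: a single ground-truth pose measured outside the certified box refutes the soundness claim. The bounds and poses below are illustrative numbers, not results from the paper.

```python
import numpy as np

def violates_certificate(pose_true, lo, hi):
    """True iff the measured pose falls outside the certified box
    [lo, hi], i.e. a counterexample to the certification claim."""
    pose_true, lo, hi = map(np.asarray, (pose_true, lo, hi))
    return bool(np.any(pose_true < lo) or np.any(pose_true > hi))

# Hypothetical certified bounds on (x, y, z, roll, pitch, yaw)
lo = np.array([1.0, -0.5,  9.8, -0.02, -0.02, 0.10])
hi = np.array([1.2, -0.3, 10.2,  0.02,  0.02, 0.15])

print(violates_certificate([1.1, -0.4, 10.0, 0.0, 0.0, 0.12], lo, hi))  # False
print(violates_certificate([1.5, -0.4, 10.0, 0.0, 0.0, 0.12], lo, hi))  # True
```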

Figures

Figures reproduced from arXiv: 2602.10032 by Matthias Althoff, Tobias Ladner, Yasser Shoukry.

Figure 1
Figure 1: (a) Runway markings as per the ICAO standard [25, Chp. 5]. (b) Runway markings observed from a landing plane [5].
Figure 2
Figure 2: Example transformation given a target T and a pose ξ. The goal of this work is the inverse: how to obtain a certified estimate of ξ given the final image and T^TCF?
Figure 3
Figure 3: Overview: (a) Given a concrete image I* and a pose candidate Ec, we check whether I* is contained in the outer-approximative image ÎEc. (b) If not, Ec is discarded, as then ξ* ∉ Ec. Otherwise, we exploit the relative position of the vertices in I* within the uncertain vertices of Ec (a byproduct of computing ÎEc) to (c) impose constraints on Ec…
Figure 4
Figure 4: Computation of the outer-approximative image for a target with a single polygon (ρ = 1) and an uncertain pose E with angles θx, θy, θz perturbed by 10 degrees: (a) Uncertain vertices V^PCF_1 enclose the vertices of the random samples. (b) Convex hull computation over V^PCF_1. (c) Outer-approximative image ÎE.
Figure 5
Figure 5: Certified pose estimation (Alg. 2): (a) Given an image I* and a (simple) target T^TCF (ρ = 1), (b) the vertices of the (unobservable) target T^PCF are contained in the witness pixels Qc,1,k, k ∈ [ν1], enabling us to impose constraints Cc,1,k ≤ dc,1,k on a pose candidate Ec ⊆ Ξ via the shared latent space to (c) obtain a tight certified estimate of ξ* ∈ Ec|Cc≤dc.
Figure 6
Figure 6: Ambiguity of an (unobservable) target T^PCF given an image I* (shown for τ = 0 in zoomed-in view): (a–c) Possible orientations of T^PCF resulting in the same image I*. (d) Witness pixels must be on the boundary of the target given the pre-computed V^PCF_c,i(·,k).
Figure 7
Figure 7: Geometric consideration: triangle construction of the selected witness pixels (◦) without pixel expansion.
Figure 8
Figure 8: Landing plane scenario: images and resulting certified pose estimates E*.
TABLE I: Comparison of computation time and resulting volume of the certified position estimate E* using different targets: initial filtering using the offline pre-computations (baseline, Filter), and our full approach (Ours).
Figure 9
Figure 9: Example image of recorded dataset: (a) raw image (Raw), and (b) with basic denoising applied (Cleaned).
TABLE II: Results using our approach (Ours) on both methods shown in…
Figure 10
Figure 10: Qualitative examples on the sound image enclosure: (a) Angle perturbation on target 30. (b) Zoom uncertainty (z-axis) on target R. (c) Translational uncertainty (x- and y-axis) on target Stripes. Target T^PCF is shown for the center of the perturbed pose E, respectively.
Figure 11
Figure 11: Example image enclosure of sin(x) and cos(x) on a given domain D.
Figure 12
Figure 12: Example of the support function enclosure for a target with a single polygon (ρ = 1) with uncertain vertices V^PCF_1 via Alg. 1, for a pose E with angles θx, θy, θz perturbed by 10 degrees.
Figure 13
Figure 13: (a) We assume pixels are turned on if the (unobservable) target T intersects with the respective area of the pixel. (b) Pose candidates Ec partitioning pose space Ξ in a hierarchical manner.
Figure 14
Figure 14: Geometry of used targets: (a) 30, (b) R, (c) Stripes, and (d) SlowVehicle.
TABLE III: Ablation study on the image resolution.
  Image Resolution | #Candidates | ζ Filter | Time [s]  | #Witness Pixels
  100 × 100        | 286         | 24.5±3.5 | 1.47±0.46 | 3.8±1.6
  200 × 200        | 1405        | 29.5±5.7 | 2.17±0.50 | 4.2±1.4
  300 × 300        | 1857        | 28.8±7.0 | 2.84±0.85 | 3.3±1.4
  500 × 500        | 2592        | 30.1±6.1 | 4.50±1.15 | 2.8±1.1
Figure 15
Figure 15: Comparison of the number of support function evaluations γ (zoomed-in view).
read the original abstract

Agents in cyber-physical systems are increasingly entrusted with safety-critical tasks. Ensuring safety of these agents often requires localizing the pose for subsequent actions. Pose estimates can, e.g., be obtained from various combinations of lidar sensors, cameras, and external services such as GPS. Crucially, in safety-critical domains, a rough estimate is insufficient to formally determine safety, i.e., guaranteeing safety even in the worst-case scenario, and external services might additionally not be trustworthy. We address this problem by presenting a certified pose estimation in 3D solely from a camera image and a well-known target geometry. This is realized by formally bounding the pose, which is computed by leveraging recent results from reachability analysis and formal neural network verification. Our experiments demonstrate that our approach efficiently and accurately localizes agents in both synthetic and real-world experiments.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims to enable certified 3D pose estimation for safety-critical agents using only a single camera image of a known target geometry. It computes formal bounds on the pose by combining reachability analysis with formal neural-network verification, avoiding reliance on external services such as GPS. Experiments on both synthetic and real-world data are stated to demonstrate that the method is efficient and produces accurate localizations together with the accompanying bounds.

Significance. If the reachable-set bounds prove tight enough for decision-making and the target-geometry assumption holds in practice, the work supplies a concrete route to formally guaranteed vision-based localization. This directly addresses a gap in cyber-physical systems where rough estimates are insufficient for safety proofs and external localization services cannot be trusted.

major comments (2)
  1. [Abstract] Abstract: the claim that experiments 'efficiently and accurately localizes agents' is unsupported by any reported metrics, bound-tightness statistics, runtime figures, or baseline comparisons; without these data the practical relevance of the formal guarantees cannot be assessed.
  2. [Methods] Methods (reachability propagation): the central derivation that maps neural-network outputs to certified 3D pose bounds via reachability analysis is not accompanied by explicit equations or proof sketches showing how image features are lifted to pose intervals; this step is load-bearing for the certification claim.
minor comments (2)
  1. [Assumptions] Clarify the precise assumptions on target-geometry fidelity (e.g., manufacturing tolerances or modeling error) and how they are folded into the reachable sets.
  2. [Experiments] Add a table or plot quantifying bound tightness (e.g., interval width versus ground-truth error) across the reported synthetic and real-world trials.
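The requested bound-tightness summary could take a simple form: per-trial interval width alongside the error between interval center and ground truth. The function and data below are a hypothetical sketch of such a metric, not figures from the paper.

```python
import numpy as np

def tightness_report(lows, highs, truths):
    """Per-trial interval width, center-vs-truth error, and a
    tightness ratio (fraction of the half-width consumed by the
    error; <= 1 whenever the truth lies inside the interval)."""
    lows, highs, truths = map(np.asarray, (lows, highs, truths))
    widths = highs - lows
    errors = np.abs((lows + highs) / 2.0 - truths)
    ratio = errors / (widths / 2.0)
    return widths, errors, ratio

# Toy data: two trials, one pose coordinate each
w, e, r = tightness_report(lows=[[0.9], [1.0]],
                           highs=[[1.3], [1.2]],
                           truths=[[1.05], [1.12]])
assert np.all(r <= 1.0)   # truth inside every interval in this toy data
```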

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. We address each major comment below and have prepared revisions to strengthen the presentation of our results and methods.

read point-by-point responses
  1. Referee: [Abstract] Abstract: the claim that experiments 'efficiently and accurately localizes agents' is unsupported by any reported metrics, bound-tightness statistics, runtime figures, or baseline comparisons; without these data the practical relevance of the formal guarantees cannot be assessed.

    Authors: We agree that the abstract would be strengthened by explicit quantitative support. The experiments section already reports runtime measurements, localization accuracy, bound tightness, and comparisons against non-certified baselines on both synthetic and real-world data. In the revised manuscript we have updated the abstract to include these key figures so that the efficiency and accuracy claims are directly supported by the reported results. revision: yes

  2. Referee: [Methods] Methods (reachability propagation): the central derivation that maps neural-network outputs to certified 3D pose bounds via reachability analysis is not accompanied by explicit equations or proof sketches showing how image features are lifted to pose intervals; this step is load-bearing for the certification claim.

    Authors: We concur that this mapping is central to the certification guarantee. The current text describes the overall pipeline but does not isolate the reachability step with equations. In the revised version we have inserted the explicit interval-arithmetic equations that lift the verified neural-network output intervals to 3D pose bounds, together with a short soundness argument showing that the computed pose intervals enclose all poses consistent with the input image. revision: yes
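The kind of interval-arithmetic lift the rebuttal describes can be sketched for the special case of a fixed affine decoding from verified network outputs to pose parameters. The map `pose = A @ y + b` and the matrices below are illustrative stand-ins; the paper's actual decoding is not reproduced here.

```python
import numpy as np

def lift_to_pose_box(out_lo, out_hi, A, b):
    """Lift verified output intervals [out_lo, out_hi] through the
    affine map pose = A @ y + b; the returned box contains A @ y + b
    for every y in the input box."""
    c = (out_lo + out_hi) / 2.0
    r = (out_hi - out_lo) / 2.0
    pc = A @ c + b
    pr = np.abs(A) @ r            # |A| absorbs the sign of each entry
    return pc - pr, pc + pr

# Hypothetical decoding from 2 network outputs to 3 pose parameters
A = np.array([[2.0, -1.0],
              [0.5,  0.5],
              [0.0,  1.0]])
b = np.array([0.0, 1.0, -0.5])

p_lo, p_hi = lift_to_pose_box(np.array([-0.1, 0.2]),
                              np.array([ 0.1, 0.4]), A, b)
assert np.all(p_lo <= p_hi)   # well-formed pose box
```

A short soundness argument mirrors the code: the center maps exactly, and `|A| @ r` bounds the deviation of `A @ (y - c)` coordinate-wise for any `y` in the box.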

Circularity Check

0 steps flagged

No significant circularity in the derivation chain

full rationale

The paper's central claim is that certified 3D pose bounds can be obtained from a single camera image of a known target geometry by applying reachability analysis and formal neural network verification. This is presented as leveraging independent, external recent results in those formal methods rather than deriving the bounds from any self-defined parameters, fitted subsets of the target data, or prior self-citations that would reduce the claim to its own inputs. No self-definitional steps, fitted-input-as-prediction patterns, or load-bearing self-citation chains appear in the abstract or described structure; the derivation remains self-contained against external formal benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The central claim rests on the applicability of existing reachability analysis and formal neural network verification techniques to the pose estimation problem; no new free parameters, axioms beyond standard domain assumptions, or invented entities are introduced in the abstract.

axioms (2)
  • domain assumption Reachability analysis can compute sound over-approximations of reachable sets for the pose estimation system
    Invoked to formally bound the pose from the camera image
  • domain assumption Formal verification methods can be applied to the neural network component to produce certified bounds
    Used to ensure the computed pose bounds are reliable

pith-pipeline@v0.9.0 · 5439 in / 1333 out tokens · 67247 ms · 2026-05-16T02:33:52.946368+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

51 extracted references · 51 canonical work pages

  1. [1]

    Journal of the Franklin Institute (2023)

    Alanwar, A., Rath, J.J., Said, H., Johansson, K.H., Althoff, M.: Dis- tributed set-based observers using diffusion strategies. Journal of the Franklin Institute (2023)

  2. [2]

    Althoff, M.: Reachability analysis and its application to the safety assessment of autonomous cars. Ph.D. thesis, Technische Universität München (2010)

  3. [3]

    In: Proc

    Althoff, M.: An introduction to CORA 2015. In: Proc. of the 1st and 2nd Workshop on Applied Verification for Continuous and Hybrid Systems. pp. 120–151 (2015)

  4. [4]

    Foundations and Trends in Machine Learning (2023)

    Angelopoulos, A.N., Bates, S.: Conformal prediction: A gentle intro- duction. Foundations and Trends in Machine Learning (2023)

  5. [5]

    (2025), https://www.boldmethod.com/learn-to-fly/regulations/ runway-markings-and-spacing/

    Boldmethod: Runway stripes and markings, explained. (2025), https://www.boldmethod.com/learn-to-fly/regulations/ runway-markings-and-spacing/

  6. [6]

    Computational Geometry (2006)

    Brönnimann, H., Chan, T.M.: Space-efficient algorithms for computing the convex hull of a simple polygonal line in linear time. Computational Geometry (2006)

  7. [7]

    IEEE Transactions on Robotics (2017)

    Cadena, C., Carlone, L., Carrillo, H., Latif, Y ., Scaramuzza, D., Neira, J., Reid, I., Leonard, J.J.: Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Transactions on Robotics (2017)

  8. [8]

    IEEE Transactions on Neural Networks and Learning Systems (2023)

    Chen, C., Wang, B., Lu, C.X., Trigoni, N., Markham, A.: Deep learning for visual localization and mapping: A survey. IEEE Transactions on Neural Networks and Learning Systems (2023)

  9. [9]

    IEEE Transactions on Industrial Electronics (2015)

    Chen, W.H., Yang, J., Guo, L., Li, S.: Disturbance-observer-based control and related methods – An overview. IEEE Transactions on Industrial Electronics (2015)

  10. [10]

    In: Interna- tional Conference on Artificial Neural Networks

    Chowdhury, S., Khandelwal, H., D’Souza, M.: Robustness verification for object detectors using set-based reachability analysis. In: Interna- tional Conference on Artificial Neural Networks. pp. 481–492 (2025)

  11. [11]

    In: International Conference on Computer Aided Verification (2000)

    Clarke, E., Grumberg, O., Jha, S., Lu, Y ., Veith, H.: Counterexample- guided abstraction refinement. In: International Conference on Computer Aided Verification (2000)

  12. [12]

    Springer (2008)

    De Berg, M., Cheong, O., Van Kreveld, M., Overmars, M.: Computa- tional geometry: Algorithms and applications. Springer (2008)

  13. [13]

    IEEE Transactions on Robotics (2023)

    Ebadi, K., Bernreiter, L., Biggie, H., Catt, G., Chang, Y ., Chatterjee, A., Denniston, C.E., Deschênes, S.P., Harlow, K., Khattak, S., et al.: Present and future of slam in extreme environments: The DARPA SubT challenge. IEEE Transactions on Robotics (2023)

  14. [14]

    IEEE Transactions on Pattern Analysis and Machine Intelligence (2020)

    Gallego, G., Delbrück, T., Orchard, G., Bartolozzi, C., Taba, B., Censi, A., Leutenegger, S., Davison, A.J., Conradt, J., Daniilidis, K., et al.: Event-based vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence (2020)

  15. [15]

    Journal of Machine Learning Research (2015)

    Garcıa, J., Fernández, F.: A comprehensive survey on safe reinforcement learning. Journal of Machine Learning Research (2015)

  16. [16]

    Image and Vision Computing (2021)

    Garcia-Salguero, M., Briales, J., Gonzalez-Jimenez, J.: Certifiable rela- tive pose estimation. Image and Vision Computing (2021)

  17. [17]

    IFAC Proceedings V olumes (2008)

    Girard, A., Le Guernic, C.: Efficient reachability analysis for linear systems using support functions. IFAC Proceedings V olumes (2008)

  18. [18]

    Addison Wesley, 2 edn

    Goldstein, H.: Classical mechanics. Addison Wesley, 2 edn. (1980)

  19. [19]

    In: International Conference on Learning Rep- resentations (2015)

    Goodfellow, I., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Rep- resentations (2015)

  20. [20]

    International Conference on Learning Represen- tations (2015)

    Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. International Conference on Learning Represen- tations (2015)

  21. [21]

    Journal of Field Robotics (2020)

    Grigorescu, S., Trasnea, B., Cocias, T., Macesanu, G.: A survey of deep learning techniques for autonomous driving. Journal of Field Robotics (2020)

  22. [22]

    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2023)

    Habeeb, P., Deka, N., D’Souza, D., Lodaya, K., Prabhakar, P.: Veri- fication of camera-based autonomous systems. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2023)

  23. [23]

    IEEE Transactions on Systems, Man, and Cybernetics (1989)

    Haralick, R.M., Joo, H., Lee, C.N., Zhuang, X., Vaidya, V .G., Kim, M.B.: Pose estimation from corresponding point data. IEEE Transactions on Systems, Man, and Cybernetics (1989)

  24. [24]

    Cambridge University Press, 2 edn

    Hartley, R., Zisserman, A.: Multiple view geometry in computer vision. Cambridge University Press, 2 edn. (2003)

  25. [25]

    ICAO (2018)

    International Civil Aviation Organization: Annex 14, V olume I, Aero- drome Design and Operations. ICAO (2018)

  26. [26]

    In: The Thirty-ninth Annual Conference on Neural Information Processing Systems (2025)

    Ji, C., Li, Y ., Zhong, X., Zhang, H., Mitra, S.: Abstract rendering: Certified rendering under 3D semantic uncertainty. In: The Thirty-ninth Annual Conference on Neural Information Processing Systems (2025)

  27. [27]

    Johnson, T.T.: Is neural network verification useful and what is next? In: 2025 61st Allerton Conference on Communication, Control, and Computing Proceedings (2025)

  28. [28]

    In: International Conference on Computer Aided Verification (2017)

    Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Relu- plex: An efficient SMT solver for verifying deep neural networks. In: International Conference on Computer Aided Verification (2017)

  29. [29]

    The 6th international verification of neural networks competition (vnn-comp 2025): Summary and results,

    Kaulen, K., Ladner, T., Bak, S., Brix, C., Duong, H., Flinkow, T., Johnson, T.T., Koller, L., Manino, E., Nguyen, T.H., Hoaze, W.: The 6th international verification of neural networks competition (VNN-COMP 2025): Summary and results. arXiv preprint arXiv:2512.19007 (2025)

  30. [30]

    In: IEEE Transactions on Automatic Control (2020)

    Kochdumper, N., Althoff, M.: Sparse polynomial zonotopes: A novel set representation for reachability analysis. In: IEEE Transactions on Automatic Control (2020)

  31. [31]

    In: NASA Formal Methods Symposium (2023)

    Kochdumper, N., Schilling, C., Althoff, M., Bak, S.: Open- and closed- loop neural network verification using polynomial zonotopes. In: NASA Formal Methods Symposium (2023)

  32. [32]

    International Conference on Learning Representations (2026)

    Koller, L., Ladner, T., Althoff, M.: Out of the shadows: Exploring a latent space for neural network verification. International Conference on Learning Representations (2026)

  33. [33]

    In: Proceedings of the 26th ACM International Conference on Hybrid Systems: Computation and Control (2023)

    Ladner, T., Althoff, M.: Automatic abstraction refinement in neural network verification using sensitivity analysis. In: Proceedings of the 26th ACM International Conference on Hybrid Systems: Computation and Control (2023)

  34. [34]

    Transactions on Machine Learning Research (2025)

    Ladner, T., Eichelbeck, M., Althoff, M.: Formal verification of graph convolutional networks with uncertain node features and uncertain graph structure. Transactions on Machine Learning Research (2025)

  35. [35]

    ACM Transactions on Cyber-Physical Systems (2025)

    Luo, X., Wei, T., Liu, S., Wang, Z., Mattei-Mendez, L., Loper, T., Neigh- bor, J., Hutchison, C., Liu, C.: Certifying robustness of learning-based keypoint detection and pose estimation methods. ACM Transactions on Cyber-Physical Systems (2025)

  36. [36]

    In: Principles of Verification: Cycling the Probabilistic Landscape, pp

    Mitra, S., P ˘as˘areanu, C., Prabhakar, P., Seshia, S.A., Mangal, R., Li, Y ., Watson, C., Gopinath, D., Yu, H.: Formal verification techniques for vision-based autonomous systems – A survey. In: Principles of Verification: Cycling the Probabilistic Landscape, pp. 89–108 (2024)

  37. [37]

    IEEE/ASME Transactions On Mechatronics (2017)

    Nazemzadeh, P., Fontanelli, D., Macii, D., Palopoli, L.: Indoor localiza- tion of mobile robots through QR code detection and dead reckoning data fusion. IEEE/ASME Transactions On Mechatronics (2017)

  38. [38]

    IEEE Communications Surveys & Tutorials (2022)

    Pirayesh, H., Zeng, H.: Jamming attacks and anti-jamming strategies in wireless networks: A comprehensive survey. IEEE Communications Surveys & Tutorials (2022)

  39. [39]

    IEEE Transactions on Robotics (2023) 11

    Placed, J.A., Strader, J., Carrillo, H., Atanasov, N., Indelman, V ., Car- lone, L., Castellanos, J.A.: A survey on active simultaneous localization and mapping: State of the art and new frontiers. IEEE Transactions on Robotics (2023) 11

  40. [40]

    IEEE Transactions on Automatic Control (2011)

    Raïssi, T., Efimov, D., Zolghadri, A.: Interval state estimation for a class of nonlinear systems. IEEE Transactions on Automatic Control (2011)

  41. [41]

    arxiv (2026)

    Santa Cruz, U., Elfar, M., Shoukry, Y .: Correct-by-construction: Vision- based pose estimation using geometric generative models. arxiv (2026)

  42. [42]

    In: NASA Formal Methods Symposium (2022)

    Santa Cruz, U., Shoukry, Y .: NNLander-VeriF: A neural network formal verification framework for vision-based autonomous aircraft landing. In: NASA Formal Methods Symposium (2022)

  43. [43]

    In: 62nd IEEE Conference on Decision and Control (2023)

    Santa Cruz, U., Shoukry, Y .: Certified vision-based state estimation for autonomous landing systems using reachability analysis. In: 62nd IEEE Conference on Decision and Control (2023)

  44. [44]

    Automatica (2016)

    Scott, J.K., Raimondo, D.M., Marseglia, G.R., Braatz, R.D.: Constrained zonotopes: A new tool for set-based estimation and fault detection. Automatica (2016)

  45. [45]

    IEEE Transactions on Robotics (2023)

    Talak, R., Peng, L.R., Carlone, L.: Certifiable object pose estimation: Foundations, learning models, and self-training. IEEE Transactions on Robotics (2023)

  46. [46]

    In: Proceedings of the 37th International Technical Meeting of the Satellite Division of The Institute of Navigation (2024)

    Wu, D.L., Csar, C., Salinas, J.H.: GPS jamming: A historical record from global radio occultation (RO) observations. In: Proceedings of the 37th International Technical Meeting of the Satellite Division of The Institute of Navigation (2024)

  47. [47]

    In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (2023)

    Yang, H., Pavone, M.: Object pose estimation with statistical guarantees: Conformal keypoint detection and geometric uncertainty propagation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (2023)

  48. [48]

    IEEE Transactions on Robotics (2020)

    Yang, H., Shi, J., Carlone, L.: TEASER: Fast and certifiable point cloud registration. IEEE Transactions on Robotics (2020)

  49. [49]

    In: IEEE International Conference on Robotics and Biomimetics (2015) APPENDIX A

    Zhang, H., Zhang, C., Yang, W., Chen, C.Y .: Localization and navigation using QR code for mobile robot in indoor environment. In: IEEE International Conference on Robotics and Biomimetics (2015) APPENDIX A. Polynomial Zonotopes and Image Enclosures We construct a running example to illustrate the image enclosure (Prop. 1) using polynomial zonotopes (Def....

  50. [50]

    III the effect of the image resolution on the results

    Ablating the image resolution.:We ablate in Tab. III the effect of the image resolution on the results. Interestingly, despite requiring more pose candidates with larger image res- olution, the average after the initial filtering is applied (Alg. 2, line 6) stays roughly constant and thus also the computation time. We also show the average number of witne...

  51. [51]

    1, we discuss a convex hull enclosure by runningγsupport function evaluations

    Ablating the support function heuristics.:In Alg. 1, we discuss a convex hull enclosure by runningγsupport function evaluations. The detailed process is described in Appendix C, and is also illustrated in Fig. 12. In this ablation study, we provide further insights by present- ing qualitative examples of different heuristics for selecting support function...