pith. machine review for the scientific record.

arxiv: 2605.00147 · v1 · submitted 2026-04-30 · 💻 cs.CV

Recognition: unknown

From Images2Mesh: A 3D Surface Reconstruction Pipeline for Non-Cooperative Space Objects

Authors on Pith: no claims yet

Pith reviewed 2026-05-09 20:07 UTC · model grok-4.3

classification 💻 cs.CV
keywords neural implicit surfaces · 3D reconstruction · space objects · on-orbit imagery · camera pose estimation · background removal · photometric correction

The pith

A pipeline reconstructs 3D surfaces of non-cooperative space objects from real monocular on-orbit inspection videos by first removing backgrounds and correcting exposure.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that neural implicit surface reconstruction can be applied to actual monocular footage of space objects when standard methods are augmented with targeted preprocessing. Background variation across frames defeats direct camera pose estimation, so the pipeline inserts segmentation to isolate the object before running COLMAP. Per-frame exposure changes are then corrected photometrically to stabilize input to the implicit model. The approach is shown to produce usable surfaces on released STS-119 ISS inspection video and on-orbit footage of an H-IIA rocket upper stage. A sympathetic reader would care because most existing objects in orbit lack controlled 3D scans, yet characterization of their geometry is required for debris removal and servicing missions.
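To make the pipeline's shape concrete, a minimal sketch of the two steps the paper identifies as decisive (masking the background, then handing masked frames to COLMAP) might look like the following. This is not the authors' released code: the brightness-threshold segmenter is a crude stand-in for a learned model (the reference list points to a SAM-family segmenter), and the COLMAP invocation uses the standard CLI commands rather than whatever configuration the paper used.

```python
# Sketch of the preprocessing-then-pose-estimation stage. Assumptions:
# frames are already extracted to PNGs, and a crude brightness threshold
# stands in for the learned segmenter the paper's reference list suggests.
import subprocess
from pathlib import Path

import cv2
import numpy as np


def segment_object(frame: np.ndarray, thresh: int = 30) -> np.ndarray:
    """Stand-in segmenter: against a black star field, treat bright pixels
    as object and keep the largest connected component to drop stars."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = (gray > thresh).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n > 1:
        largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        mask = (labels == largest).astype(np.uint8)
    return mask


def mask_frames(src_dir: Path, dst_dir: Path) -> None:
    """Zero out the varying background so feature matching sees only the object."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    for path in sorted(src_dir.glob("*.png")):
        frame = cv2.imread(str(path))
        masked = frame * segment_object(frame)[..., None]
        cv2.imwrite(str(dst_dir / path.name), masked)


def run_colmap(image_dir: Path, workspace: Path) -> None:
    """Standard COLMAP CLI sequence: features, matching, incremental mapping."""
    workspace.mkdir(parents=True, exist_ok=True)
    db = str(workspace / "database.db")
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", db, "--image_path", str(image_dir)], check=True)
    subprocess.run(["colmap", "exhaustive_matcher", "--database_path", db], check=True)
    (workspace / "sparse").mkdir(exist_ok=True)
    subprocess.run(["colmap", "mapper", "--database_path", db,
                    "--image_path", str(image_dir),
                    "--output_path", str(workspace / "sparse")], check=True)
```

The paper's observation is that skipping the masking step causes the mapper stage to fail outright on real on-orbit footage, because the background varies between frames.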

Core claim

The central claim is that a neural implicit surface reconstruction pipeline can be applied to real monocular inspection imagery of non-cooperative space objects when preceded by segmentation-based background removal and photometric exposure correction, as demonstrated on STS-119 ISS footage and H-IIA rocket upper stage imagery.

What carries the argument

The preprocessing sequence: segmentation for background removal (to enable reliable COLMAP pose estimation), followed by photometric correction of per-frame exposure variations, before neural implicit surface reconstruction.
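The paper's photometric step is PPISP [21]; the idea of per-frame exposure correction can be conveyed by a much simpler gain-only stand-in that scales each frame so its object pixels match a reference frame's mean brightness. The reference-frame choice and gain-only model below are illustrative assumptions, not the paper's method.

```python
# Toy per-frame exposure correction: a gain-only stand-in for PPISP.
import numpy as np


def exposure_normalize(frames: list[np.ndarray],
                       masks: list[np.ndarray],
                       ref_idx: int = 0) -> list[np.ndarray]:
    """Scale each frame so its object-pixel mean matches the reference frame's."""
    ref_mean = frames[ref_idx].astype(np.float32)[masks[ref_idx] > 0].mean()
    out = []
    for frame, mask in zip(frames, masks):
        f = frame.astype(np.float32)
        gain = ref_mean / max(f[mask > 0].mean(), 1e-6)  # guard empty masks
        out.append(np.clip(f * gain, 0, 255).astype(np.uint8))
    return out
```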

If this is right

  • Camera pose estimation succeeds on real footage where direct processing fails due to background variation.
  • Performance in shadowed regions varies with the illumination characteristics of the input footage.
  • The pipeline operates on publicly released mission imagery without requiring known poses or laboratory conditions.
  • It supplies geometry and structural condition data needed for active debris removal and on-orbit servicing planning.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the preprocessing generalizes, reconstruction could become feasible from a larger archive of existing satellite inspection videos.
  • Future inspection missions might rely less on dedicated multi-camera or known-pose hardware.
  • Similar segmentation-plus-correction steps could be tested for reconstructing objects under other uncontrolled outdoor or orbital lighting conditions.

Load-bearing premise

The assumption that segmentation and photometric correction steps will generalize across varied on-orbit illumination and background conditions without introducing artifacts that degrade the final surface reconstruction.

What would settle it

Running the pipeline unchanged on additional independent on-orbit inspection videos and observing whether pose estimation succeeds and the resulting surfaces remain free of artifacts from the correction steps.

Figures

Figures reproduced from arXiv: 2605.00147 by Bala Prenith Reddy Gopu, Christopher McKenna, David Hinckley, George M. Nehma, Madhur Tiwari, Matt Ueckermann, Patrick Quinn.

Figure 1: Overview of our pipeline comprising temporal frame extraction, background … [PITH_FULL_IMAGE:figures/full_fig_p006_1.png] view at source ↗
Figure 2: Sample frames from the publicly released STS-119 inspection video and their … [PITH_FULL_IMAGE:figures/full_fig_p007_2.png] view at source ↗
Figure 3: Recovered camera trajectories (red frustums) and sparse point clouds from … [PITH_FULL_IMAGE:figures/full_fig_p011_3.png] view at source ↗
Figure 4: Reconstructed mesh of the ISS generated by Neuralangelo without PPISP post … [PITH_FULL_IMAGE:figures/full_fig_p012_4.png] view at source ↗
Figure 5: Reconstructed mesh of the H-IIA upper stage generated by Neuralangelo without … [PITH_FULL_IMAGE:figures/full_fig_p012_5.png] view at source ↗
Figure 6: Reprojection of the reconstructed mesh of the H-IIA upper stage without PPISP … [PITH_FULL_IMAGE:figures/full_fig_p013_6.png] view at source ↗
Figure 7: PPISP evaluation report for the STS-119 dataset. [PITH_FULL_IMAGE:figures/full_fig_p014_7.png] view at source ↗
Figure 8: PPISP evaluation report for the ADRAS-J dataset. [PITH_FULL_IMAGE:figures/full_fig_p015_8.png] view at source ↗
Figure 9: Qualitative comparison of the reconstructed mesh of the ISS without PPISP … [PITH_FULL_IMAGE:figures/full_fig_p016_9.png] view at source ↗
Figure 10: Qualitative comparison of the reconstructed mesh of the ISS without PPISP … [PITH_FULL_IMAGE:figures/full_fig_p016_10.png] view at source ↗
Figure 11: Qualitative comparison of the reconstructed mesh of the ISS without PPISP … [PITH_FULL_IMAGE:figures/full_fig_p016_11.png] view at source ↗
Figure 12: Qualitative comparison of the reconstructed mesh of the ISS without PPISP … [PITH_FULL_IMAGE:figures/full_fig_p017_12.png] view at source ↗
Figure 13: Qualitative comparison of the reconstructed mesh of the H-IIA upper stage … [PITH_FULL_IMAGE:figures/full_fig_p017_13.png] view at source ↗
Figure 14: Qualitative comparison of the reconstructed mesh of the H-IIA upper stage … [PITH_FULL_IMAGE:figures/full_fig_p018_14.png] view at source ↗
Figure 15: Qualitative comparison of the reconstructed mesh of the H-IIA upper stage … [PITH_FULL_IMAGE:figures/full_fig_p018_15.png] view at source ↗
Figure 16: Qualitative comparison of the reconstructed mesh of the H-IIA upper stage … [PITH_FULL_IMAGE:figures/full_fig_p019_16.png] view at source ↗
read the original abstract

On-orbit inspection imagery is crucial as it enables characterization of non-cooperative resident space objects, providing the geometry and structural condition essential for active debris removal and on-orbit servicing mission planning. However, most existing neural implicit surface reconstruction methods have been confined to synthetic or hardware-in-the-loop data with known camera poses and controlled illumination. In this work, we present a pipeline for neural implicit surface reconstruction of non-cooperative space objects from monocular inspection imagery. We demonstrate it on publicly released ISS inspection footage from the STS-119 mission and publicly released on-orbit inspection footage of an H-IIA rocket upper stage. We find that segmentation-based background removal is essential for successful camera pose estimation from real on-orbit footage, where background variation between frames caused direct processing to fail entirely. We further incorporate photometric correction of per-frame exposure variations and analyze its behavior across datasets, finding that performance in shadowed regions varies with the illumination characteristics of the input footage.

Editorial analysis

A structured set of objections, weighed in public.

Referee report, simulated author's rebuttal, a circularity audit, and an axiom and free-parameter ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper presents a practical pipeline for neural implicit 3D surface reconstruction of non-cooperative space objects from monocular on-orbit inspection imagery. It combines segmentation-based background removal (shown to be essential for COLMAP-style pose estimation on real footage), per-frame photometric correction for exposure variations, and a neural implicit surface method. The approach is demonstrated on two public real datasets (STS-119 ISS inspection footage and H-IIA rocket upper stage), with success claimed via visual inspection of reconstructed meshes and renderings where direct application of reconstruction methods fails.

Significance. If the central claim holds, the work would be significant for space situational awareness, active debris removal, and on-orbit servicing, as it moves neural implicit reconstruction from synthetic/controlled settings to real monocular on-orbit data. The explicit identification of segmentation and photometric correction as critical preprocessing steps offers actionable guidance. The use of publicly released real footage is a strength that enables reproducibility and community follow-up.

major comments (2)
  1. [Results section] Results section (and abstract): The claim of 'successful demonstration' and that 'segmentation-based background removal is essential' rests entirely on qualitative visual inspection of meshes and renderings on the two real datasets, with no quantitative metrics (Chamfer distance, normal consistency, pose estimation error, or surface accuracy against any reference geometry) and no ablation studies isolating the effect of segmentation or photometric correction on final reconstruction fidelity. This is load-bearing for the central claim that the pipeline enables recovery where direct methods fail.
  2. [Photometric correction analysis] Section on photometric correction analysis: The statement that 'performance in shadowed regions varies with the illumination characteristics' is presented without quantitative measures of variation, error maps, or controlled comparisons across the STS-119 and H-IIA datasets, leaving the analysis of correction behavior qualitative and difficult to generalize.
minor comments (2)
  1. [Methods] The manuscript should specify the exact configuration of the neural implicit surface method used (Neuralangelo, per the figures) and all hyperparameters for training and inference to improve reproducibility.
  2. [Figures] Figure captions and legends could be expanded to describe what specific visual features (e.g., handling of shadows or background removal artifacts) the reader is intended to observe in the reconstructed meshes.

Simulated Author's Rebuttal

2 responses · 1 unresolved

Thank you for the constructive review. We address the major comments point by point below, with honest acknowledgment of evaluation constraints inherent to real on-orbit data.

read point-by-point responses
  1. Referee: [Results section] Results section (and abstract): The claim of 'successful demonstration' and that 'segmentation-based background removal is essential' rests entirely on qualitative visual inspection of meshes and renderings on the two real datasets, with no quantitative metrics (Chamfer distance, normal consistency, pose estimation error, or surface accuracy against any reference geometry) and no ablation studies isolating the effect of segmentation or photometric correction on final reconstruction fidelity. This is load-bearing for the central claim that the pipeline enables recovery where direct methods fail.

    Authors: We agree the evaluation is qualitative. No reference 3D geometry exists for these real non-cooperative objects, so metrics such as Chamfer distance or surface accuracy cannot be computed. Pose errors are likewise unavailable without ground-truth poses. The core demonstration is that direct COLMAP fails entirely to recover poses due to background variation, while segmentation enables pose estimation and reconstruction. We will revise the results section to add explicit ablation-style comparisons: failed pose estimation and empty outputs without segmentation, plus visual mesh differences with and without photometric correction. This is a partial revision, as quantitative metrics against references remain impossible. revision: partial

  2. Referee: [Photometric correction analysis] Section on photometric correction analysis: The statement that 'performance in shadowed regions varies with the illumination characteristics' is presented without quantitative measures of variation, error maps, or controlled comparisons across the STS-119 and H-IIA datasets, leaving the analysis of correction behavior qualitative and difficult to generalize.

    Authors: We acknowledge the analysis is currently qualitative. In revision we will add per-dataset error maps and side-by-side renderings of shadowed regions before and after correction, together with a brief comparison of illumination conditions (dynamic Earth-albedo effects on ISS versus more stable conditions for H-IIA). Quantitative error quantification is not feasible without ground-truth reflectance or geometry, but the added visualizations will improve clarity and generalizability. revision: yes

standing simulated objections not resolved
  • Quantitative metrics (Chamfer distance, surface accuracy, pose error) against reference geometry cannot be provided, as no such reference data exists for the real on-orbit imagery.
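For concreteness, the metric at issue would be trivial to compute if reference geometry existed; the missing piece is the reference scan, not the tooling. A standard symmetric Chamfer distance over sampled point clouds, as one might run against a future ground-truth scan (a sketch with hypothetical pred and ref arrays, not anything from the paper):

```python
# Symmetric Chamfer distance between two point clouds (N x 3, M x 3),
# as would be run against a ground-truth scan if one existed.
import numpy as np
from scipy.spatial import cKDTree


def chamfer_distance(pred: np.ndarray, ref: np.ndarray) -> float:
    d_pred, _ = cKDTree(ref).query(pred)   # nearest reference point per predicted point
    d_ref, _ = cKDTree(pred).query(ref)    # nearest predicted point per reference point
    return float(d_pred.mean() + d_ref.mean())
```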

Circularity Check

0 steps flagged

Empirical pipeline application with no derivation chain or self-referential reductions

full rationale

The paper describes a practical pipeline that combines off-the-shelf components (segmentation for background removal, COLMAP-style pose estimation, per-frame photometric correction, and a neural implicit surface method) and applies them to two public real-world datasets. No mathematical derivations, uniqueness theorems, fitted parameters renamed as predictions, or ansatzes are claimed. The central statements (segmentation is essential; photometric correction behavior varies) are empirical observations from running the pipeline, not reductions to self-defined quantities or self-citations. External public datasets provide independent test cases, so the work is self-contained against external benchmarks. Absence of quantitative metrics is an evaluation limitation, not a circularity issue.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

Abstract provides insufficient detail to enumerate free parameters or invented entities; the approach rests on standard assumptions of neural implicit representations and pose estimation pipelines.

axioms (1)
  • domain assumption Neural implicit functions can represent complex 3D surfaces from 2D images under unknown poses and lighting
    Central to the reconstruction method; invoked implicitly by the choice of neural surface representation.
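To make the axiom concrete: an implicit surface is the zero level set of a function f(x, y, z), and a mesh is recovered by sampling f on a grid and running marching cubes. Methods such as Neuralangelo learn f as an MLP from posed images; the sketch below substitutes an analytic signed distance function for the learned network.

```python
# An implicit surface as the zero level set of a signed distance function.
# Neural methods learn f with an MLP; here an analytic sphere SDF stands in.
import numpy as np
from skimage import measure


def sphere_sdf(pts: np.ndarray, radius: float = 0.5) -> np.ndarray:
    return np.linalg.norm(pts, axis=-1) - radius


axis = np.linspace(-1.0, 1.0, 64)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
volume = sphere_sdf(grid)
# Extract the zero isosurface as a triangle mesh (vertices in voxel units).
verts, faces, normals, _ = measure.marching_cubes(volume, level=0.0)
print(verts.shape, faces.shape)
```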

pith-pipeline@v0.9.0 · 5486 in / 1177 out tokens · 26191 ms · 2026-05-09T20:07:47.776922+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

27 extracted references · 7 canonical work pages · 2 internal anchors

  1. [1] JAXA, CRD2 phase I / ADRAS-J update: Fly-around observation images of space debris released, https://global.jaxa.jp/press/2024/07/20240730-1_e.html, accessed: 2025-04-20 (2024).

  2. [2] Astroscale, Astroscale's ADRAS-J conducts first fly-around observation of space debris, https://www.astroscale.com/en/news/astroscales-adras-j-conducts-first-fly-around-observation-of-space-debris, accessed: 2025-04-20 (2024).

  3. [3] D. Zuehlke, D. Posada, M. Tiwari, T. Henderson, Autonomous satellite detection and tracking using optical flow, arXiv preprint arXiv:2204.07025 (2022).

  4. [4] P. D. Quinn, B. P. R. Gopu, G. M. Nehma, M. Tiwari, Simulation based reward function validation for multi-agent on orbit inspection, in: AIAA SCITECH 2026 Forum, 2026, p. 2042.

  5. [5] K. Hopkins, Space-based 3D reconstruction: Advancing object characterization in orbit, Scout Space Newsroom, accessed: 2026-04-20 (Mar 2025). URL https://www.scout.space/news/3d-reconstruction

  6. [6] A. Issitt, T. Mahendrakar, A. Alvarez, R. T. White, A. Sizemore, On optimal observation orbits for learning gaussian splatting-based 3D models of unknown resident space objects, in: AIAA SCITECH 2025 Forum, 2025, p. 1780.

  7. [7] T. Mahendrakar, R. T. White, M. Wilde, M. Tiwari, SpaceYOLO: A human-inspired model for real-time, on-board spacecraft feature detection, in: 2023 IEEE Aerospace Conference, IEEE, 2023, pp. 01–11.

  8. [8] T. Mahendrakar, R. T. White, M. Tiwari, M. Wilde, Unknown non-cooperative spacecraft characterization with lightweight convolutional neural networks, Journal of Aerospace Information Systems 21 (5) (2024) 455–460.

  9. [9] T. H. Park, S. D'Amico, Rapid abstraction of spacecraft 3D structure from single 2D image, in: AIAA SCITECH 2024 Forum, 2024, p. 2768.

  10. [10] E. Bates, S. D'Amico, Removing ambiguities in concurrent monocular single-shot spacecraft shape and pose estimation using a deep neural network, in: 47th Rocky Mountain AAS Guidance, Navigation and Control Conference, 2025.

  11. [11] P. F. Huc, E. Bates, S. D'Amico, Fast learning of non-cooperative spacecraft 3D models through primitive initialization, in: 2025 AAS/AIAA Astrodynamics Specialist Conference, Boston, Massachusetts, 2025. arXiv:2507.19459, doi:10.48550/arXiv.2507.19459. URL https://arxiv.org/abs/2507.19459

  12. [12] V. M. Nguyen, E. Sandidge, T. Mahendrakar, R. T. White, Characterizing satellite geometry via accelerated 3D gaussian splatting, Aerospace 11 (3) (2024) 183.

  13. [13] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, R. Ng, NeRF: Representing scenes as neural radiance fields for view synthesis, Communications of the ACM 65 (1) (2021) 99–106.

  14. [14] B. Kerbl, G. Kopanas, T. Leimkühler, G. Drettakis, 3D gaussian splatting for real-time radiance field rendering, ACM Transactions on Graphics 42 (4) (July 2023). URL https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

  15. [15] A. Mergy, G. Lecuyer, D. Derksen, D. Izzo, Vision-based neural scene representations for spacecraft, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2002–2011.

  16. [16] B. Caruso, T. Mahendrakar, V. M. Nguyen, R. T. White, T. Steffen, 3D reconstruction of non-cooperative resident space objects using Instant NGP-accelerated NeRF and D-NeRF, arXiv preprint arXiv:2301.09060 (2023).

  17. [17] T. J. Huber, High-fidelity 3D reconstruction of space bodies using machine learning and neural radiance fields, Master's thesis, Florida Institute of Technology (2024). URL https://repository.fit.edu/etd/1442

  18. [18] Z. Li, T. Müller, A. Evans, R. H. Taylor, M. Unberath, M.-Y. Liu, C.-H. Lin, Neuralangelo: High-fidelity neural surface reconstruction, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 8456–8465.

  19. [19] B. P. R. Gopu, T. J. Huber, G. M. Nehma, P. D. Quinn, M. Tiwari, M. Ueckermann, D. Hinckley, C. McKenna, Dynamic scene 3D reconstruction of an uncooperative resident space object, in: AIAA SCITECH 2026 Forum, 2026. doi:10.2514/6.2026-1038

  20. [20] T. H. Park, S. D'Amico, Improved 3D gaussian splatting of unknown spacecraft structure using space environment illumination knowledge, in: 2025 International Conference on Space Robotics (iSpaRo), IEEE, 2025. doi:10.1109/iSpaRo66239.2025.11437016

  21. [21] I. Deutsch, N. Moënne-Loccoz, G. State, Z. Gojcic, PPISP: Physically-plausible compensation and control of photometric variations in radiance field reconstruction (2026). arXiv:2601.18336. URL https://arxiv.org/abs/2601.18336

  22. [22] J. L. Schönberger, J.-M. Frahm, Structure-from-motion revisited, in: Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

  23. [23] J. L. Schönberger, E. Zheng, M. Pollefeys, J.-M. Frahm, Pixelwise view selection for unstructured multi-view stereo, in: European Conference on Computer Vision (ECCV), 2016.

  24. [24] N. Carion, L. Gustafson, Y.-T. Hu, S. Debnath, R. Hu, D. Suris, C. Ryali, K. V. Alwala, H. Khedr, A. Huang, J. Lei, T. Ma, B. Guo, A. Kalla, M. Marks, J. Greer, M. Wang, P. Sun, R. Rädle, T. Afouras, E. Mavroudi, K. Xu, T.-H. Wu, Y. Zhou, L. Momeni, R. Hazra, S. Ding, S. Vaze, F. Porcher, F. Li, S. Li, A. Kamath, H. K. Cheng, P. Dollár, N. Ravi, K. Saenko, et al., SAM 3: Segment Anything with Concepts.

  25. [25] NASA, STS 119 HD ISS Fly Around Sped Up, YouTube, accessed: 2025-04-20 (2009). URL https://www.youtube.com/watch?v=bXNH7whveGk

  26. [26] Astroscale, Historic Fly-Around of Space Debris by ADRAS-J on July 15 (Telephoto), YouTube, accessed: 2025-04-20 (2024). URL https://www.youtube.com/shorts/bKKkkX-7fn8

  27. [27] P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, G. Ranzuglia, MeshLab: an Open-Source Mesh Processing Tool, in: V. Scarano, R. De Chiara, U. Erra (Eds.), Eurographics Italian Chapter Conference, The Eurographics Association, 2008. doi:10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2008/129-136.