From Images2Mesh: A 3D Surface Reconstruction Pipeline for Non-Cooperative Space Objects
Pith reviewed 2026-05-09 20:07 UTC · model grok-4.3
The pith
A pipeline reconstructs 3D surfaces of non-cooperative space objects from real monocular on-orbit inspection videos by first removing backgrounds and correcting exposure.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that a neural implicit surface reconstruction pipeline can be applied to real monocular inspection imagery of non-cooperative space objects when preceded by segmentation-based background removal and photometric exposure correction, as demonstrated on STS-119 ISS footage and H-IIA rocket upper stage imagery.
What carries the argument
The preprocessing sequence: segmentation-based background removal (which enables reliable COLMAP pose estimation), followed by photometric correction of per-frame exposure variations, both applied before neural implicit surface reconstruction.
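As a concrete illustration of the photometric step (a minimal sketch, not the authors' implementation — `normalize_exposure`, its arguments, and the gain model are all assumptions), per-frame exposure variation can be compensated by scaling each frame so its mean foreground intensity matches a sequence-wide target:

```python
import numpy as np

def normalize_exposure(frames, masks, target_mean=None):
    """Scale each frame so its mean intensity over the foreground mask
    matches a sequence-wide target, compensating per-frame exposure
    changes. frames: list of float arrays in [0, 1]; masks: list of
    boolean arrays of the same shape."""
    means = np.array([f[m].mean() for f, m in zip(frames, masks)])
    if target_mean is None:
        target_mean = means.mean()  # default: the sequence average
    out = []
    for f, mu in zip(frames, means):
        gain = target_mean / max(mu, 1e-6)  # guard against dark frames
        out.append(np.clip(f * gain, 0.0, 1.0))
    return out
```

A real pipeline would likely estimate gains more robustly (e.g., against a reference frame or jointly with reconstruction), but the sketch captures the idea of removing exposure variation before training.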
If this is right
- Camera pose estimation succeeds on real footage where direct processing fails due to background variation.
- Performance in shadowed regions varies with the illumination characteristics of the input footage.
- The pipeline operates on publicly released mission imagery without requiring known poses or laboratory conditions.
- It supplies geometry and structural condition data needed for active debris removal and on-orbit servicing planning.
Where Pith is reading between the lines
- If the preprocessing generalizes, reconstruction could become feasible from a larger archive of existing satellite inspection videos.
- Future inspection missions might rely less on dedicated multi-camera or known-pose hardware.
- Similar segmentation-plus-correction steps could be tested for reconstructing objects under other uncontrolled outdoor or orbital lighting conditions.
Load-bearing premise
The assumption that segmentation and photometric correction steps will generalize across varied on-orbit illumination and background conditions without introducing artifacts that degrade the final surface reconstruction.
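The segmentation step this premise rests on can be sketched minimally: given a binary foreground mask from any segmentation model (e.g., one of the SAM family the paper cites), background pixels are replaced with a constant before feature extraction, so pose estimation never sees the varying Earth/space background. The function name and fill value here are illustrative, not from the paper:

```python
import numpy as np

def mask_background(frame, mask, fill=0.0):
    """Replace background pixels with a constant so that feature
    detection (and hence COLMAP pose estimation) only responds to the
    object. frame: float image array; mask: boolean foreground mask."""
    out = frame.copy()  # leave the input frame untouched
    out[~mask] = fill
    return out
```

Artifacts of the kind the premise worries about would enter exactly here: an imperfect mask either deletes object texture or leaves background features that corrupt matching.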
What would settle it
Running the pipeline unchanged on additional independent on-orbit inspection videos and observing whether pose estimation succeeds and the resulting surfaces remain free of artifacts from the correction steps.
Original abstract
On-orbit inspection imagery is crucial as it enables characterization of non-cooperative resident space objects, providing the geometry and structural condition essential for active debris removal and on-orbit servicing mission planning. However, most existing neural implicit surface reconstruction methods have been confined to synthetic or hardware-in-the-loop data with known camera poses and controlled illumination. In this work, we present a pipeline for neural implicit surface reconstruction of non-cooperative space objects from monocular inspection imagery. We demonstrate it on publicly released ISS inspection footage from the STS-119 mission and publicly released on-orbit inspection footage of an H-IIA rocket upper stage. We find that segmentation-based background removal is essential for successful camera pose estimation from real on-orbit footage, where background variation between frames caused direct processing to fail entirely. We further incorporate photometric correction of per-frame exposure variations and analyze its behavior across datasets, finding that performance in shadowed regions varies with the illumination characteristics of the input footage.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper presents a practical pipeline for neural implicit 3D surface reconstruction of non-cooperative space objects from monocular on-orbit inspection imagery. It combines segmentation-based background removal (shown to be essential for COLMAP-style pose estimation on real footage), per-frame photometric correction for exposure variations, and a neural implicit surface method. The approach is demonstrated on two public real datasets (STS-119 ISS inspection footage and H-IIA rocket upper stage), with success claimed via visual inspection of reconstructed meshes and renderings where direct application of reconstruction methods fails.
Significance. If the central claim holds, the work would be significant for space situational awareness, active debris removal, and on-orbit servicing, as it moves neural implicit reconstruction from synthetic/controlled settings to real monocular on-orbit data. The explicit identification of segmentation and photometric correction as critical preprocessing steps offers actionable guidance. The use of publicly released real footage is a strength that enables reproducibility and community follow-up.
major comments (2)
- [Results section] Results section (and abstract): The claim of 'successful demonstration' and that 'segmentation-based background removal is essential' rests entirely on qualitative visual inspection of meshes and renderings on the two real datasets, with no quantitative metrics (Chamfer distance, normal consistency, pose estimation error, or surface accuracy against any reference geometry) and no ablation studies isolating the effect of segmentation or photometric correction on final reconstruction fidelity. This is load-bearing for the central claim that the pipeline enables recovery where direct methods fail.
- [Photometric correction analysis] Section on photometric correction analysis: The statement that 'performance in shadowed regions varies with the illumination characteristics' is presented without quantitative measures of variation, error maps, or controlled comparisons across the STS-119 and H-IIA datasets, leaving the analysis of correction behavior qualitative and difficult to generalize.
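For reference, the Chamfer distance the report asks for is simple to compute once any reference point cloud exists; a brute-force sketch (illustrative only — a practical evaluation would use a KD-tree for large clouds):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and
    b (M, 3): mean nearest-neighbor distance in both directions.
    O(N*M) memory and time; adequate for evaluation-sized clouds."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

The rebuttal's point stands: this metric is only available when reference geometry exists, which it does not for these targets.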
minor comments (2)
- [Methods] The manuscript should specify the exact neural implicit surface method (e.g., NeuS, VolSDF) and all hyperparameters used for training and inference to improve reproducibility.
- [Figures] Figure captions and legends could be expanded to describe what specific visual features (e.g., handling of shadows or background removal artifacts) the reader is intended to observe in the reconstructed meshes.
Simulated Author's Rebuttal
Thank you for the constructive review. We address the major comments point by point below, with honest acknowledgment of evaluation constraints inherent to real on-orbit data.
Point-by-point responses
-
Referee: [Results section] Results section (and abstract): The claim of 'successful demonstration' and that 'segmentation-based background removal is essential' rests entirely on qualitative visual inspection of meshes and renderings on the two real datasets, with no quantitative metrics (Chamfer distance, normal consistency, pose estimation error, or surface accuracy against any reference geometry) and no ablation studies isolating the effect of segmentation or photometric correction on final reconstruction fidelity. This is load-bearing for the central claim that the pipeline enables recovery where direct methods fail.
Authors: We agree the evaluation is qualitative. No reference 3D geometry exists for these real non-cooperative objects, so metrics such as Chamfer distance or surface accuracy cannot be computed. Pose errors are likewise unavailable without ground-truth poses. The core demonstration is that direct COLMAP fails entirely to recover poses due to background variation, while segmentation enables pose estimation and reconstruction. We will revise the results section to add explicit ablation-style comparisons: failed pose estimation and empty outputs without segmentation, plus visual mesh differences with and without photometric correction. This is a partial revision, as quantitative metrics against references remain impossible. revision: partial
-
Referee: [Photometric correction analysis] Section on photometric correction analysis: The statement that 'performance in shadowed regions varies with the illumination characteristics' is presented without quantitative measures of variation, error maps, or controlled comparisons across the STS-119 and H-IIA datasets, leaving the analysis of correction behavior qualitative and difficult to generalize.
Authors: We acknowledge the analysis is currently qualitative. In revision we will add per-dataset error maps and side-by-side renderings of shadowed regions before and after correction, together with a brief comparison of illumination conditions (dynamic Earth-albedo effects on ISS versus more stable conditions for H-IIA). Quantitative error quantification is not feasible without ground-truth reflectance or geometry, but the added visualizations will improve clarity and generalizability. revision: yes
- Quantitative metrics (Chamfer distance, surface accuracy, pose error) against reference geometry cannot be provided, as no such reference data exists for the real on-orbit imagery.
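The "error maps" the authors promise in revision can be as simple as a per-pixel residual between a rendered view and the observed frame, restricted to the foreground; a minimal sketch (names and masking convention are assumptions, not the paper's method):

```python
import numpy as np

def residual_map(rendered, observed, mask):
    """Per-pixel absolute intensity residual inside the foreground
    mask; background is zeroed out. A simple stand-in for qualitative
    error maps over shadowed regions."""
    err = np.abs(rendered - observed)
    err[~mask] = 0.0
    return err
```

Comparing such maps before and after photometric correction, per dataset, would make the claimed illumination-dependent behavior visible without requiring ground-truth geometry.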
Circularity Check
Empirical pipeline application with no derivation chain or self-referential reductions
Full rationale
The paper describes a practical pipeline that combines off-the-shelf components (segmentation for background removal, COLMAP-style pose estimation, per-frame photometric correction, and a neural implicit surface method) and applies them to two public real-world datasets. No mathematical derivations, uniqueness theorems, fitted parameters renamed as predictions, or ansatzes are claimed. The central statements (segmentation is essential; photometric correction behavior varies) are empirical observations from running the pipeline, not reductions to self-defined quantities or self-citations. External public datasets provide independent test cases, so the work is self-contained against external benchmarks. Absence of quantitative metrics is an evaluation limitation, not a circularity issue.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: neural implicit functions can represent complex 3D surfaces from 2D images under unknown poses and lighting.
Reference graph
Works this paper leans on
- [1] JAXA, CRD2 phase I / ADRAS-J update: Fly-around observation images of space debris released, https://global.jaxa.jp/press/2024/07/20240730-1_e.html, accessed: 2025-04-20 (2024)
- [2] Astroscale, Astroscale’s ADRAS-J conducts first fly-around observation of space debris, https://www.astroscale.com/en/news/astroscales-adras-j-conducts-first-fly-around-observation-of-space-debris, accessed: 2025-04-20 (2024)
- [3] D. Zuehlke, D. Posada, M. Tiwari, T. Henderson, Autonomous satellite detection and tracking using optical flow, arXiv preprint arXiv:2204.07025 (2022)
- [4] P. D. Quinn, B. P. R. Gopu, G. M. Nehma, M. Tiwari, Simulation based reward function validation for multi-agent on orbit inspection, in: AIAA SCITECH 2026 Forum, 2026, p. 2042
- [5] K. Hopkins, Space-based 3D reconstruction: Advancing object characterization in orbit, Scout Space Newsroom, accessed: 2026-04-20 (mar 2025). URL https://www.scout.space/news/3d-reconstruction
- [6] A. Issitt, T. Mahendrakar, A. Alvarez, R. T. White, A. Sizemore, On optimal observation orbits for learning gaussian splatting-based 3d models of unknown resident space objects, in: AIAA SCITECH 2025 Forum, 2025, p. 1780
- [7] T. Mahendrakar, R. T. White, M. Wilde, M. Tiwari, SpaceYOLO: A human-inspired model for real-time, on-board spacecraft feature detection, in: 2023 IEEE Aerospace Conference, IEEE, 2023, pp. 01–11
- [8] T. Mahendrakar, R. T. White, M. Tiwari, M. Wilde, Unknown non-cooperative spacecraft characterization with lightweight convolutional neural networks, Journal of Aerospace Information Systems 21 (5) (2024) 455–460
- [9] T. H. Park, S. D’Amico, Rapid abstraction of spacecraft 3d structure from single 2d image, in: AIAA SCITECH 2024 Forum, 2024, p. 2768
- [10] E. Bates, S. D’Amico, Removing ambiguities in concurrent monocular single-shot spacecraft shape and pose estimation using a deep neural network, in: 47th Rocky Mountain AAS Guidance, Navigation and Control Conference, 2025
- [11] P. F. Huc, E. Bates, S. D’Amico, Fast learning of non-cooperative spacecraft 3d models through primitive initialization, in: 2025 AAS/AIAA Astrodynamics Specialist Conference, Boston, Massachusetts, 2025. arXiv:2507.19459, doi:10.48550/arXiv.2507.19459. URL https://arxiv.org/abs/2507.19459
- [12] V. M. Nguyen, E. Sandidge, T. Mahendrakar, R. T. White, Characterizing satellite geometry via accelerated 3d gaussian splatting, Aerospace 11 (3) (2024) 183
- [13] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, R. Ng, NeRF: Representing scenes as neural radiance fields for view synthesis, Communications of the ACM 65 (1) (2021) 99–106
- [14] B. Kerbl, G. Kopanas, T. Leimkühler, G. Drettakis, 3D gaussian splatting for real-time radiance field rendering, ACM Transactions on Graphics 42 (4) (July 2023). URL https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/
- [15] A. Mergy, G. Lecuyer, D. Derksen, D. Izzo, Vision-based neural scene representations for spacecraft, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2002–2011
- [16]
- [17] T. J. Huber, High-fidelity 3D reconstruction of space bodies using machine learning and neural radiance fields, Master’s thesis, Florida Institute of Technology (2024). URL https://repository.fit.edu/etd/1442
- [18] Z. Li, T. Müller, A. Evans, R. H. Taylor, M. Unberath, M.-Y. Liu, C.-H. Lin, Neuralangelo: High-fidelity neural surface reconstruction, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 8456–8465
- [19] B. P. R. Gopu, T. J. Huber, G. M. Nehma, P. D. Quinn, M. Tiwari, M. Ueckermann, D. Hinckley, C. McKenna, Dynamic scene 3d reconstruction of an uncooperative resident space object, in: AIAA SCITECH 2026 Forum, 2026. doi:10.2514/6.2026-1038
- [20] T. H. Park, S. D’Amico, Improved 3d gaussian splatting of unknown spacecraft structure using space environment illumination knowledge, in: 2025 International Conference on Space Robotics (iSpaRo), IEEE, 2025. doi:10.1109/iSpaRo66239.2025.11437016
- [21] I. Deutsch, N. Moënne-Loccoz, G. State, Z. Gojcic, PPISP: Physically-plausible compensation and control of photometric variations in radiance field reconstruction (2026). arXiv:2601.18336. URL https://arxiv.org/abs/2601.18336
- [22] J. L. Schönberger, J.-M. Frahm, Structure-from-motion revisited, in: Conference on Computer Vision and Pattern Recognition (CVPR), 2016
- [23] J. L. Schönberger, E. Zheng, M. Pollefeys, J.-M. Frahm, Pixelwise view selection for unstructured multi-view stereo, in: European Conference on Computer Vision (ECCV), 2016
- [24] N. Carion, L. Gustafson, Y.-T. Hu, S. Debnath, R. Hu, D. Suris, C. Ryali, K. V. Alwala, H. Khedr, A. Huang, J. Lei, T. Ma, B. Guo, A. Kalla, M. Marks, J. Greer, M. Wang, P. Sun, R. Rädle, T. Afouras, E. Mavroudi, K. Xu, T.-H. Wu, Y. Zhou, L. Momeni, R. Hazra, S. Ding, S. Vaze, F. Porcher, F. Li, S. Li, A. Kamath, H. K. Cheng, P. Dollár, N. Ravi, K. Saenko..., SAM 3: Segment Anything with Concepts (2025)
- [25] NASA, STS 119 HD ISS Fly Around Sped Up, YouTube, accessed: 2025-04-20 (2009). URL https://www.youtube.com/watch?v=bXNH7whveGk
- [26] Astroscale, Historic Fly-Around of Space Debris by ADRAS-J on July 15 (Telephoto), YouTube, accessed: 2025-04-20 (2024). URL https://www.youtube.com/shorts/bKKkkX-7fn8
- [27] P. Cignoni, M. Callieri, M. Corsini, M. Dellepiane, F. Ganovelli, G. Ranzuglia, MeshLab: an Open-Source Mesh Processing Tool, in: V. Scarano, R. De Chiara, U. Erra (Eds.), Eurographics Italian Chapter Conference, The Eurographics Association, 2008. doi:10.2312/LocalChapterEvents/ItalChap/ItalianChapConf2008/129-136