pith. machine review for the scientific record.

arxiv: 2604.13235 · v1 · submitted 2026-04-14 · 💻 cs.CV

Recognition: unknown

Neural 3D Reconstruction of Planetary Surfaces from Descent-Phase Wide-Angle Imagery

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 14:45 UTC · model grok-4.3

classification 💻 cs.CV
keywords neural 3D reconstruction · planetary surfaces · descent imagery · height field · multi-view stereo · digital elevation model · lunar terrain · Mars terrain

The pith

An explicit neural height field enables 3D reconstruction of planetary surfaces from wide-angle descent imagery with broader spatial coverage than traditional stereo methods achieve.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines neural techniques for building digital elevation models from wide-angle camera images recorded while a spacecraft descends toward a planet. Such sequences present strong radial distortion and minimal parallax because the camera faces mostly downward and moves vertically. The central innovation is an explicit neural height field that incorporates the prior that surfaces are continuous, smooth, and solid. On simulated high-fidelity lunar and Martian descent sequences, this representation produces greater spatial coverage than conventional multi-view stereo while preserving acceptable accuracy. The result matters because descent imagery could supply high-resolution terrain data at low additional cost for studying planetary geology.

Core claim

Modern neural reconstruction methods provide a strong and competitive alternative to traditional multi-view stereo for planetary descent imaging. A novel approach incorporates an explicit neural height field representation, which provides a strong prior since planetary surfaces are generally continuous, smooth, solid, and free from floating objects. Experiments on simulated descent sequences over high-fidelity lunar and Mars terrains demonstrate that the proposed approach achieves increased spatial coverage while maintaining satisfactory estimation accuracy.

What carries the argument

The explicit neural height field representation, which encodes the continuity, smoothness, and solidity of planetary surfaces as a domain-specific prior to constrain reconstruction under radial distortion and limited parallax.
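The paper does not spell out its parameterization here, but the idea carries to a minimal sketch: an explicit height field is a function f_θ(x, y) → z queried per ground coordinate, and the smoothness prior can be expressed as a penalty on local elevation gradients. The architecture and penalty below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class HeightField:
    """Minimal explicit height field z = f_theta(x, y): a small MLP mapping
    ground-plane coordinates to one elevation per point. By construction the
    surface is single-valued (no overhangs or floating objects), which is the
    domain prior the paper leans on. Hypothetical architecture."""

    def __init__(self, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 1.0, (2, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), (hidden, 1))
        self.b2 = np.zeros(1)

    def query(self, xy):
        """xy: (N, 2) ground coordinates -> (N,) elevations."""
        h = np.tanh(xy @ self.W1 + self.b1)
        return (h @ self.W2 + self.b2).ravel()

def smoothness_penalty(field, xy, eps=1e-2):
    """Finite-difference surrogate for a smoothness prior: penalizes large
    local elevation gradients, encoding the 'continuous, smooth' assumption."""
    z = field.query(xy)
    gx = (field.query(xy + np.array([eps, 0.0])) - z) / eps
    gy = (field.query(xy + np.array([0.0, eps])) - z) / eps
    return float(np.mean(gx**2 + gy**2))

field = HeightField()
pts = np.random.default_rng(1).uniform(-1.0, 1.0, (128, 2))
z = field.query(pts)
print(z.shape, smoothness_penalty(field, pts))
```

In a training loop this penalty would be added to a photometric reconstruction loss; the point of the sketch is that the representation, not the loss, rules out non-terrain geometry.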

If this is right

  • Neural reconstruction yields greater spatial coverage than multi-view stereo under the geometric constraints of nadir-facing descent imagery.
  • Accuracy remains at satisfactory levels on high-fidelity simulated lunar and Martian terrains.
  • The explicit height field prior overcomes the limited depth range and reduced fidelity that affect conventional stereo methods.
  • Descent-phase wide-angle imagery becomes a viable low-cost source for high-resolution digital elevation models.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same continuity prior could support reconstruction of other solid natural surfaces, such as terrestrial terrain from drone or aircraft descent sequences.
  • Validation against real mission data would determine whether the simulated results translate to actual spacecraft imagery containing minor surface irregularities.
  • The method could be extended by relaxing the height field smoothness constraint in localized regions to accommodate small boulders or sharp crater rims.

Load-bearing premise

Planetary surfaces are continuous, smooth, solid, and free from floating objects, allowing the neural height field to serve as a reliable domain prior.

What would settle it

Running the method on real descent imagery from an actual planetary landing mission and comparing the resulting elevation model against independent ground-truth measurements such as laser altimetry data.
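Such a comparison typically reduces to two numbers per elevation model: an error metric against the reference elevations and a coverage fraction. A minimal sketch, assuming RMSE and coverage at a relative-error threshold; the paper's exact metric definitions may differ.

```python
import numpy as np

def dem_metrics(pred, truth, rel_err_thresh=0.1):
    """Compare a reconstructed DEM against ground truth (e.g. laser altimetry
    resampled to the same grid). pred may contain NaN where reconstruction
    failed; coverage counts cells that are both reconstructed and within the
    relative-error threshold. Definitions are illustrative assumptions."""
    valid = ~np.isnan(pred)
    err = pred[valid] - truth[valid]
    rmse = float(np.sqrt(np.mean(err**2)))
    # Relative error taken w.r.t. the ground-truth elevation range of the scene.
    scale = float(truth.max() - truth.min())
    within = np.abs(err) <= rel_err_thresh * scale
    coverage = float(within.sum() / truth.size)
    return {"rmse": rmse, "coverage": coverage}

truth = np.linspace(0.0, 10.0, 100).reshape(10, 10)
pred = truth + 0.1                      # small constant bias
pred[0, :3] = np.nan                    # three unreconstructed cells
m = dem_metrics(pred, truth)
print(m)  # rmse ≈ 0.1, coverage = 0.97
```

Reporting both numbers matters because a method can trade one for the other: masking uncertain cells improves RMSE while shrinking coverage, which is exactly the trade-off the paper claims to balance.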

Figures

Figures reproduced from arXiv: 2604.13235 by Divya M. Persaud, George Brydon, John H. Williamson, Melonie de Almeida, Paul Henderson.

Figure 1
First: height map and x-y camera center (green cross) for the lunar scene. Remaining: example simulated descent views toward the center of the lunar scene, including the top-most and bottom-most views.
Figure 2
First: height map and x-y camera center (green cross) for the Mars scene. Remaining: example simulated descent views toward the top-right corner, including the top-most and bottom-most views.
Figure 3
Fisheye ray casting: maps each pixel through a wide-angle lens to a 3D ray using the lens's radial distortion model.
Figure 4
Our method uses two scene representations that each …
Figure 5
Qualitative comparison of DEM reconstructions from simulated lunar fisheye descent imagery against a high-resolution mesh.
Figure 6
Qualitative comparison of reconstructed DEMs from simulated Mars fisheye descent imagery, using a high-resolution mesh as ground truth.
Figure 7
Qualitative comparisons of DEMs using simulated lunar fisheye descent images, with a high-resolution mesh as ground truth.
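The fisheye ray casting of Figure 3 can be sketched for concreteness. The lens model below is the standard equidistant fisheye (r = f·θ), used here as a stand-in since the paper's specific radial distortion model is not reproduced on this page.

```python
import numpy as np

def fisheye_pixel_to_ray(u, v, cx, cy, f):
    """Map a pixel to a 3D viewing ray under an equidistant fisheye model
    (r = f * theta, theta = angle from the optical axis). Returns a unit
    direction in the camera frame, with z along the optical axis.
    Illustrative lens model, not the paper's calibrated one."""
    du, dv = u - cx, v - cy
    r = np.hypot(du, dv)                 # radial distance from principal point
    if r == 0.0:
        return np.array([0.0, 0.0, 1.0])
    theta = r / f                        # invert the projection r = f * theta
    s = np.sin(theta) / r                # scale image-plane offset into 3D
    return np.array([du * s, dv * s, np.cos(theta)])

d = fisheye_pixel_to_ray(320.0, 240.0, 320.0, 240.0, 300.0)
print(d)  # principal point looks straight down the optical axis
```

The key property for descent imaging is that pixels far from the principal point map to rays at large off-axis angles, so a single nadir-facing frame already sweeps a wide ground footprint.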
Original abstract

Digital elevation modeling of planetary surfaces is essential for studying past and ongoing geological processes. Wide-angle imagery acquired during spacecraft descent promises to offer a low-cost option for high-resolution terrain reconstruction. However, accurate 3D reconstruction from such imagery is challenging due to strong radial distortion and limited parallax from vertically descending, predominantly nadir-facing cameras. Conventional multi-view stereo exhibits limited depth range and reduced fidelity under these conditions and also lacks domain-specific priors. We present the first study of modern neural reconstruction methods for planetary descent imaging. We also develop a novel approach that incorporates an explicit neural height field representation, which provides a strong prior since planetary surfaces are generally continuous, smooth, solid, and free from floating objects. This study demonstrates that neural approaches offer a strong and competitive alternative to traditional multi-view stereo (MVS) methods. Experiments on simulated descent sequences over high-fidelity lunar and Mars terrains demonstrate that the proposed approach achieves increased spatial coverage while maintaining satisfactory estimation accuracy.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript introduces the first application of modern neural reconstruction techniques to planetary descent-phase wide-angle imagery. It proposes a novel explicit neural height field representation that encodes the domain prior that planetary surfaces are generally continuous, smooth, solid, and free from floating objects. This prior is used to mitigate the effects of strong radial distortion and limited parallax in nadir-facing descent sequences. The central claim is that the method achieves greater spatial coverage than conventional multi-view stereo while maintaining satisfactory estimation accuracy, as shown in experiments on simulated descent sequences over high-fidelity lunar and Mars terrains.

Significance. If the quantitative results and robustness claims hold, the work would provide a practical low-cost route to high-resolution digital elevation models from a data source that is routinely collected but currently under-utilized because of reconstruction difficulties. The explicit incorporation of a physically motivated height-field prior into a neural pipeline is a clear methodological contribution that could transfer to other constrained imaging settings. The use of high-fidelity simulated terrains for controlled evaluation is a positive aspect of the experimental design.

major comments (3)
  1. [Abstract] Abstract: the claim that the proposed approach 'achieves increased spatial coverage while maintaining satisfactory estimation accuracy' is the central result, yet the abstract supplies no numerical metrics, coverage percentages, RMSE values, or direct MVS comparisons; without these numbers the assertion cannot be evaluated.
  2. [Method] Method section (description of the neural height field): the explicit height-field representation is presented as a 'strong prior' that overcomes limited parallax and distortion precisely because surfaces are 'generally continuous, smooth, solid, and free from floating objects.' This assumption is load-bearing for the performance advantage over MVS, but the manuscript contains no ablation that removes the height-field prior nor any test cases containing overhangs, large boulders, or sharp crater rims that would violate the prior.
  3. [Experiments] Experiments section: the evaluation is performed exclusively on simulated high-fidelity terrains; the manuscript does not report how these terrains were constructed with respect to surface discontinuities or quantify the frequency of prior-violating features, leaving open the possibility that the reported accuracy holds only where the smoothness assumption is already satisfied.
minor comments (2)
  1. [Abstract] The abstract would be strengthened by the inclusion of at least one key quantitative result (e.g., mean depth error or coverage gain) to support the 'satisfactory accuracy' claim.
  2. [Method] Notation for the neural height field (e.g., how the MLP is parameterized and how the height is queried) should be introduced with a short equation or diagram for clarity.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for their constructive review and positive evaluation of the work's significance. We address each major comment in turn below, indicating the revisions we will incorporate.

Point-by-point responses
  1. Referee: [Abstract] Abstract: the claim that the proposed approach 'achieves increased spatial coverage while maintaining satisfactory estimation accuracy' is the central result, yet the abstract supplies no numerical metrics, coverage percentages, RMSE values, or direct MVS comparisons; without these numbers the assertion cannot be evaluated.

    Authors: We agree that the abstract should include concrete quantitative support for the central claim. In the revised manuscript we will insert the key experimental metrics, specifically the percentage increase in spatial coverage relative to MVS and the corresponding RMSE values on the lunar and Mars sequences. revision: yes

  2. Referee: [Method] Method section (description of the neural height field): the explicit height-field representation is presented as a 'strong prior' that overcomes limited parallax and distortion precisely because surfaces are 'generally continuous, smooth, solid, and free from floating objects.' This assumption is load-bearing for the performance advantage over MVS, but the manuscript contains no ablation that removes the height-field prior nor any test cases containing overhangs, large boulders, or sharp crater rims that would violate the prior.

    Authors: The height-field prior is indeed central to the claimed advantage. We will add an ablation that disables the explicit height-field constraint (reducing the representation to a general neural field) and report the resulting degradation in coverage and accuracy. We will also introduce controlled synthetic test sequences containing overhangs, boulders, and sharp crater rims to quantify performance when the prior is violated. revision: yes

  3. Referee: [Experiments] Experiments section: the evaluation is performed exclusively on simulated high-fidelity terrains; the manuscript does not report how these terrains were constructed with respect to surface discontinuities or quantify the frequency of prior-violating features, leaving open the possibility that the reported accuracy holds only where the smoothness assumption is already satisfied.

    Authors: We will expand the experiments section with a detailed description of the terrain-generation pipeline, including the modeling of surface discontinuities, and will report the measured frequency of boulders, crater rims, and other prior-violating features across the evaluated descent sequences. revision: yes
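The prior-violating test terrain promised in this response could be built as a smooth base field plus sharp local features. A minimal sketch, with all shapes and scales chosen for illustration rather than taken from the paper:

```python
import numpy as np

def smooth_terrain(n=128, seed=0):
    """Smooth base height field: a sum of low-frequency sinusoids on [0,1]^2.
    Satisfies the continuity/smoothness prior by construction."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    z = np.zeros((n, n))
    for _ in range(4):
        fx, fy = rng.uniform(0.5, 2.0, 2)
        px, py = rng.uniform(0.0, 2.0 * np.pi, 2)
        z += 0.1 * np.sin(2 * np.pi * fx * X + px) * np.sin(2 * np.pi * fy * Y + py)
    return X, Y, z

def add_boulder(X, Y, z, cx, cy, radius, height):
    """Spherical-cap boulder: a sharp, prior-violating bump whose slope is
    unbounded at its rim. Illustrative feature generator."""
    d2 = (X - cx) ** 2 + (Y - cy) ** 2
    cap = height * np.sqrt(np.clip(1.0 - d2 / radius**2, 0.0, None))
    return z + cap

X, Y, z = smooth_terrain()
z2 = add_boulder(X, Y, z, 0.5, 0.5, 0.05, 0.2)
print(float((z2 - z).max()))  # boulder apex raises the surface by ~0.2
```

Sweeping the boulder density and rim sharpness would give the controlled violation-rate axis the referee asked for, with reconstruction error plotted against it.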

Circularity Check

0 steps flagged

No significant circularity; derivation relies on external domain priors and independent experiments

Full rationale

The paper introduces an explicit neural height field as a domain prior justified by general assumptions about planetary surfaces (continuous, smooth, solid, free from floating objects). No equations, self-referential predictions, fitted parameters renamed as outputs, or self-citation chains are described that reduce the central claims to the inputs by construction. Experiments on simulated high-fidelity terrains provide independent evaluation, and the method is presented as a competitive alternative to MVS without load-bearing self-definitional steps. The assessed circularity score of 1.0 is consistent with this: the approach rests on verifiable external assumptions rather than circular fitting.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 1 invented entity

The central claim depends on the domain assumption that planetary surfaces lack floating objects and are smooth enough for a height field prior to resolve limited-parallax reconstruction; neural network weights are implicitly fitted but not quantified here.

free parameters (1)
  • neural network weights
    Standard trainable parameters in the height field representation, fitted to simulated descent sequences.
axioms (1)
  • Domain assumption: Planetary surfaces are generally continuous, smooth, solid, and free from floating objects.
    Invoked to justify why an explicit neural height field supplies a strong prior that improves reconstruction under limited parallax and distortion.
invented entities (1)
  • Explicit neural height field representation (no independent evidence)
    purpose: To encode surface geometry directly in a neural network that enforces continuity and solidity priors.
    Presented as the key novel component; no independent falsifiable prediction outside the simulated experiments is described.

pith-pipeline@v0.9.0 · 5477 in / 1358 out tokens · 42121 ms · 2026-05-10T14:45:56.995033+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

40 extracted references · 2 canonical work pages

  1. Agisoft Metashape: Intelligent photogrammetry software. https://www.agisoft.com/. Accessed 2026-02-27.
  2. Dante Abate, Kyriakos Themistocleous, and D. Hadjimitsis. The application of neural radiance fields (NeRF) in generating digital surface models from UAV imagery. In IGARSS 2024 – IEEE International Geoscience and Remote Sensing Symposium, pages 10228–10231. IEEE, 2024.
  3. Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, and Pratul P. Srinivasan. Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5855–5864, 2021.
  4. Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, and Peter Hedman. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5470–5479, 2022.
  5. George Brydon. Image simulation for camera development: Python Image Simulator for Planetary Exploration (SImPLy). Space Science and Technology, 5:0319, 2025.
  6. G. Brydon, D. M. Persaud, and G. H. Jones. Planetary topography measurement by descent stereophotogrammetry. Mullard Space Science Laboratory, University College London, 2023.
  7. Adam Dai, Shubh Gupta, and Grace Gao. Neural elevation models for terrain mapping and path planning. arXiv preprint arXiv:2405.15227, 2024.
  8. I. J. Daubar, A. G. Hayes, G. C. Collins, et al. Planned geological investigations of the Europa Clipper mission. Space Science Reviews, 220:18, 2024.
  9. Frank Dellaert, Steven M. Seitz, Charles E. Thorpe, et al. EM, MCMC, and chain flipping for structure from motion with unknown correspondence. Machine Learning, 50:45–71, 2003.
  10. Alexander Flisch, Joachim Wirth, Robert Zanini, Michael Breitenstein, Adrian Rudin, Florian Wendt, Franz Mnich, and Roland Golz. Industrial computed tomography in reverse engineering applications. DGZfP-Proceedings BB, 4(7):45–53, 1999.
  11. Werner Göbel, Björn M. Kampa, and Fritjof Helmchen. Imaging cellular network dynamics in three dimensions using fast 3D laser scanning. Nature Methods, 4(1):73–79, 2007.
  12. Margaret Hansen, Caleb Adams, Terrence Fong, and David Wettergreen. Analyzing the effectiveness of neural radiance fields for geometric modeling of lunar terrain. In 2024 IEEE Aerospace Conference, pages 1–12. IEEE, 2024.
  13. B. M. Hynek. Extraterrestrial digital elevation models: Constraints on planetary evolution, with focus on Mars. International Journal of Remote Sensing, 31(23):6259–6274, 2010.
  14. Francesco Isgro, Francesca Odone, and Alessandro Verri. An open system for 3D data acquisition from multiple sensors. In Seventh International Workshop on Computer Architecture for Machine Perception (CAMP'05), pages 52–57. IEEE, 2005.
  15. Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3D Gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4), 2023.
  16. Karl Kraus and Norbert Pfeifer. Determination of terrain models in wooded areas with airborne laser scanner data. ISPRS Journal of Photogrammetry and Remote Sensing, 53(4):193–203, 1998.
  17. Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021.
  18. Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. ACM Transactions on Graphics (TOG), 41(4):1–15, 2022.
  19. Divya M. Persaud. On multi-resolution 3D orbital imagery and visualisation for Mars geological analysis. UCL Discovery, University College London repository, 2022.
  20. Cynthia B. Phillips, Jennifer E. C. Scully, Marissa E. Cameron, Kathleen L. Craft, Cyril Grima, Divya M. Persaud, and the Europa Clipper Reconnaissance Focus Group. A reconnaissance strategy for landing on Europa, based on Europa Clipper data. In 55th Lunar and Planetary Science Conference (LPI Contrib. 3040), page 1672, 2024.
  21. Srikumar Ramalingam, Suresh K. Lodha, and Peter Sturm. A generic structure-from-motion framework. Computer Vision and Image Understanding, 103(3):218–228, 2006.
  22. C. Rocchini, Paolo Cignoni, Claudio Montani, Paolo Pingi, and Roberto Scopigno. A low cost 3D scanner based on structured light. In Computer Graphics Forum, pages 299–308. Wiley Online Library, 2001.
  23. Ewelina Rupnik, Mehdi Daakir, and Marc Pierrot Deseilligny. MicMac: a free, open-source solution for photogrammetry. Open Geospatial Data, Software and Standards, 2(1):14, 2017.
  24. H. Sato, M. S. Robinson, B. Hapke, B. W. Denevi, and A. K. Boyd. Resolved Hapke parameter maps of the Moon. Journal of Geophysical Research: Planets, 119(8):1775–1805, 2014.
  25. F. Scholten, J. Oberst, K.-D. Matz, T. Roatsch, M. Wählisch, E. J. Speyerer, and M. S. Robinson. GLD100: The near-global lunar 100 m raster DTM from LROC WAC stereo image data. Journal of Geophysical Research: Planets, 117(E12):E00H17, 2012.
  26. Johannes L. Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4104–4113, 2016.
  27. Brent Schwarz. Mapping the world in 3D. Nature Photonics, 4(7):429–430, 2010.
  28. Steven M. Seitz, Brian Curless, James Diebel, Daniel Scharstein, and Richard Szeliski. A comparison and evaluation of multi-view stereo reconstruction algorithms. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), pages 519–528. IEEE, 2006.
  29. Laurence A. Soderblom, Martin G. Tomasko, Brent A. Archinal, Tammy L. Becker, Michael W. Bushroe, Debbie A. Cook, Lyn R. Doose, Donna M. Galuszka, Trent M. Hare, Elpitha Howington-Kraus, et al. Topography and geomorphology of the Huygens landing site on Titan. Planetary and Space Science, 55(13):2015–2024, 2007.
  30. Peter Sturm. Multi-view geometry for general camera models. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), pages 206–212. IEEE, 2005.
  31. Peter Sturm, Srikumar Ramalingam, and Suresh Lodha. On calibration, structure from motion and multi-view geometry for generic camera models. In Imaging Beyond the Pinhole Camera, pages 87–105. Springer, 2006.
  32. R. Szeliski. A multi-view approach to motion and stereo. In Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 157–163, Vol. 1, 1999.
  33. Matthew Tancik, Ethan Weber, Evonne Ng, Ruilong Li, Brent Yi, Justin Kerr, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, and Angjoo Kanazawa. Nerfstudio: A modular framework for neural radiance field development. In ACM SIGGRAPH 2023 Conference Proceedings, 2023.
  34. Carlo Tomasi and Takeo Kanade. Shape and motion from image streams: A factorization method. Proceedings of the National Academy of Sciences of the United States of America, 90(21):9795–9802, 1993.
  35. Shimon Ullman. The interpretation of structure from motion. Proceedings of the Royal Society of London. Series B. Biological Sciences, 203(1153):405–426, 1979.
  36. Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, and Wenping Wang. NeuS: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
  37. Robert J. Woodham. Photometric method for determining surface orientation from multiple images. Optical Engineering, 19(1):139–144, 1980.
  38. Linglong Zhou, Guoxin Wu, Yunbo Zuo, Xuanyu Chen, and Hongle Hu. A comprehensive review of vision-based 3D reconstruction methods. Sensors, 24(7):2314, 2024.
  39. Matthias Zwicker, Hanspeter Pfister, Jeroen Van Baar, and Markus Gross. EWA volume splatting. In Proceedings Visualization 2001 (VIS'01), pages 29–538. IEEE, 2001.