SwiftGS: Episodic Priors for Immediate Satellite Surface Recovery
Recognition: 2 Lean theorem links
Pith reviewed 2026-05-15 08:57 UTC · model grok-4.3
The pith
SwiftGS reconstructs 3D satellite surfaces in one forward pass by predicting decoupled Gaussian primitives and a lightweight SDF.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
SwiftGS reconstructs 3D surfaces in a single forward pass by predicting geometry-radiation-decoupled Gaussian primitives together with a lightweight SDF, replacing expensive per-scene fitting with episodic training that captures transferable priors. The model couples a differentiable physics graph for projection, illumination, and sensor response with spatial gating that blends sparse Gaussian detail and global SDF structure, and incorporates semantic-geometric fusion, conditional lightweight task heads, and multi-view supervision from a frozen geometric teacher under an uncertainty-aware multi-task loss.
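The "uncertainty-aware multi-task loss" is not specified anywhere in the available text. One common form it may resemble is homoscedastic uncertainty weighting (Kendall et al. style), where each task loss is scaled by a learned log-variance. A minimal sketch, assuming that form; the function name and parameterization are ours, not the paper's:

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses with learned uncertainty weights: each loss
    L_i is scaled by exp(-s_i) and regularized by adding s_i, where
    s_i = log(sigma_i^2) is a learned parameter (Kendall-style weighting)."""
    return sum(math.exp(-s) * loss + s
               for loss, s in zip(task_losses, log_vars))
```

With all log-variances at zero this reduces to a plain sum of the task losses; raising a task's log-variance down-weights that task while the additive `s_i` term keeps the variances from growing without bound.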
What carries the argument
Hybrid representation of geometry-radiation-decoupled Gaussian primitives combined with a lightweight signed distance function, trained episodically and rendered through a differentiable physics graph with spatial gating.
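The spatial gating that blends the two representations is described only at this level of abstraction. A minimal sketch of one plausible per-point form, assuming a VolSDF-style logistic conversion from signed distance to density and a sigmoid gate; both choices are our assumptions, not the paper's:

```python
import numpy as np

def gated_density(sdf_value, gaussian_density, gate_logit, beta=10.0):
    """Blend global SDF structure with sparse Gaussian detail at a 3D point.
    The signed distance is converted to a volume density via a logistic
    falloff (a common VolSDF-style choice); a sigmoid gate in [0, 1]
    interpolates between the Gaussian and SDF density sources."""
    sigma_sdf = beta / (1.0 + np.exp(beta * np.asarray(sdf_value)))
    gate = 1.0 / (1.0 + np.exp(-np.asarray(gate_logit)))
    return gate * np.asarray(gaussian_density) + (1.0 - gate) * sigma_sdf
```

A strongly positive gate logit recovers pure Gaussian density (sharp local detail), a strongly negative one recovers the smooth SDF-derived density (global structure).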
If this is right
- Zero-shot DSM reconstruction and view-consistent rendering become feasible on new inputs at greatly reduced cost.
- Optional compact calibration allows quick adaptation while retaining the single-pass speed.
- The hybrid Gaussian-SDF structure with physics-aware rendering improves accuracy over pure Gaussian or pure SDF baselines.
- Ablations confirm that episodic meta-training is necessary for the observed transfer performance.
Where Pith is reading between the lines
- The same episodic training approach could be tested on aerial or drone imagery to check whether the priors generalize beyond satellite sensors.
- If the priors prove robust, archives of historical satellite images could be processed in batch without repeated optimization runs.
- Extending the conditional task heads might allow the model to output additional surface attributes such as material labels in the same forward pass.
Load-bearing premise
Priors learned through episodic meta-training on the training distribution transfer zero-shot to new scenes, illumination conditions, and sensor types without per-scene optimization or significant accuracy loss.
What would settle it
Run the model zero-shot on a held-out satellite dataset from an unseen sensor and illumination condition; if the resulting DSM errors exceed those of standard per-scene Gaussian optimization on the same data, the central claim does not hold.
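The DSM error metrics this test would compare are standard. A minimal sketch of the comparison quantities, masking invalid pixels; the function name is illustrative:

```python
import numpy as np

def dsm_errors(pred_dsm, gt_dsm):
    """RMSE and MAE over valid (finite) pixels: the DSM accuracy metrics the
    falsification test would compare between zero-shot SwiftGS output and a
    per-scene Gaussian-optimization baseline on the same held-out data."""
    pred = np.asarray(pred_dsm, dtype=float)
    gt = np.asarray(gt_dsm, dtype=float)
    mask = np.isfinite(pred) & np.isfinite(gt)
    diff = pred[mask] - gt[mask]
    return {"rmse": float(np.sqrt(np.mean(diff ** 2))),
            "mae": float(np.mean(np.abs(diff)))}
```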
Original abstract
Rapid, large-scale 3D reconstruction from multi-date satellite imagery is vital for environmental monitoring, urban planning, and disaster response, yet remains difficult due to illumination changes, sensor heterogeneity, and the cost of per-scene optimization. We introduce SwiftGS, a meta-learned system that reconstructs 3D surfaces in a single forward pass by predicting geometry-radiation-decoupled Gaussian primitives together with a lightweight SDF, replacing expensive per-scene fitting with episodic training that captures transferable priors. The model couples a differentiable physics graph for projection, illumination, and sensor response with spatial gating that blends sparse Gaussian detail and global SDF structure, and incorporates semantic-geometric fusion, conditional lightweight task heads, and multi-view supervision from a frozen geometric teacher under an uncertainty-aware multi-task loss. At inference, SwiftGS operates zero-shot with optional compact calibration and achieves accurate DSM reconstruction and view-consistent rendering at significantly reduced computational cost, with ablations highlighting the benefits of the hybrid representation, physics-aware rendering, and episodic meta-training.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces SwiftGS, a meta-learned system for rapid 3D surface reconstruction from multi-date satellite imagery. It predicts geometry-radiation-decoupled Gaussian primitives together with a lightweight SDF in a single forward pass, using episodic training to capture transferable priors instead of per-scene optimization. The approach incorporates a differentiable physics graph for projection/illumination/sensor response, spatial gating to blend sparse Gaussians with global SDF structure, semantic-geometric fusion, conditional task heads, and an uncertainty-aware multi-task loss with multi-view supervision from a frozen teacher. At inference it operates zero-shot (with optional compact calibration) to produce accurate DSMs and view-consistent renderings at reduced cost.
Significance. If the zero-shot generalization claim holds under realistic variability in illumination, sensors, and scenes, the work would substantially lower the computational barrier for large-scale satellite 3D reconstruction, directly benefiting environmental monitoring, urban planning, and disaster response. The hybrid Gaussian-SDF representation and physics-aware rendering are technically interesting directions that address known limitations of pure Gaussian splatting on satellite data.
major comments (2)
- [Abstract / Experimental Results] The central claim of 'accurate DSM reconstruction' and 'significantly reduced computational cost' under zero-shot transfer is asserted without any reported quantitative metrics (RMSE, MAE, PSNR, runtime), ablation tables, or direct comparisons to per-scene Gaussian-fitting baselines. This absence makes the performance and generalization assertions impossible to evaluate.
- [Methods] The zero-shot transfer relies on meta-learned priors from episodic training, yet no details are supplied on episode diversity, the distribution of illumination and sensor variations across the training and held-out splits, or any cross-sensor/illumination validation protocol. Without these, the generalization claim, which is load-bearing for the main contribution, cannot be assessed.
minor comments (2)
- [Abstract] The abstract introduces 'physics graph' and 'spatial gating' without a one-sentence definition or pointer to the relevant equations; a brief inline clarification would improve readability.
- Notation for the decoupled Gaussian primitives and the lightweight SDF should be introduced consistently with symbols defined at first use.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback highlighting gaps in quantitative evaluation and training protocol details. We agree these elements are essential to substantiate the zero-shot claims and will revise the manuscript accordingly.
Point-by-point responses
-
Referee: [Abstract / Experimental Results] The central claim of 'accurate DSM reconstruction' and 'significantly reduced computational cost' under zero-shot transfer is asserted without any reported quantitative metrics (RMSE, MAE, PSNR, runtime), ablation tables, or direct comparisons to per-scene Gaussian-fitting baselines. This absence makes the performance and generalization assertions impossible to evaluate.
Authors: We agree that the submitted manuscript does not report quantitative metrics such as RMSE/MAE for DSM accuracy, PSNR for rendering quality, runtime benchmarks, or ablation tables with comparisons to per-scene Gaussian baselines. These were omitted from the initial version. In revision we will add a dedicated experimental results section containing these metrics, direct baseline comparisons, and ablations to support the accuracy and efficiency claims. revision: yes
-
Referee: [Methods] The zero-shot transfer relies on meta-learned priors from episodic training, yet no details are supplied on episode diversity, the distribution of illumination and sensor variations across the training and held-out splits, or any cross-sensor/illumination validation protocol. Without these, the generalization claim, which is load-bearing for the main contribution, cannot be assessed.
Authors: We concur that the Methods section currently lacks explicit information on episode diversity, the distribution of illumination/sensor variations between training and held-out data, and the cross-sensor validation protocol. We will expand the episodic training subsection to include these specifics (e.g., number of episodes, scene/condition statistics, and validation splits) so that the generalization properties can be properly evaluated. revision: yes
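The episode construction the authors promise to specify can be sketched at the level the review discusses: each episode draws a scene and splits its multi-date views into support and query sets. This structural sketch is our assumption; the field names and split sizes are illustrative, not the paper's protocol:

```python
import random

def sample_episode(scenes, n_support, n_query, rng=None):
    """One meta-training episode: pick a scene, then split its multi-date
    views into a support set (input to the single forward pass) and a
    disjoint query set (held out for multi-view supervision). Episode
    diversity comes from varying scenes, dates, and sensors across draws."""
    rng = rng or random.Random()
    scene = rng.choice(scenes)
    views = list(scene["views"])
    rng.shuffle(views)
    return {"scene_id": scene["id"],
            "support": views[:n_support],
            "query": views[n_support:n_support + n_query]}
```

The referee's request amounts to reporting the statistics of these draws: how many episodes, how scene/illumination/sensor conditions are distributed, and which conditions are held out entirely for cross-sensor validation.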
Circularity Check
No significant circularity in derivation chain
full rationale
The provided abstract and context describe a meta-learned hybrid Gaussian-SDF model trained episodically to produce transferable priors for single-pass satellite surface reconstruction. No equations, self-citations, or derivation steps are exhibited that reduce a claimed prediction to its own fitted inputs by construction, import uniqueness from author prior work, or rename known results. The zero-shot inference claim is framed as an empirical outcome of the episodic training process rather than a tautological re-expression of the training distribution itself. The derivation chain remains self-contained with independent content from the physics graph, spatial gating, and multi-task loss.
Axiom & Free-Parameter Ledger
free parameters (1)
- episodic meta-learned priors
axioms (1)
- domain assumption: A differentiable physics graph accurately models projection, illumination, and sensor response for heterogeneous satellite imagery.
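The domain assumption can be made concrete with a toy stand-in for the three stages it names. This sketch uses pinhole projection, Lambertian illumination, and a gain-plus-gamma sensor response; the paper's actual graph presumably uses satellite (RPC-style) geometry and richer models, so every component here is a simplification of ours:

```python
import numpy as np

def render_physics(points, albedo, normals, sun_dir, K, gain=1.0, gamma=2.2):
    """Toy 'physics graph': projection, illumination, and sensor response,
    each differentiable. Stand-ins only: pinhole intrinsics K instead of RPC
    cameras, Lambertian shading instead of a full illumination model."""
    shading = np.clip(normals @ sun_dir, 0.0, None)   # illumination term
    radiance = albedo * shading                        # Lambertian surface
    dn = (gain * radiance) ** (1.0 / gamma)            # sensor response
    proj = points @ K.T                                # pinhole projection
    uv = proj[:, :2] / proj[:, 2:3]
    return uv, dn
```

If any of these stages is a poor model of the real sensor chain, gradients through the graph pull the learned priors toward compensating errors, which is why the assumption is load-bearing.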
Lean theorems connected to this paper
-
IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · tag: unclear
Relation between the paper passage and the cited Recognition theorem is unclear. Paper passage: "hybrid scene representation that combines geometry and radiance decoupled Gaussian primitives with a compact implicit signed-distance field via learned spatial gating"
-
IndisputableMonolith/Foundation/ArithmeticFromLogic.lean · LogicNat recovery theorem · tag: unclear
Relation between the paper passage and the cited Recognition theorem is unclear. Paper passage: "episodic meta-training protocol augmented by auxiliary multi-view stereo guidance"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] Li Zhao, Haiyan Wang, Yi Zhu, and Mei Song. A review of 3D reconstruction from high-resolution urban satellite images. International Journal of Remote Sensing, 44(2):713–748, 2023.
- [2] Yi Wang, Stefano Zorzi, and Ksenia Bittner. Machine-learned 3D building vectorization from satellite imagery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1072–1081, 2021.
- [3] Chi Zhang, Yiming Yan, Chunhui Zhao, Nan Su, and Weikun Zhou. FVMD-ISRE: 3-D reconstruction from few-view multi-date satellite images based on the implicit surface representation of neural radiance fields. IEEE Transactions on Geoscience and Remote Sensing, 62:1–14, 2024.
- [4] Wenhui Xiao, Remi Chierchia, Rodrigo Santa Cruz, Xuesong Li, David Ahmedt-Aristizabal, Olivier Salvado, Clinton Fookes, and Leo Lebrat. Neural radiance fields for the real world: A survey. arXiv preprint arXiv:2501.13104, 2025.
- [5] Anurag Dalal, Daniel Hagen, Kjell G Robbersmyr, and Kristian Muri Knausgård. Gaussian splatting: 3D reconstruction and novel view synthesis: A review. IEEE Access, 12:96797–96820, 2024.
- [6] Tongtong Zhang and Yuanxiang Li. RPCPRF: Generalizable MPI neural radiance field for satellite camera. arXiv preprint arXiv:2310.07179, 2023.
- [7] Nikhil Behari, Akshat Dave, Kushagra Tiwary, William Yang, and Ramesh Raskar. Sundial: 3D satellite understanding through direct ambient and complex lighting decomposition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 522–532, 2024.
- [8] Sayak Nag, Dripta S Raychaudhuri, Sujoy Paul, and Amit K Roy-Chowdhury. Reconstruction guided meta-learning for few shot open set recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(12):15394–15405, 2023.
- [9] Wenbin Guan, Zijiu Yang, Xiaohong Wu, Liqiong Chen, Feng Huang, Xiaohai He, and Honggang Chen. Efficient meta-learning enabled lightweight multiscale few-shot object detection in remote sensing images. arXiv preprint arXiv:2404.18426, 2024.
- [10] Roger Marí, Gabriele Facciolo, and Thibaud Ehret. Sat-NeRF: Learning multi-view satellite photogrammetry with transient objects and shadow modeling using RPC cameras. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1311–1321, 2022.
- [11] Roger Marí, Gabriele Facciolo, and Thibaud Ehret. Multi-date earth observation NeRF: The detail is in the shadows. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2035–2045, 2023.
- [12] Nidhi Mathihalli, Audrey Wei, Giovanni Lavezzi, Peng Mun Siew, Victor Rodriguez-Fernandez, Hodei Urrutxua, and Richard Linares. DreamSat: Towards a general 3D model for novel view synthesis of space objects. arXiv preprint arXiv:2410.05097, 2024.
- [13] Dawa Derksen and Dario Izzo. Shadow neural radiance fields for multi-view satellite photogrammetry. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1152–1161, 2021.
- [14] Luca Savant Aira, Gabriele Facciolo, and Thibaud Ehret. Gaussian splatting for efficient satellite image photogrammetry. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 5959–5969, 2025.
- [15] Nan Bai, Anran Yang, Hao Chen, and Chun Du. SatGS: Remote sensing novel view synthesis using multi-temporal satellite images with appearance-adaptive 3DGS. Remote Sensing, 17(9):1609, 2025.
- [16] Xuejun Huang, Xinyi Liu, Yi Wan, Zhi Zheng, Bin Zhang, Mingtao Xiong, Yingying Pei, and Yongjun Zhang. SkySplat: Generalizable 3D Gaussian splatting from multi-temporal sparse satellite images. arXiv preprint arXiv:2508.09479, 2025.
- [17] Van Minh Nguyen, Emma Sandidge, Trupti Mahendrakar, and Ryan T White. SatSplatYOLO: 3D Gaussian splatting-based virtual object detection ensembles for satellite feature recognition. arXiv preprint arXiv:2406.02533, 2024.
- [18] Camille Billouard, Dawa Derksen, Emmanuelle Sarrazin, and Bruno Vallet. SAT-NGP: Unleashing neural graphics primitives for fast relightable transient-free 3D reconstruction from satellite imagery. In IGARSS 2024 IEEE International Geoscience and Remote Sensing Symposium, pages 8749–8753. IEEE, 2024.
- [19] Camille Billouard, Dawa Derksen, Alexandre Constantin, and Bruno Vallet. Tile and slide: A new framework for scaling NeRF from local to global 3D earth observation. arXiv preprint arXiv:2507.01631, 2025.
- [20] Jian Gao, Jin Liu, and Shunping Ji. A general deep learning based framework for 3D reconstruction from multi-view stereo satellite images. ISPRS Journal of Photogrammetry and Remote Sensing, 195:446–461, 2023.
- [21] Hongbin Xu, Weitao Chen, Baigui Sun, Xuansong Xie, and Wenxiong Kang. RobustMVS: Single domain generalized deep multi-view stereo. IEEE Transactions on Circuits and Systems for Video Technology, 34(10):9181–9194, 2024.
- [22] Hyucksang Lee, Seongmin Lee, and Sanghoon Lee. Visibility-aware multi-view stereo by surface normal weighting for occlusion robustness. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025.
- [23] Lu Zhang, Chenbo Zhang, Jiajia Zhao, Jihong Guan, and Shuigeng Zhou. Meta-ZSDETR: Zero-shot DETR with meta-learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6845–6854, 2023.
- [24] Elías Masquil, Roger Marí, Thibaud Ehret, Enric Meinhardt-Llopis, Pablo Musé, and Gabriele Facciolo. S-EO: A large-scale dataset for geometry-aware shadow detection in remote sensing applications. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 2383–2393, 2025.
- [25] Kang Du, Zhihao Liang, Yulin Shen, and Zeyu Wang. GS-ID: Illumination decomposition on Gaussian splatting via adaptive light aggregation and diffusion-guided material priors. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 26220–26229, 2025.
- [26] Pranav Chougule. Novel view synthesis with Gaussian splatting: Impact on photogrammetry model accuracy and resolution. arXiv preprint arXiv:2508.07483, 2025.
- [27] Chuanyu Fu, Yuqi Zhang, Kunbin Yao, Guanying Chen, Yuan Xiong, Chuan Huang, Shuguang Cui, and Xiaochun Cao. RobustSplat: Decoupling densification and dynamics for transient-free 3DGS. arXiv preprint arXiv:2506.02751, 2025.
- [28] Lei Hu, Wei He, Liangpei Zhang, and Hongyan Zhang. Cross-domain meta-learning under dual-adjustment mode for few-shot hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing, 61:1–16, 2023.
- [29] Jie Feng, Gaiqin Bai, Di Li, Xiangrong Zhang, Ronghua Shang, and Licheng Jiao. MR-Selection: A meta-reinforcement learning approach for zero-shot hyperspectral band selection. IEEE Transactions on Geoscience and Remote Sensing, 61:1–20, 2022.
- [30] Jiaojiao Li, Zhiyuan Zhang, Rui Song, Haitao Xu, Yunsong Li, and Qian Du. Contrastive MLP network based on adjacent coordinates for cross-domain zero-shot hyperspectral image classification. IEEE Transactions on Circuits and Systems for Video Technology, 2025.
- [31] Lingbo Huang, Yushi Chen, Zhaokui Li, Pedram Ghamisi, and Qian Du. HZSCM: Hyperspectral image zero-shot classification via vision-language models. IEEE Transactions on Geoscience and Remote Sensing, 2025.
- [32] Jae Woong Soh, Sunwoo Cho, and Nam Ik Cho. Meta-transfer learning for zero-shot super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3516–3525, 2020.
- [33] Guoxing Sun, Rishabh Dabral, Pascal Fua, Christian Theobalt, and Marc Habermann. MetaCap: Meta-learning priors from multi-view imagery for sparse-view human performance capture and rendering. In European Conference on Computer Vision, pages 341–361. Springer, 2024.
- [34] Hanbyel Cho, Yooshin Cho, Jaemyung Yu, and Junmo Kim. Camera distortion-aware 3D human pose estimation in video with optimization-based meta-learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 11169–11178, 2021.
- [35] Yumeng He, Yunbo Wang, and Xiaokang Yang. MetaGS: A meta-learned Gaussian-Phong model for out-of-distribution 3D scene relighting. arXiv preprint arXiv:2405.20791, 2024.
- [36] Liwei Liao, Xufeng Li, Xiaoyun Zheng, Boning Liu, Feng Gao, and Ronggang Wang. Zero-shot visual grounding in 3D Gaussians via view retrieval. arXiv preprint arXiv:2509.15871, 2025.
- [37] Zehao Yu and Shenghua Gao. Fast-MVSNet: Sparse-to-dense multi-view stereo with learned propagation and Gauss-Newton refinement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1949–1958, 2020.
- [38] Tong Wang and Shuichi Kurabayashi. Adaptive Multi-NeRF: Exploit efficient parallelism in adaptive multiple scale neural radiance field rendering. arXiv preprint arXiv:2310.01881, 2023.
- [39] Yuru Xiao, Deming Zhai, Wenbo Zhao, Kui Jiang, Junjun Jiang, and Xianming Liu. MCGS: Multiview consistency enhancement for sparse-view 3D Gaussian radiance fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2025.
- [40] YuanZheng Wu, Jin Liu, and Shunping Ji. 3D Gaussian splatting for large-scale surface reconstruction from aerial images. arXiv preprint arXiv:2409.00381, 2024.
- [41] Yu Chen and Gim Hee Lee. DOGS: Distributed-oriented Gaussian splatting for large-scale 3D reconstruction via Gaussian consensus. Advances in Neural Information Processing Systems, 37:34487–34512, 2024.
- [42] Aashish Rai, Dilin Wang, Mihir Jain, Nikolaos Sarafianos, Kefan Chen, Srinath Sridhar, and Aayush Prakash. UVGS: Reimagining unstructured 3D Gaussian splatting using UV mapping. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 5927–5937, 2025.
- [43] Linus Franke, Laura Fink, and Marc Stamminger. VR-Splatting: Foveated radiance field rendering via 3D Gaussian splatting and neural points. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 8(1):1–21, 2025.
- [44] Jan Held, Renaud Vandeghen, Abdullah Hamdi, Adrien Deliege, Anthony Cioppa, Silvio Giancola, Andrea Vedaldi, Bernard Ghanem, and Marc Van Droogenbroeck. 3D convex splatting: Radiance field rendering with 3D smooth convexes. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 21360–21369, 2025.
- [45] Han Gong, Qiyue Li, Jie Li, and Zhi Liu. Adaptive 3D Gaussian splatting video streaming: Visual saliency-aware tiling and meta-learning-based bitrate adaptation. arXiv preprint arXiv:2507.14454, 2025.
- [46] Marc Bosch, Kevin Foster, Gordon Christie, Sean Wang, Gregory D Hager, and Myron Brown. Semantic stereo for incidental satellite images. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1524–1532. IEEE, 2019.
- [47] Marc Bosch, Zachary Kurtz, Shea Hagstrom, and Myron Brown. A multiple view stereo benchmark for satellite imagery. In 2016 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), pages 1–9. IEEE, 2016.
- [48] Yingjie Qu and Fei Deng. Sat-Mesh: Learning neural implicit surfaces for multi-view satellite reconstruction. Remote Sensing, 15(17):4297, 2023.
- [49] Gabriele Facciolo, Carlo De Franchis, and Enric Meinhardt-Llopis. Automatic 3D reconstruction from multi-date satellite images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 57–66, 2017.
- [50] Tristan Amadei, Enric Meinhardt-Llopis, Carlo de Franchis, Jérémy Anger, Thibaud Ehret, and Gabriele Facciolo. s2p-hd: GPU-accelerated binocular stereo pipeline for large-scale same-date stereo. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 2339–2348, 2025.
- [51] David Charatan, Sizhe Lester Li, Andrea Tagliasacchi, and Vincent Sitzmann. pixelSplat: 3D Gaussian splats from image pairs for scalable generalizable 3D reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19457–19467, 2024.
- [52] Yuedong Chen, Haofei Xu, Chuanxia Zheng, Bohan Zhuang, Marc Pollefeys, Andreas Geiger, Tat-Jen Cham, and Jianfei Cai. MVSplat: Efficient 3D Gaussian splatting from sparse multi-view images. In European Conference on Computer Vision, pages 370–386. Springer, 2024.
- [53] Jeongyun Kim, Jeongho Noh, Dong-Guw Lee, and Ayoung Kim. TranSplat: Surface embedding-guided 3D Gaussian splatting for transparent object manipulation. arXiv preprint arXiv:2502.07840, 2025.
- [54] Haofei Xu, Songyou Peng, Fangjinhua Wang, Hermann Blum, Daniel Barath, Andreas Geiger, and Marc Pollefeys. DepthSplat: Connecting Gaussian splatting and depth. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 16453–16463, 2025.
- [55] Shengji Tang, Weicai Ye, Peng Ye, Weihao Lin, Yang Zhou, Tao Chen, and Wanli Ouyang. HiSplat: Hierarchical 3D Gaussian splatting for generalizable sparse-view reconstruction. arXiv preprint arXiv:2410.06245, 2024.
- [56] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1):99–106, 2021.