pith. machine review for the scientific record.

arxiv: 2602.15355 · v3 · submitted 2026-02-17 · 💻 cs.CV

Recognition: no theorem link

DAV-GSWT: Diffusion-Active-View Sampling for Data-Efficient Gaussian Splatting Wang Tiles

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 22:04 UTC · model grok-4.3

classification 💻 cs.CV
keywords Gaussian Splatting · Wang Tiles · Diffusion Models · Active View Sampling · Data-Efficient Rendering · Neural Rendering · Procedural Generation · Virtual Environments

The pith

Diffusion priors and active view sampling let Gaussian Splatting Wang Tiles be built from minimal observations while preserving quality.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces DAV-GSWT, a framework that combines generative diffusion models with an active sampling strategy to produce high-fidelity Gaussian Splatting Wang Tiles from far fewer input views than conventional dense sampling requires. It uses uncertainty measures to select the most informative camera angles and then hallucinates missing structural details so that the resulting tiles fit together without visible seams. A sympathetic reader would care because building large-scale photorealistic environments currently demands expensive, time-consuming capture of many overlapping images or scans. If the approach works, procedural landscape generation becomes practical for interactive applications without sacrificing visual integrity or frame rates.
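
Read as an algorithm, the summary above describes a greedy uncertainty-driven capture loop. The sketch below is a minimal rendering of that loop, assuming disagreement across repeated diffusion samples as the uncertainty signal; the function names (diffusion_samples, capture, update_splat) are hypothetical placeholders, not the paper's API.

```python
import numpy as np

def uncertainty_score(samples: np.ndarray) -> float:
    """Disagreement across M stochastic diffusion renders of one candidate
    view (shape M x H x W x 3): mean per-pixel variance, one plausible choice."""
    return float(samples.var(axis=0).mean())

def active_capture(candidates, budget, splat, diffusion_samples, capture, update_splat, M=8):
    """Greedily spend a fixed capture budget on the most uncertain views."""
    for _ in range(budget):
        # Score each remaining candidate view by diffusion-sample disagreement.
        scores = {v: uncertainty_score(diffusion_samples(splat, v, M))
                  for v in candidates}
        best = max(scores, key=scores.get)                # most informative viewpoint
        splat = update_splat(splat, capture(best), best)  # then observe it for real
        candidates.remove(best)
    return splat
```

The greedy structure is the essential point: each new observation is spent where the generative prior is least sure, so the diffusion model only has to hallucinate where it is already confident.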

Core claim

DAV-GSWT integrates a hierarchical uncertainty quantification mechanism with generative diffusion models to autonomously select informative viewpoints and synthesize missing structural details, thereby enabling the construction of seamless Gaussian Splatting Wang Tiles from sparse input observations while retaining visual fidelity and real-time rendering performance.

What carries the argument

A hierarchical uncertainty quantification mechanism, integrated with generative diffusion models, that selects informative viewpoints and hallucinates structural details to keep tile transitions seamless.
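
Figure 1's caption adds one concrete detail: for each candidate view θ, the diffusion model draws M stochastic latent samples z_m(θ) via attention dropout, and the estimator reduces them to a scalar score u(θ). The caption is truncated before the exact image-space measure, so the variance form below is an illustrative reconstruction, not the paper's formula:

```latex
u(\theta) \;=\; \frac{1}{|\Omega|} \sum_{p \in \Omega}
  \operatorname{Var}_{m=1,\dots,M}\!\left[\, D\!\left(z_m(\theta)\right)_p \right],
\qquad
\theta^{\star} \;=\; \arg\max_{\theta \in \Theta} u(\theta)
```

where D decodes a latent sample to image space, Ω is the pixel set, and Θ is the pool of remaining candidate views; the next capture is taken at θ*.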

If this is right

  • Large-scale virtual environments can be reconstructed with substantially lower data-collection effort.
  • Interactive rendering performance is preserved even as the modeled area grows.
  • Procedural methods such as Wang Tiles become viable for neural rendering pipelines without dense exemplar data.
  • Sparse real-world captures suffice to generate expansive, photorealistic landscapes.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same uncertainty-plus-diffusion loop could be adapted to other procedural or tiling-based 3D representations facing data scarcity.
  • Real-time scene editing might become feasible if the active sampler can be run incrementally as new observations arrive.
  • Boundary consistency tests on diverse terrain types would reveal whether the hallucination step generalizes beyond the evaluated cases.

Load-bearing premise

Diffusion models can reliably hallucinate missing structural details in a way that produces seamless tile transitions without visible artifacts or inconsistencies across boundaries.

What would settle it

Rendering a large tiled virtual scene and observing visible seams, boundary inconsistencies, or loss of structural coherence at tile edges would disprove the claim that the method maintains visual integrity from minimal inputs.
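
That disproof is cheap to operationalize even without dense ground truth: assemble tiles and measure the color jump across each shared edge. A minimal sketch follows, assuming rendered tiles as H x W x 3 float arrays in [0, 1]; the 5-pixel strip width and the pass/fail threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def seam_discontinuity(left: np.ndarray, right: np.ndarray, strip: int = 5) -> float:
    """Mean absolute color jump across the vertical seam between two
    horizontally adjacent rendered tiles (H x W x 3 arrays, values in [0, 1])."""
    left_edge = left[:, -strip:, :].mean(axis=1)   # columns touching the seam
    right_edge = right[:, :strip, :].mean(axis=1)
    return float(np.abs(left_edge - right_edge).mean())

# Usage: flag a visible seam when the jump exceeds a chosen tolerance.
rng = np.random.default_rng(0)
a, b = rng.random((256, 256, 3)), rng.random((256, 256, 3))
print(seam_discontinuity(a, b) > 0.05)  # True: random tiles exhibit an obvious seam
```

Running the same check over every horizontal and vertical adjacency of an assembled Wang tiling gives a scene-level seam statistic to track as the input-view count shrinks.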

Figures

Figures reproduced from arXiv: 2602.15355 by Haiyun Wei, Jiekai Wu, Rong Fu, Simon Fong, Wangyu Wu, Xiaowen Ma, Yang Li, Yee Tan Jia.

Figure 1. Overview of the DAV-GSWT framework for data-efficient Gaussian Splatting and tiling. The pipeline begins with a coarse reconstruction G_0 computed from sparse initial images I_init. During the active cycle, a pre-trained diffusion model generates M stochastic latent samples z_m(θ) using attention dropout. These samples are evaluated by the uncertainty estimator, which computes a score u(θ) from image-space LP…
Figure 2. Active-view uncertainty over a dense candidate viewing sphere. Each point represents a candidate camera …
Figure 3. Iterative reconstruction evolution under DAV-GSWT. Top row shows rendered reconstructions at iterations …
Figure 4. Ablation study of uncertainty formulations for active view selection. From left to right and top to bottom: …
Figure 5. Comparison of seam artifacts using color-only graph cuts versus semantic-aware cuts augmented with SAM.
Figure 6. Visualization of the tile-level uncertainty cache during online reconstruction. Warm colors indicate high …
Figure 7. Reconstruction quality versus capture budget. DAV-GSWT achieves near-exhaustive reconstruction quality …
original abstract

The emergence of 3D Gaussian Splatting has fundamentally redefined the capabilities of photorealistic neural rendering by enabling high-throughput synthesis of complex environments. While procedural methods like Wang Tiles have recently been integrated to facilitate the generation of expansive landscapes, these systems typically remain constrained by a reliance on densely sampled exemplar reconstructions. We present DAV-GSWT, a data-efficient framework that leverages diffusion priors and active view sampling to synthesize high-fidelity Gaussian Splatting Wang Tiles from minimal input observations. By integrating a hierarchical uncertainty quantification mechanism with generative diffusion models, our approach autonomously identifies the most informative viewpoints while hallucinating missing structural details to ensure seamless tile transitions. Experimental results indicate that our system significantly reduces the required data volume while maintaining the visual integrity and interactive performance necessary for large-scale virtual environments.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 1 minor

Summary. The manuscript introduces DAV-GSWT, a framework that combines diffusion priors with active view sampling and a hierarchical uncertainty quantification mechanism to synthesize high-fidelity Gaussian Splatting Wang Tiles from minimal input observations, claiming to significantly reduce required data volume while preserving visual integrity, seamless tile transitions, and interactive performance for large-scale virtual environments.

Significance. If the central claims hold under rigorous validation, the work could meaningfully advance data-efficient neural rendering and procedural 3D scene generation by demonstrating how generative diffusion models can reliably supplement sparse observations in tiled Gaussian Splatting pipelines.

major comments (1)
  1. [Experimental Results] The central claim that diffusion hallucination plus active sampling produces artifact-free boundaries when tiles are assembled at scale lacks supporting quantitative evidence; no boundary-specific metrics (e.g., LPIPS or PSNR restricted to 5-pixel edge strips, or cross-tile consistency scores) are reported to compare hallucinated seams against dense ground-truth reconstructions, leaving the seamless-transition guarantee unverified.
minor comments (1)
  1. The abstract states that the system 'significantly reduces the required data volume' but does not quantify the reduction factor or specify the exact number of input views versus dense baselines.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for the constructive and detailed feedback. We have carefully addressed the major comment regarding the need for boundary-specific quantitative validation of seamless tile transitions.

point-by-point responses
  1. Referee: The central claim that diffusion hallucination plus active sampling produces artifact-free boundaries when tiles are assembled at scale lacks supporting quantitative evidence; no boundary-specific metrics (e.g., LPIPS or PSNR restricted to 5-pixel edge strips, or cross-tile consistency scores) are reported to compare hallucinated seams against dense ground-truth reconstructions, leaving the seamless-transition guarantee unverified.

    Authors: We acknowledge that the current experimental section primarily reports aggregate metrics (PSNR, SSIM, LPIPS) over entire tiles and assembled scenes, along with qualitative visualizations of boundaries, without dedicated boundary-restricted quantitative analysis. This leaves the seamless-transition claim less rigorously supported than it could be. In the revised manuscript we will add the suggested evaluations: PSNR and LPIPS computed on 5-pixel edge strips around tile boundaries, plus a cross-tile consistency score that directly compares hallucinated seams against dense ground-truth reconstructions. These new results will be presented in an expanded experimental subsection with comparisons to baselines, thereby providing the missing quantitative verification.

    revision: yes
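
The edge-strip evaluation the authors commit to is simple to pin down. Below is a minimal sketch of PSNR restricted to a 5-pixel frame around the tile border, where seam artifacts from hallucinated content would concentrate; it assumes rendered and ground-truth tiles as H x W x 3 float arrays in [0, 1], and the masking convention is ours, not the paper's.

```python
import numpy as np

def edge_strip_psnr(rendered: np.ndarray, reference: np.ndarray, strip: int = 5) -> float:
    """PSNR computed only on a `strip`-pixel frame around the tile border."""
    h, w = rendered.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    mask[:strip, :] = True    # top strip
    mask[-strip:, :] = True   # bottom strip
    mask[:, :strip] = True    # left strip
    mask[:, -strip:] = True   # right strip
    mse = float(((rendered[mask] - reference[mask]) ** 2).mean())
    return float("inf") if mse == 0.0 else 10.0 * np.log10(1.0 / mse)
```

The analogous LPIPS variant would crop the same strips before invoking the perceptual metric, and the promised cross-tile consistency score would compare strips from adjacent tiles against each other rather than against ground truth.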

Circularity Check

0 steps flagged

No circularity: framework description contains no derivations or fitted predictions

full rationale

The manuscript describes a data-efficient Gaussian Splatting Wang Tiles pipeline that combines diffusion priors with hierarchical uncertainty-based active view sampling. No equations, parameter-fitting procedures, or prediction steps are exhibited in the abstract or surrounding text. Consequently, none of the enumerated circularity patterns (self-definitional relations, fitted inputs renamed as predictions, load-bearing self-citations, imported uniqueness theorems, or ansatz smuggling) can be identified. The central claim—that diffusion hallucination plus uncertainty sampling yields seamless tiles—remains an empirical assertion whose validity is independent of any internal reduction to its own inputs. The derivation chain is therefore self-contained.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Only abstract available; no explicit free parameters, axioms, or invented entities can be extracted. The approach implicitly assumes diffusion models produce structurally accurate completions and that uncertainty quantification reliably selects informative views.

pith-pipeline@v0.9.0 · 5452 in / 1030 out tokens · 18861 ms · 2026-05-15T22:04:31.171210+00:00 · methodology

