Confidence-Based Mesh Extraction from 3D Gaussians
Recognition: 2 Lean theorem links
Pith reviewed 2026-05-14 23:55 UTC · model grok-4.3
The pith
Learnable per-primitive confidence values added to 3D Gaussian Splatting resolve view-dependent ambiguities to deliver state-of-the-art unbounded mesh extraction while staying efficient.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We introduce a self-supervised confidence framework for 3DGS in which learnable confidence values dynamically balance photometric and geometric supervision. Extending this formulation, we add losses that penalize per-primitive color and normal variance and demonstrate their benefit to surface extraction. We further complement the approach with an improved appearance model obtained by decoupling the individual terms of the D-SSIM loss. Our final method achieves state-of-the-art results for unbounded meshes while remaining highly efficient.
What carries the argument
Learnable per-primitive confidence values that dynamically balance photometric and geometric supervision signals.
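To make the mechanism concrete, here is a minimal PyTorch sketch of a loss of that form, following the expression quoted under the Lean theorem links below (L_conf = L_rgb · Ĉ − β·log Ĉ). The tensor shapes, the per-pixel aggregation, and the default `beta` are illustrative assumptions, not the paper's implementation.

```python
import torch

def confidence_weighted_photometric_loss(residual, confidence, beta=0.1):
    """Sketch of L_conf = L_rgb * C_hat - beta * log C_hat.

    residual:   per-pixel photometric error (e.g., L1), shape (H, W)
    confidence: per-pixel confidence in (0, 1], shape (H, W), assumed to
                be splatted from learnable per-primitive values
    beta:       barrier weight; this default is hypothetical
    """
    # Low confidence down-weights the photometric term where view-dependent
    # effects make colors inconsistent across views; the -log barrier keeps
    # confidence from collapsing to zero everywhere.
    return (residual * confidence - beta * torch.log(confidence)).mean()
```

Under this sketch's form, a fixed residual r has the closed-form minimizer Ĉ = β / r (up to the (0, 1] clamp), so pixels with large, persistent error, the likely view-dependent ones, automatically receive low confidence.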
If this is right
- Meshes extracted from 3DGS achieve higher accuracy in unbounded scenes containing view-dependent effects.
- The overall pipeline retains the computational efficiency of standard 3D Gaussian Splatting.
- Self-supervised signals alone suffice, removing the need for explicit multi-view consistency or external pre-trained models.
- Decoupling the D-SSIM loss terms produces a stronger appearance model that aids surface reconstruction (see the sketch after this list).
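To make the last point concrete, the sketch below computes the three SSIM components [63] as separate maps so each can be weighted on its own. It is only a guess at what "decoupling" could look like mechanically, assuming standard SSIM constants and an 11×11 box window in place of the usual Gaussian window; the paper's actual recombination is not stated on this page.

```python
import torch
import torch.nn.functional as F

def decoupled_ssim_terms(x, y, window=11, C1=0.01 ** 2, C2=0.03 ** 2):
    """Return the SSIM components (luminance l, contrast c, structure s)
    as separate maps instead of their product.

    x, y: images of shape (B, C, H, W) with values in [0, 1].
    """
    mu_x = F.avg_pool2d(x, window, stride=1)
    mu_y = F.avg_pool2d(y, window, stride=1)
    var_x = F.avg_pool2d(x * x, window, stride=1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, stride=1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, window, stride=1) - mu_x * mu_y
    sig_x = var_x.clamp_min(0).sqrt()
    sig_y = var_y.clamp_min(0).sqrt()
    C3 = C2 / 2
    l = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)
    c = (2 * sig_x * sig_y + C2) / (var_x + var_y + C2)
    s = (cov + C3) / (sig_x * sig_y + C3)
    # Coupled SSIM would be (l * c * s).mean(); D-SSIM is (1 - SSIM) / 2.
    return l, c, s
```

Returning the maps separately means each term can get its own weight or schedule before recombination, which is the plausible point of the decoupling.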
Where Pith is reading between the lines
- The confidence balancing idea could transfer to other explicit radiance-field representations that also need to separate geometry from appearance.
- Real-time robotics or AR pipelines that already use 3DGS could directly output usable meshes without extra post-processing stages.
- The variance penalties might generalize to other per-primitive attributes such as opacity or scale to further stabilize extraction (see the sketch after this list).
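As a minimal sketch of what such a generalized penalty could look like, the snippet below computes per-primitive attribute variance from pre-gathered samples. The gathering itself (from the rasterizer) is not described on this page and is omitted; the `primitive_ids`/`num_primitives` layout is hypothetical scaffolding.

```python
import torch

def per_primitive_variance_penalty(samples, primitive_ids, num_primitives):
    """Variance of an attribute (color, normal, or by extension opacity or
    scale) across the samples that touch each primitive.

    samples:       (N, D) attribute values gathered over views/pixels
    primitive_ids: (N,) long tensor, index of the Gaussian per sample
    """
    ones = torch.ones(samples.shape[0])
    counts = torch.zeros(num_primitives).index_add_(0, primitive_ids, ones)
    counts = counts.clamp_min(1).unsqueeze(1)  # avoid division by zero
    mean = torch.zeros(num_primitives, samples.shape[1]).index_add_(
        0, primitive_ids, samples) / counts
    mean_sq = torch.zeros(num_primitives, samples.shape[1]).index_add_(
        0, primitive_ids, samples ** 2) / counts
    # E[a^2] - E[a]^2, clamped for numerical safety; penalizing it pushes
    # each primitive toward a single consistent attribute value.
    return (mean_sq - mean ** 2).clamp_min(0).mean()
```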
Load-bearing premise
Learnable per-primitive confidence values can reliably resolve view-dependent ambiguities from only self-supervised photometric and geometric signals without multi-view checks or external models.
What would settle it
Extracted meshes that still show large deviations from ground-truth surfaces in regions dominated by strong view-dependent effects, even after the confidence balancing and variance penalties are applied, would undercut the load-bearing premise.
Original abstract
Recently, 3D Gaussian Splatting (3DGS) greatly accelerated mesh extraction from posed images due to its explicit representation and fast software rasterization. While the addition of geometric losses and other priors has improved the accuracy of extracted surfaces, mesh extraction remains difficult in scenes with abundant view-dependent effects. To resolve the resulting ambiguities, prior works rely on multi-view techniques, iterative mesh extraction, or large pre-trained models, sacrificing the inherent efficiency of 3DGS. In this work, we present a simple and efficient alternative by introducing a self-supervised confidence framework to 3DGS: within this framework, learnable confidence values dynamically balance photometric and geometric supervision. Extending our confidence-driven formulation, we introduce losses which penalize per-primitive color and normal variance and demonstrate their benefits to surface extraction. Finally, we complement the above with an improved appearance model, by decoupling the individual terms of the D-SSIM loss. Our final approach delivers state-of-the-art results for unbounded meshes while remaining highly efficient.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes a self-supervised confidence framework for mesh extraction from 3D Gaussian Splatting. Learnable per-primitive confidence scalars dynamically weight photometric and geometric losses; additional terms penalize per-primitive color and normal variance, and the D-SSIM loss is decoupled into separate components. The method claims to resolve view-dependent ambiguities in unbounded scenes, delivering state-of-the-art mesh quality while preserving the efficiency of 3DGS without multi-view consistency checks or external models.
Significance. If the quantitative results hold, the work would be significant for real-time 3D reconstruction pipelines. It offers a lightweight, self-supervised route to high-quality unbounded meshes that avoids the computational overhead of iterative refinement or large pre-trained networks, directly addressing a practical bottleneck in 3DGS-based surface extraction.
major comments (2)
- [§3] §3 (confidence framework): The central claim that learnable per-primitive confidence scalars, trained only on photometric and geometric self-supervision, reliably resolve view-dependent ambiguities is load-bearing for both the accuracy and efficiency assertions. The formulation lacks an explicit multi-view consistency regularizer; if the learned weights fail to enforce cross-view coherence, the extracted surfaces will retain the same inconsistencies that prior multi-view methods were designed to avoid. An ablation that isolates the confidence term and reports view-consistency metrics (e.g., normal variance across held-out views) is required.
- [§4] §4 (experiments): The SOTA claim for unbounded meshes rests on quantitative tables that are not referenced in the abstract. The manuscript must supply concrete comparisons (Chamfer distance, F-score, normal consistency) against the cited baselines on standard unbounded datasets, together with ablations that isolate the variance penalties and the decoupled D-SSIM terms. Without these numbers the efficiency advantage cannot be weighed against possible accuracy trade-offs.
minor comments (2)
- [§3.1] Notation for the per-primitive confidence scalar should be introduced once in §3.1 and used consistently thereafter to avoid confusion with other weighting parameters.
- Figure captions would benefit from explicit mention of the dataset and metric shown, especially for qualitative unbounded-scene comparisons.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback. We address the two major comments below and will revise the manuscript accordingly to strengthen the presentation of our confidence framework and experimental validation.
Point-by-point responses
- Referee: [§3] The central claim that learnable per-primitive confidence scalars, trained only on photometric and geometric self-supervision, reliably resolve view-dependent ambiguities is load-bearing... An ablation that isolates the confidence term and reports view-consistency metrics (e.g., normal variance across held-out views) is required.
  Authors: We agree that explicit validation of cross-view coherence is valuable. In the revised manuscript we will add an ablation isolating the learnable confidence scalars and report view-consistency metrics, including normal variance across held-out views (a sketch of such a metric follows these responses). Our formulation uses the confidence-weighted photometric and geometric losses to implicitly encourage coherence; the added ablation will quantify this effect without introducing an explicit multi-view regularizer. Revision: yes.
- Referee: [§4] The SOTA claim for unbounded meshes rests on quantitative tables that are not referenced in the abstract. The manuscript must supply concrete comparisons (Chamfer distance, F-score, normal consistency) against the cited baselines on standard unbounded datasets, together with ablations that isolate the variance penalties and the decoupled D-SSIM terms.
  Authors: We accept this point. The revised version will reference the quantitative tables in the abstract and include the requested comparisons (Chamfer distance, F-score, normal consistency) on standard unbounded datasets. We will also expand the experimental section with ablations that isolate the per-primitive variance penalties and the decoupled D-SSIM terms. Revision: yes.
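For reference, here is a minimal sketch of the kind of view-consistency metric requested above and promised in the first response: the variance of rendered normals for matched surface points across held-out views. The cross-view point matching (reprojection) and the (V, N, 3) tensor layout are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def cross_view_normal_variance(normals):
    """Mean squared angular deviation of per-view normals from their
    mean direction, for N surface points seen from V held-out views.

    normals: (V, N, 3) unit normals; establishing point-to-point
    correspondence across views is assumed and not shown.
    Lower values indicate better cross-view geometric consistency.
    """
    mean_dir = F.normalize(normals.mean(dim=0), dim=-1)          # (N, 3)
    cos = (normals * mean_dir.unsqueeze(0)).sum(-1).clamp(-1.0, 1.0)
    return torch.acos(cos).pow(2).mean()
```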
Circularity Check
No circularity: learnable confidence parameters are independent of target mesh outputs
Full rationale
The paper introduces learnable per-primitive confidence scalars that are optimized via self-supervised photometric and geometric losses (plus variance penalties and decoupled D-SSIM). No equations are shown that define these confidences in terms of the extracted surfaces they are claimed to improve, nor does any derivation reduce the SOTA unbounded-mesh claim to a fitted quantity by construction. The central mechanism is standard parameter learning from data signals; the derivation chain remains open to external validation through the reported empirical results rather than closing on its own inputs.
Axiom & Free-Parameter Ledger
free parameters (1)
- learnable confidence values
axioms (1)
- Domain assumption: self-supervised photometric and geometric signals can resolve view-dependent ambiguities.
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "learnable per-primitive confidence values dynamically balance photometric and geometric supervision... L_conf = L_rgb · Ĉ − β·log Ĉ"
- IndisputableMonolith/Foundation/BranchSelection.lean · branch_selection · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "we introduce losses which penalize per-primitive color and normal variance"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] Barron, J.T., Mildenhall, B., Verbin, D., Srinivasan, P.P., Hedman, P.: Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields. In: CVPR (2022)
- [2] Barron, J.T., Mildenhall, B., Verbin, D., Srinivasan, P.P., Hedman, P.: Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields. In: ICCV (2023)
- [3] Bojanowski, P., Joulin, A., Lopez-Paz, D., Szlam, A.: Optimizing the Latent Space of Generative Networks. In: ICML (2017)
- [4] Chen, A., Xu, Z., Geiger, A., Yu, J., Su, H.: TensoRF: Tensorial Radiance Fields. In: ECCV (2022)
- [5] Chen, D., Li, H., Ye, W., Wang, Y., Xie, W., Zhai, S., Wang, N., Liu, H., Bao, H., Zhang, G.: PGSR: Planar-Based Gaussian Splatting for Efficient and High-Fidelity Surface Reconstruction. IEEE TVCG 31(9) (2025)
- [6] Chen, H., Wei, F., Li, C., Huang, T., Wang, Y., Lee, G.H.: VCR-GauS: View Consistent Depth-Normal Regularizer for Gaussian Surface Reconstruction. In: NeurIPS (2024)
- [7] Chen, H., Miller, B., Gkioulekas, I.: 3D Reconstruction with Fast Dipole Sums. ACM TOG 43(6) (2024)
- [8] Dahmani, H., Bennehar, M., Piasco, N., Roldao, L., Tsishkou, D.: SWAG: Splatting in the Wild Images with Appearance-conditioned Gaussians. In: ECCV (2024)
- [9] Dai, P., Xu, J., Xie, W., Liu, X., Wang, H., Xu, W.: High-quality Surface Reconstruction using Gaussian Surfels. In: SIGGRAPH Asia (2024)
- [10] Deutsch, I., Moënne-Loccoz, N., State, G., Gojcic, Z.: PPISP: Physically-Plausible Compensation and Control of Photometric Variations in Radiance Field Reconstruction (2026), https://arxiv.org/abs/2601.18336
- [11]
- [12] Ewen, P., Chen, H., Isaacson, S., Wilson, J., Skinner, K.A., Vasudevan, R.: These Magic Moments: Differentiable Uncertainty Quantification of Radiance Field Models. arXiv preprint arXiv:2503.14665 (2025)
- [13] Fang, G., Wang, B.: Mini-Splatting: Representing Scenes with a Constrained Number of Gaussians. In: ECCV (2024)
- [14] Fischer, T., Bulò, S.R., Yang, Y.H., Keetha, N., Porzi, L., Müller, N., Schwarz, K., Luiten, J., Pollefeys, M., Kontschieder, P.: FlowR: Flowing from Sparse to Dense 3D Reconstructions. In: ICCV (2025)
- [15] Goldman, D.B.: Vignette and Exposure Calibration and Compensation. IEEE TPAMI 32(12), 2276–2288 (2010)
- [16] Goli, L., Reading, C., Sellán, S., Jacobson, A., Tagliasacchi, A.: Bayes' Rays: Uncertainty Quantification in Neural Radiance Fields. In: CVPR (2024)
- [17] Govindarajan, S., Rebain, D., Yi, K.M., Tagliasacchi, A.: Radiant Foam: Real-Time Differentiable Ray Tracing. In: ICCV (2025)
- [18] Guédon, A., Gomez, D., Maruani, N., Gong, B., Drettakis, G., Ovsjanikov, M.: MILo: Mesh-In-the-Loop Gaussian Splatting for Detailed and Efficient Surface Reconstruction. ACM TOG 44(6) (2025)
- [19] Guédon, A., Lepetit, V.: SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering. In: CVPR (2024)
- [20] Hahlbohm, F., Friederichs, F., Weyrich, T., Franke, L., Kappel, M., Castillo, S., Stamminger, M., Eisemann, M., Magnor, M.: Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency. Comput. Graph. Forum 44(2) (2025)
- [21] Huang, B., Yu, Z., Chen, A., Geiger, A., Gao, S.: 2D Gaussian Splatting for Geometrically Accurate Radiance Fields. In: SIGGRAPH (2024)
- [22] Jena, S., Ouasfi, A., Younes, M., Boukhayma, A.: Sparfels: Fast Reconstruction from Sparse Unposed Imagery. In: ICCV (2025)
- [23] Jensen, R., Dahl, A., Vogiatzis, G., Tola, E., Aanæs, H.: Large Scale Multi-View Stereopsis Evaluation. In: CVPR (2014)
- [24] Jiang, K., Sivaram, V., Peng, C., Ramamoorthi, R.: Geometry Field Splatting with Gaussian Surfels. In: CVPR (2025)
- [25] Jiang, W., Lei, B., Daniilidis, K.: FisherRF: Active View Selection and Uncertainty Quantification for Radiance Fields using Fisher Information. In: ECCV (2024)
- [26] Jin, L., Zhong, X., Pan, Y., Behley, J., Stachniss, C., Popović, M.: ActiveGS: Active Scene Reconstruction using Gaussian Splatting. IEEE Robotics and Automation Letters (2025)
- [27] Kendall, A., Gal, Y.: What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? In: NeurIPS (2017)
- [28] Kerbl, B., Kopanas, G., Leimkühler, T., Drettakis, G.: 3D Gaussian Splatting for Real-Time Radiance Field Rendering. ACM TOG 42(4) (2023)
- [29] Kerbl, B., Meuleman, A., Kopanas, G., Wimmer, M., Lanvin, A., Drettakis, G.: A Hierarchical 3D Gaussian Representation for Real-Time Rendering of Very Large Datasets. ACM TOG 43(4) (2024)
- [30] Kheradmand, S., Rebain, D., Sharma, G., Sun, W., Tseng, Y.C., Isack, H., Kar, A., Tagliasacchi, A., Yi, K.M.: 3D Gaussian Splatting as Markov Chain Monte Carlo. In: NeurIPS (2024)
- [31] Knapitsch, A., Park, J., Zhou, Q.Y., Koltun, V.: Tanks and Temples: Benchmarking Large-Scale Scene Reconstruction. ACM TOG 36(4) (2017)
- [32] Kulhanek, J., Peng, S., Kukelova, Z., Pollefeys, M., Sattler, T.: WildGaussians: 3D Gaussian Splatting in the Wild. In: NeurIPS (2024)
- [33] Li, J., Zhang, J., Zhang, Y., Bai, X., Zheng, J., Yu, X., Gu, L.: GeoSVR: Taming Sparse Voxels for Geometrically Accurate Surface Reconstruction. In: NeurIPS (2025)
- [34] Li, Q., Feng, H., Gong, X., Liu, Y.S.: VA-GS: Enhancing the Geometric Representation of Gaussian Splatting via View Alignment. In: NeurIPS (2025)
- [35] Li, R., Cheung, Y.M.: Variational Multi-scale Representation for Estimating Uncertainty in 3D Gaussian Splatting. In: NeurIPS (2024)
- [36] Li, S., Liu, Y.S., Han, Z.: GaussianUDF: Inferring Unsigned Distance Functions through 3D Gaussian Splatting. In: CVPR (2025)
- [37] Lin, J., Li, Z., Tang, X., Liu, J., Liu, S., Liu, J., Lu, Y., Wu, X., Xu, S., Yan, Y., Yang, W.: VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction. In: CVPR (2024)
- [38] Liu, R., Sun, D., Chen, M., Wang, Y., Feng, A.: Deformable Beta Splatting. In: SIGGRAPH (2025)
- [39] Liu, S., Wu, J., Wu, W., Chu, L., Liu, X.: Uncertainty-Aware Gaussian Splatting with View-Dependent Regularization for High-Fidelity 3D Reconstruction. In: Eurographics Symposium on Rendering (2025)
- [40] von Lützow, N., Nießner, M.: LinPrim: Linear Primitives for Differentiable Volumetric Rendering. In: NeurIPS (2025)
- [41] Lyu, X., Sun, Y.T., Huang, Y.H., Wu, X., Yang, Z., Chen, Y., Pang, J., Qi, X.: 3DGSR: Implicit Surface Reconstruction with 3D Gaussian Splatting. ACM TOG 43(6) (2024)
- [42] Mallick, S., Goel, R., Kerbl, B., Vicente Carrasco, F., Steinberger, M., De La Torre, F.: Taming 3DGS: High-Quality Radiance Fields with Limited Resources. In: SIGGRAPH Asia (2024)
- [43] Martin-Brualla, R., Radwan, N., Sajjadi, M.S., Barron, J.T., Dosovitskiy, A., Duckworth, D.: NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. In: CVPR (2021)
- [44] Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In: ECCV (2020)
- [45] Miller, B., Chen, H., Lai, A., Gkioulekas, I.: Objects as Volumes: A Stochastic Geometry View of Opaque Solids. In: CVPR (2024)
- [46] Moenne-Loccoz, N., Mirzaei, A., Perel, O., de Lutio, R., Esturo, J.M., State, G., Fidler, S., Sharp, N., Gojcic, Z.: 3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes. ACM TOG 43(6) (2024)
- [47] Müller, T., Evans, A., Schied, C., Keller, A.: Instant Neural Graphics Primitives with a Multiresolution Hash Encoding. ACM TOG 41(4) (2022)
- [48] Niemeyer, M., Manhardt, F., Rakotosaona, M.J., Oechsle, M., Tsalicoglou, C., Tateno, K., Barron, J.T., Tombari, F.: Learning Neural Exposure Fields for View Synthesis. In: NeurIPS (2025)
- [49]
- [50] Papantonakis, P., Kopanas, G., Kerbl, B., Lanvin, A., Drettakis, G.: Reducing the Memory Footprint of 3D Gaussian Splatting. Proc. ACM Comput. Graph. Interact. Tech. 7(1) (2024)
- [51] Radl, L., Steiner, M., Parger, M., Weinrauch, A., Kerbl, B., Steinberger, M.: StopThePop: Sorted Gaussian Splatting for View-Consistent Real-time Rendering. ACM TOG 43(4) (2024)
- [52] Radl, L., Windisch, F., Deixelberger, T., Hladky, J., Steiner, M., Schmalstieg, D., Steinberger, M.: SOF: Sorted Opacity Fields for Fast Unbounded Surface Reconstruction. In: SIGGRAPH Asia (2025)
- [53] Rota Bulò, S., Porzi, L., Kontschieder, P.: Revising Densification in Gaussian Splatting. In: ECCV (2024)
- [54] Rückert, D., Franke, L., Stamminger, M.: ADOP: Approximate Differentiable One-Pixel Point Rendering. ACM TOG 41(4) (2022)
- [55] Shen, J., Agudo, A., Moreno-Noguer, F., Ruiz, A.: Conditional-Flow NeRF: Accurate 3D Modelling with Reliable Uncertainty Quantification. In: ECCV (2022)
- [56] Shen, J., Ruiz, A., Agudo, A., Moreno-Noguer, F.: Stochastic Neural Radiance Fields: Quantifying Uncertainty in Implicit 3D Representations. In: 3DV (2021)
- [57] Steiner, M., Köhler, T., Radl, L., Windisch, F., Schmalstieg, D., Steinberger, M.: AAA-Gaussians: Anti-Aliased and Artifact-Free 3D Gaussian Rendering. In: ICCV (2025)
- [58] Sun, C., Choe, J., Loop, C., Ma, W.C., Wang, Y.C.F.: Sparse Voxels Rasterization: Real-time High-fidelity Radiance Field Rendering. In: CVPR (2025)
- [59] Sünderhauf, N., Abou-Chakra, J., Miller, D.: Density-aware NeRF Ensembles: Quantifying Predictive Uncertainty in Neural Radiance Fields. In: ICRA (2022)
- [60] Tu, X., Radl, L., Steiner, M., Steinberger, M., Kerbl, B., de la Torre, F.: VRsplat: Fast and Robust Gaussian Splatting for Virtual Reality. Proc. ACM Comput. Graph. Interact. Tech. 8(1) (2025)
- [61] Wang, S., Leroy, V., Cabon, Y., Chidlovskii, B., Revaud, J.: DUSt3R: Geometric 3D Vision Made Easy. In: CVPR (2024)
- [62] Wang, Y., Wang, C., Gong, B., Xue, T.: Bilateral Guided Radiance Field Processing. ACM TOG 43(4), 1–13 (2024)
- [63] Wang, Z., Bovik, A., Sheikh, H., Simoncelli, E.: Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE TIP 13(4), 600–612 (2004)
- [64] Wu, J.Z., Zhang, Y., Turki, H., Ren, X., Gao, J., Shou, M.Z., Fidler, S., Gojcic, Z., Ling, H.: Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models. In: CVPR (2025)
- [65] Wu, Q., Martinez Esturo, J., Mirzaei, A., Moenne-Loccoz, N., Gojcic, Z.: 3DGUT: Enabling Distorted Cameras and Secondary Rays in Gaussian Splatting. In: CVPR (2025)
- [66] Xue, S., Dill, J., Mathur, P., Dellaert, F., Tsiotras, P., Xu, D.: Neural Visibility Field for Uncertainty-Driven Active Mapping. In: CVPR (2024)
- [67] Yang, Z., Gao, X., Sun, Y., Huang, Y., Lyu, X., Zhou, W., Jiao, S., Qi, X., Jin, X.: Spec-Gaussian: Anisotropic View-Dependent Appearance for 3D Gaussian Splatting. In: NeurIPS (2024)
- [68] Ye, Z., Li, W., Liu, S., Qiao, P., Dou, Y.: AbsGS: Recovering Fine Details in 3D Gaussian Splatting. In: ACM MM (2024)
- [69] Yeshwanth, C., Liu, Y.C., Nießner, M., Dai, A.: ScanNet++: A High-Fidelity Dataset of 3D Indoor Scenes. In: ICCV (2023)
- [70] Yu, M., Lu, T., Xu, L., Jiang, L., Xiangli, Y., Dai, B.: GSDF: 3DGS Meets SDF for Improved Rendering and Reconstruction. In: NeurIPS (2024)
- [71] Yu, Z., Chen, A., Huang, B., Sattler, T., Geiger, A.: Mip-Splatting: Alias-free 3D Gaussian Splatting. In: CVPR (2024)
- [72] Yu, Z., Sattler, T., Geiger, A.: Gaussian Opacity Fields: Efficient Adaptive Surface Reconstruction in Unbounded Scenes. ACM TOG 43(6) (2024)
- [73]
- [74] Zhang, D., Wang, C., Wang, W., Li, P., Qin, M., Wang, H.: Gaussian in the Wild: 3D Gaussian Splatting for Unconstrained Image Collections. In: ECCV (2024)
- [75] Zhang, W., Liu, Y.S., Han, Z.: Neural Signed Distance Function Inference through Splatting 3D Gaussians Pulled on Zero-Level Set. In: NeurIPS (2024)
- [76] Zhang, Z., Roussel, N., Muller, T., Zeltner, T., Nimier-David, M., Rousselle, F., Jakob, W.: Radiance Surfaces: Optimizing Surface Representations with a 5D Radiance Field Loss. In: SIGGRAPH (2025)
- [77] Zhang, Z., Huang, B., Jiang, H., Zhou, L., Xiang, X., Shen, S.: Quadratic Gaussian Splatting: High Quality Surface Reconstruction with Second-order Geometric Primitives. In: ICCV (2025)
- [78] Zhao, C., Wang, X., Zhang, T., Javed, S., Salzmann, M.: Self-Ensembling Gaussian Splatting for Few-Shot Novel View Synthesis. In: ICCV (2025)