Recognition: 2 theorem links
Neural Harmonic Textures for High-Quality Primitive-Based Neural Reconstruction
Pith reviewed 2026-05-13 22:18 UTC · model grok-4.3
The pith
Neural Harmonic Textures let primitive-based models capture high-frequency details by turning feature interpolation into a harmonic sum decoded in one pass.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Anchoring latent feature vectors on a virtual scaffold surrounding each primitive, interpolating the features at ray intersection points, and applying periodic activations converts alpha blending into a weighted sum of harmonic components that a small neural network decodes in one deferred pass.
What carries the argument
Neural Harmonic Textures: per-primitive virtual scaffold that holds latent features, periodic activation functions applied after interpolation, and a lightweight deferred decoder that reconstructs the final signal from the resulting harmonics.
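As a concrete, purely illustrative sketch of this machinery: the shapes, variable names, and the random linear map standing in for the small deferred decoder below are hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K primitives intersected along one ray, each carrying a
# D-dimensional latent feature interpolated from its virtual scaffold at the hit point.
K, D = 4, 8
interpolated = rng.normal(size=(K, D))   # features after scaffold interpolation
alpha = rng.uniform(0.1, 0.9, size=K)    # per-primitive opacities

# Periodic activation after interpolation: each channel becomes a harmonic
# component (sin/cos), so alpha blending sums harmonics.
harmonics = np.concatenate([np.sin(interpolated), np.cos(interpolated)], axis=-1)

# Front-to-back alpha blending weights w_i = alpha_i * prod_{j<i}(1 - alpha_j).
transmittance = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
weights = alpha * transmittance

# Alpha blending the activated features is a weighted sum of harmonic components.
blended = weights @ harmonics            # one (2D,) vector per pixel

# A small deferred decoder (a random linear map standing in for the MLP)
# maps the blended harmonics to RGB once per pixel, in a single pass.
W_dec = rng.normal(size=(2 * D, 3)) * 0.1
rgb = np.tanh(blended @ W_dec)
print(rgb.shape)  # (3,)
```

Because the periodic activation happens before blending, the alpha composite is literally a weighted sum of sines and cosines, which is what allows one small decoder invocation per pixel rather than one per primitive.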
If this is right
- The same representation integrates directly into pipelines such as 3DGUT, Triangle Splatting, and 2DGS.
- High-frequency surface detail becomes representable without increasing the number or size of primitives.
- The deferred single-pass decoder keeps inference cost low enough for real-time rendering.
- The same construction extends to 2D image fitting and semantic reconstruction tasks.
Where Pith is reading between the lines
- Periodic feature activations could be swapped for other basis functions to target specific frequency bands in future work.
- The scaffold idea might transfer to non-primitive representations when local high-frequency control is needed.
- Because the harmonics are computed per primitive, the approach could support efficient editing or animation of individual scene elements.
Load-bearing premise
That placing features on a virtual scaffold around each primitive and applying periodic activations will reliably extract high-frequency content from diverse scenes without artifacts or per-scene retuning.
What would settle it
Apply the method to a test scene containing fine high-frequency patterns such as printed text or thin fabric threads and measure whether visible artifacts appear or quality falls below that of a comparable neural-field baseline at equal frame rate.
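The proposed test boils down to a quality-at-matched-speed comparison. A minimal PSNR helper of the standard form (with hypothetical stand-in images, not the paper's benchmark data) could look like:

```python
import numpy as np

def psnr(reference: np.ndarray, rendered: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images with values in [0, peak]."""
    mse = np.mean((reference - rendered) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical comparison: two renders of the same high-frequency test scene,
# assumed to be captured at matched frame rates.
rng = np.random.default_rng(1)
reference = rng.uniform(size=(64, 64, 3))
baseline = np.clip(reference + rng.normal(scale=0.05, size=reference.shape), 0, 1)
candidate = np.clip(reference + rng.normal(scale=0.02, size=reference.shape), 0, 1)

print(psnr(reference, candidate) > psnr(reference, baseline))  # True: lower error, higher PSNR
```

Artifact checks (ringing near text edges, aliasing on thin threads) would still need visual inspection or a perceptual metric such as LPIPS alongside PSNR.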
Original abstract
Primitive-based methods such as 3D Gaussian Splatting have recently become the state-of-the-art for novel-view synthesis and related reconstruction tasks. Compared to neural fields, these representations are more flexible, adaptive, and scale better to large scenes. However, the limited expressivity of individual primitives makes modeling high-frequency detail challenging. We introduce Neural Harmonic Textures, a neural representation approach that anchors latent feature vectors on a virtual scaffold surrounding each primitive. These features are interpolated within the primitive at ray intersection points. Inspired by Fourier analysis, we apply periodic activations to the interpolated features, turning alpha blending into a weighted sum of harmonic components. The resulting signal is then decoded in a single deferred pass using a small neural network, significantly reducing computational cost. Neural Harmonic Textures yield state-of-the-art results in real-time novel view synthesis while bridging the gap between primitive- and neural-field-based reconstruction. Our method integrates seamlessly into existing primitive-based pipelines such as 3DGUT, Triangle Splatting, and 2DGS. We further demonstrate its generality with applications to 2D image fitting and semantic reconstruction.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes Neural Harmonic Textures to enhance primitive-based representations (e.g., 3D Gaussian Splatting) for novel view synthesis. Latent feature vectors are anchored on a virtual scaffold around each primitive, interpolated at ray-hit points, and processed with periodic activations to convert alpha blending into a weighted sum of harmonic components; the result is decoded by a small deferred neural network. The work claims state-of-the-art real-time performance, seamless integration into pipelines such as 3DGUT, Triangle Splatting, and 2DGS, and extensions to 2D image fitting and semantic reconstruction.
Significance. If the empirical claims hold, the approach would meaningfully advance the field by increasing the expressivity of efficient, scalable primitive representations toward neural-field quality without incurring high computational overhead, potentially offering a practical bridge between the two paradigms in real-time reconstruction tasks.
major comments (2)
- [Abstract] Abstract: the claim that the method 'yield[s] state-of-the-art results in real-time novel view synthesis' is presented without any quantitative metrics, tables, ablation studies, or implementation details, which is load-bearing for the central performance assertion and prevents verification of the bridging claim.
- [Method] Method description (periodic activations and scaffold interpolation): no derivation, frequency bounds, or conditioning analysis is supplied for the harmonic basis under typical primitive densities and interpolation, leaving the assumption that the scheme captures high-frequency content without ringing or per-scene tuning unexamined and therefore load-bearing for the expressivity claim.
minor comments (1)
- [Abstract] Abstract: the acronym '3DGUT' is used without expansion, which reduces clarity for readers outside the immediate sub-area.
Simulated Author's Rebuttal
We thank the referee for their detailed and constructive comments on our manuscript. We address each of the major comments below and have prepared revisions to the manuscript to incorporate the suggested improvements.
Point-by-point responses
Referee: [Abstract] Abstract: the claim that the method 'yield[s] state-of-the-art results in real-time novel view synthesis' is presented without any quantitative metrics, tables, ablation studies, or implementation details, which is load-bearing for the central performance assertion and prevents verification of the bridging claim.
Authors: We agree that the abstract would be strengthened by including quantitative support for the state-of-the-art claim. In the revised manuscript, we will update the abstract to reference specific metrics from our experiments, such as improved PSNR and real-time FPS compared to baselines, while maintaining its conciseness. This will directly tie the claim to the results presented in the paper. Revision: yes.
Referee: [Method] Method description (periodic activations and scaffold interpolation): no derivation, frequency bounds, or conditioning analysis is supplied for the harmonic basis under typical primitive densities and interpolation, leaving the assumption that the scheme captures high-frequency content without ringing or per-scene tuning unexamined and therefore load-bearing for the expressivity claim.
Authors: The referee correctly identifies that the current method section lacks a formal derivation and analysis of the harmonic components. We will revise the manuscript to include a mathematical derivation of the periodic activations inspired by Fourier series, specify frequency bounds based on typical primitive densities, and provide a conditioning analysis. Additionally, we will add discussion and experiments addressing potential ringing artifacts and the lack of need for per-scene tuning, thereby substantiating the expressivity claims. Revision: yes.
Circularity Check
No circularity: forward proposal of scaffolded periodic features with no self-referential reduction
Full rationale
The provided abstract and description introduce Neural Harmonic Textures by anchoring latent vectors on virtual scaffolds around primitives, interpolating at ray hits, applying periodic activations to produce harmonic sums, and decoding via a small deferred network. No equations, derivations, or uniqueness theorems are shown that reduce the claimed expressivity or SOTA results to fitted parameters, self-citations, or inputs by construction. The method is presented as a design choice integrating into existing pipelines (3DGUT, Triangle Splatting, 2DGS) without load-bearing self-referential steps. This matches the reader's assessment of score 2.0 as a non-circular forward proposal.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
- IndisputableMonolith/Foundation/DimensionForcing.lean: 8-tick period from 2^D = 8 (echoes?)
ECHOES: this paper passage has the same mathematical shape or conceptual pattern as the Recognition theorem, but is not a direct formal dependency.
"Inspired by Fourier analysis, we apply periodic activations to the interpolated features, turning alpha blending into a weighted sum of harmonic components... sin(f_i), cos(f_i)"
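Read literally, that passage amounts to the following identity (a reconstruction for illustration; the symbols f_i, w_i, and phi are our notation, not necessarily the paper's):

```latex
% Front-to-back alpha blending with weights w_i = \alpha_i \prod_{j<i} (1 - \alpha_j),
% applied to periodically activated interpolated features f_i:
C = \sum_i w_i \, \phi(f_i),
\qquad
\phi(f_i) = \bigl(\sin(f_i),\ \cos(f_i)\bigr),
```

so the blended signal C is a weighted sum of sine and cosine harmonics whose amplitudes are modulated by the per-primitive opacities, which is the Fourier-series analogy the passage invokes.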
- IndisputableMonolith/Cost/FunctionalEquation.lean: washburn_uniqueness_aczel (J-cost uniqueness) (echoes?)
ECHOES: this paper passage has the same mathematical shape or conceptual pattern as the Recognition theorem, but is not a direct formal dependency.
"the activated features act as frequency components, while the primitive opacity modulates their amplitude"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] Barron, J.T., Mildenhall, B., Verbin, D., Srinivasan, P.P., Hedman, P.: Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022)
- [2] Barron, J.T., Mildenhall, B., Verbin, D., Srinivasan, P.P., Hedman, P.: Zip-NeRF: Anti-aliased grid-based neural radiance fields. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (2023)
- [3] Chao, B., Tseng, H.Y., Porzi, L., Gao, C., Li, T., Li, Q., Saraf, A., Huang, J.B., Kopf, J., Wetzstein, G., Kim, C.: Textured gaussians for enhanced 3d scene appearance modeling. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2025)
- [4] Chen, A., Xu, Z., Geiger, A., Yu, J., Su, H.: TensoRF: Tensorial radiance fields. In: European Conference on Computer Vision. pp. 333–350. Springer (2022)
- [5] Chen, Y., Chen, Z., Zhang, C., Wang, F., Yang, X., Wang, Y., Cai, Z., Yang, L., Liu, H., Lin, G.: GaussianEditor: Swift and controllable 3d editing with gaussian splatting. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 21476–21485 (2024)
- [6] Chen, Z., Funkhouser, T., Hedman, P., Tagliasacchi, A.: MobileNeRF: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures. In: The Conference on Computer Vision and Pattern Recognition (CVPR) (2023)
- [8] Condor, J., Speierer, S., Bode, L., Bozic, A., Green, S., Didyk, P., Jarabo, A.: Don't splat your gaussians: Volumetric ray-traced primitives for modeling and rendering scattering and emissive media. ACM Transactions on Graphics 44(1) (2025)
- [9] Di Sario, F., Rebain, D., Verbin, D., Grangetto, M., Tagliasacchi, A.: Spherical voronoi: Directional appearance as a differentiable partition of the sphere. arXiv preprint arXiv:2512.14180 (2025)
- [10] Duckworth, D., Hedman, P., Reiser, C., Zhizhin, P., Thibert, J.F., Lučić, M., Szeliski, R., Barron, J.T.: SMERF: Streamable memory efficient radiance fields for real-time large-scene exploration (2023)
- [11] Fridovich-Keil, S., Meanti, G., Warburg, F.R., Recht, B., Kanazawa, A.: K-planes: Explicit radiance fields in space, time, and appearance. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 12479–12488 (2023)
- [12] Gadirov, H., Wu, Q., Bauer, D., Ma, K.L., Roerdink, J.B., Frey, S.: HyperFLINT: Hypernetwork-based flow estimation and temporal interpolation for scientific ensemble visualization. Computer Graphics Forum 44(3), e70134 (2025). https://doi.org/10.1111/cgf.70134
- [13] Govindarajan, S., Rebain, D., Yi, K.M., Tagliasacchi, A.: Radiant foam: Real-time differentiable ray tracing. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 4135–4145 (2025)
- [14] Govindarajan, S., Sambugaro, Z., Shabanov, A., Takikawa, T., Rebain, D., Sun, W., Conci, N., Yi, K.M., Tagliasacchi, A.: Lagrangian hashing for compressed neural field representations. In: European Conference on Computer Vision. pp. 183–199. Springer (2024)
- [15] Hahlbohm, F., Franke, L., Kappel, M., Castillo, S., Eisemann, M., Stamminger, M., Magnor, M.: INPC: Implicit Neural Point Clouds for Radiance Field Rendering. In: 2025 International Conference on 3D Vision (3DV). pp. 168–178. IEEE Computer Society (2025). https://doi.org/10.1109/3DV66043.2025.00021
- [16] Han, K., Xiang, W., Yu, L.: Volume feature rendering for fast neural radiance field reconstruction. In: Advances in Neural Information Processing Systems. vol. 36, pp. 65416–65427. Curran Associates, Inc. (2023)
- [17] Harris, D., Harris, S.L.: Digital design and computer architecture. Morgan Kaufmann (2010)
- [18] Hedman, P., Philip, J., Price, T., Frahm, J.M., Drettakis, G., Brostow, G.: Deep blending for free-viewpoint image-based rendering. ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia) 37(6), 257:1–257:15 (2018)
- [19] Hedman, P., Srinivasan, P.P., Mildenhall, B., Barron, J.T., Debevec, P.: Baking neural radiance fields for real-time view synthesis. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (2021)
- [20] Held, J., Vandeghen, R., Deliege, A., Hamdi, A., Rebain, D., Giancola, S., Cioppa, A., Vedaldi, A., Ghanem, B., Tagliasacchi, A., et al.: Triangle splatting for real-time radiance field rendering. In: Thirteenth International Conference on 3D Vision (3DV) (2025)
- [21] Huang, B., Yu, Z., Chen, A., Geiger, A., Gao, S.: 2D Gaussian splatting for geometrically accurate radiance fields. In: ACM SIGGRAPH 2024 Conference Papers (2024). https://doi.org/10.1145/3641519.3657428
- [22] Huang, Z., Gong, M.: Textured-GS: Gaussian splatting with spatially defined color and opacity. arXiv preprint arXiv:2407.09733 (2024)
- [23] Joint Photographic Experts Group: JPEG XL image coding system. https://jpeg.org/jpegxl/ (2024), accessed 2024-05-24
- [24] Kerbl, B., Kopanas, G., Leimkühler, T., Drettakis, G.: 3D gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 42(4) (2023)
- [25] Kheradmand, S., Rebain, D., Sharma, G., Sun, W., Tseng, Y.C., Isack, H., Kar, A., Tagliasacchi, A., Yi, K.M.: 3D gaussian splatting as Markov chain Monte Carlo. In: Advances in Neural Information Processing Systems (NeurIPS) (2024)
- [26] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization (2017)
- [27] Knapitsch, A., Park, J., Zhou, Q.Y., Koltun, V.: Tanks and Temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 36(4) (2017)
- [28] Kulhanek, J., Rakotosaona, M.J., Manhardt, F., Tsalicoglou, C., Niemeyer, M., Sattler, T., Peng, S., Tombari, F.: LODGE: Level-of-detail large-scale gaussian splatting with efficient rendering. arXiv preprint arXiv:2505.23158 (2025)
- [29] Li, B., Weinberger, K.Q., Belongie, S., Koltun, V., Ranftl, R.: Language-driven semantic segmentation. arXiv preprint arXiv:2201.03546 (2022)
- [30] Li, Z., Müller, T., Evans, A., Taylor, R.H., Unberath, M., Liu, M.Y., Lin, C.H.: Neuralangelo: High-fidelity neural surface reconstruction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 8456–8465 (2023)
- [31] Liang, H., Ren, J., Mirzaei, A., Torralba, A., Liu, Z., Gilitschenski, I., Fidler, S., Oztireli, C., Ling, H., Gojcic, Z., Huang, J.: Feed-forward bullet-time reconstruction of dynamic scenes from monocular videos. arXiv preprint arXiv:2412.03526 (2024)
- [32] Liu, L., Gu, J., Lin, K.Z., Chua, T.S., Theobalt, C.: Neural sparse voxel fields. In: Advances in Neural Information Processing Systems (NeurIPS) (2020)
- [33] Liu, R., Sun, D., Chen, M., Wang, Y., Feng, A.: Deformable beta splatting. In: Proceedings of SIGGRAPH Conference Papers (2025)
- [34] Lombardi, S., Simon, T., Saragih, J., Schwartz, G., Lehrmann, A., Sheikh, Y.: Neural volumes: Learning dynamic renderable volumes from images. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 38(4) (2019)
- [35] Lombardi, S., Simon, T., Schwartz, G., Zollhoefer, M., Sheikh, Y., Saragih, J.: Mixture of volumetric primitives for efficient neural rendering. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 40(4) (2021)
- [36] Luo, A., Du, Y., Tarr, M., Tenenbaum, J., Torralba, A., Gan, C.: Learning neural acoustic fields. Advances in Neural Information Processing Systems 35, 3165–3177 (2022)
- [39] Martel, J.N.P., Lindell, D.B., Lin, C.Z., Chan, E.R., Monteiro, M., Wetzstein, G.: ACORN: Adaptive coordinate networks for neural scene representation. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 40(4) (2021). https://doi.org/10.1145/3450626.3459785
- [40] Micikevicius, P., Narang, S., Alben, J., Diamos, G., Elsen, E., Garcia, D., Ginsburg, B., Houston, M., Kuchaiev, O., Venkatesh, G., et al.: Mixed precision training. arXiv preprint arXiv:1710.03740 (2017)
- [41] Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.: NeRF: Representing scenes as neural radiance fields for view synthesis. In: Proceedings of ECCV (2020)
- [42] Moenne-Loccoz, N., Mirzaei, A., Perel, O., de Lutio, R., Esturo, J.M., State, G., Fidler, S., Sharp, N., Gojcic, Z.: 3D gaussian ray tracing: Fast tracing of particle scenes. ACM Transactions on Graphics and SIGGRAPH Asia (2024)
- [43] Mujkanovic, F., Nsampi, N.E., Theobalt, C., Seidel, H.P., Leimkühler, T.: Neural gaussian scale-space fields. ACM Transactions on Graphics 43(4) (Jul 2024). https://doi.org/10.1145/3658163
- [45] Müller, T., McWilliams, B., Rousselle, F., Gross, M., Novák, J.: Neural importance sampling. ACM Transactions on Graphics (TOG) 38(5), 1–19 (2019)
- [46] Park, J.J., Florence, P., Straub, J., Newcombe, R., Lovegrove, S.: DeepSDF: Learning continuous signed distance functions for shape representation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 165–174 (2019)
- [47] Pumarola, A., Corona, E., Pons-Moll, G., Moreno-Noguer, F.: D-NeRF: Neural radiance fields for dynamic scenes. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 10318–10327 (2021)
- [48] Reiser, C., Peng, S., Liao, Y., Geiger, A.: KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs. In: Proceedings of ICCV (2021)
- [49] Saragadam, V., LeJeune, D., Tan, J., Balakrishnan, G., Veeraraghavan, A., Baraniuk, R.G.: WIRE: Wavelet implicit neural representations. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 18507–18516 (2023)
- [50] Sitzmann, V., Martel, J., Bergman, A., Lindell, D., Wetzstein, G.: Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems 33, 7462–7473 (2020)
- [52] Su, R., Dong, H., Jin, H., Chen, Y., Wang, G., Li, S.: Vertex features for neural global illumination. In: Proceedings of the SIGGRAPH Asia 2025 Conference Papers. pp. 1–11 (2025)
- [53] Sun, C., Sun, M., Chen, H.: Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In: CVPR (2022)
- [54] Svitov, D., Morerio, P., Agapito, L., Del Bue, A.: Billboard splatting (BBSplat): Learnable textured primitives for novel view synthesis. arXiv preprint arXiv:2411.08508 (2024)
- [55] Takikawa, T., Litalien, J., Yin, K., Kreis, K., Loop, C., Nowrouzezahrai, D., Jacobson, A., McGuire, M., Fidler, S.: Neural geometric level of detail: Real-time rendering with implicit 3d shapes. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 11358–11367 (2021)
- [56] Takikawa, T., Müller, T., Nimier-David, M., Evans, A., Fidler, S., Jacobson, A., Keller, A.: Compact neural graphics primitives with learned hash probing. In: SIGGRAPH Asia 2023 Conference Papers. Association for Computing Machinery (2023). https://doi.org/10.1145/3610548.3618167
- [57] Tancik, M., Srinivasan, P.P., Mildenhall, B., Fridovich-Keil, S., Raghavan, N., Singhal, U., Ramamoorthi, R., Barron, J.T., Ng, R.: Fourier features let networks learn high frequency functions in low dimensional domains. NeurIPS (2020)
- [58] Thies, J., Zollhöfer, M., Nießner, M.: Deferred neural rendering: Image synthesis using neural textures. ACM Transactions on Graphics (Proceedings of SIGGRAPH) 38(4) (2019)
- [59] Wang, J., Chen, M., Karaev, N., Vedaldi, A., Rupprecht, C., Novotny, D.: VGGT: Visual geometry grounded transformer. In: Proceedings of the Computer Vision and Pattern Recognition Conference. pp. 5294–5306 (2025)
- [60] Wang, P., Liu, L., Liu, Y., Theobalt, C., Komura, T., Wang, W.: NeuS: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. arXiv preprint arXiv:2106.10689 (2021)
- [61] Wang, Y., Zhou, J., Zhu, H., Chang, W., Zhou, Y., Li, Z., Chen, J., Pang, J., Shen, C., He, T.: π³: Permutation-equivariant visual geometry learning (2025). https://arxiv.org/abs/2507.13347
- [62] Wu, G., Yi, T., Fang, J., Xie, L., Zhang, X., Wei, W., Liu, W., Tian, Q., Wang, X.: 4D gaussian splatting for real-time dynamic scene rendering. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024)
- [63] Wu, Q., Bauer, D., Doyle, M.J., Ma, K.L.: Interactive volume visualization via multi-resolution hash encoding based neural representation. IEEE Transactions on Visualization and Computer Graphics pp. 1–14 (2023). https://doi.org/10.1109/TVCG.2023.3293121
- [64] Wu, Q., Insley, J.A., Mateevitsi, V.A., Rizzi, S., Papka, M.E., Ma, K.L.: Distributed neural representation for reactive in situ visualization. IEEE Transactions on Visualization and Computer Graphics 31(9), 5199–5214 (2025). https://doi.org/10.1109/TVCG.2024.3432710
- [65] Wu, Q., Martinez Esturo, J., Mirzaei, A., Moenne-Loccoz, N., Gojcic, Z.: 3DGUT: Enabling distorted cameras and secondary rays in gaussian splatting. Conference on Computer Vision and Pattern Recognition (CVPR) (2025)
- [66] Wurster, S., Zhang, R., Zheng, C.: Gabor splatting for high-quality gigapixel image representations. In: ACM SIGGRAPH 2024 Posters. Association for Computing Machinery (2024). https://doi.org/10.1145/3641234.3671081
- [67] Xie, Y., Takikawa, T., Saito, S., Litany, O., Yan, S., Khan, N., Tombari, F., Tompkin, J., Sitzmann, V., Sridhar, S.: Neural fields in visual computing and beyond. In: Computer Graphics Forum. vol. 41, pp. 641–676. Wiley Online Library (2022)
- [68] Xu, T.X., Hu, W., Lai, Y.K., Shan, Y., Zhang, S.H.: Texture-GS: Disentangling the geometry and texture for 3d gaussian splatting editing. In: European Conference on Computer Vision. pp. 37–53. Springer (2024)
- [69] Yariv, L., Hedman, P., Reiser, C., Verbin, D., Srinivasan, P.P., Szeliski, R., Barron, J.T., Mildenhall, B.: BakedSDF: Meshing neural SDFs for real-time view synthesis. In: ACM SIGGRAPH 2023 Conference Proceedings (2023)
- [70] Ye, V., Li, R., Kerr, J., Turkulainen, M., Yi, B., Pan, Z., Seiskari, O., Ye, J., Hu, J., Tancik, M., Kanazawa, A.: gsplat: An open-source library for gaussian splatting. Journal of Machine Learning Research 26(34), 1–17 (2025)
- [71] Ye, V., Li, R., Kerr, J., Turkulainen, M., Yi, B., Pan, Z., Seiskari, O., Ye, J., Hu, J., Tancik, M., Kanazawa, A.: gsplat: An open-source library for gaussian splatting. Journal of Machine Learning Research 26(34), 1–17 (2025)
- [72] Zhang, K., Bi, S., Tan, H., Xiangli, Y., Zhao, N., Sunkavalli, K., Xu, Z.: GS-LRM: Large reconstruction model for 3d gaussian splatting. European Conference on Computer Vision (2024)
- [73] Zhang, X., Chen, A., Xiong, J., Dai, P., Shen, Y., Xu, W.: Neural shell texture splatting: More details and fewer primitives. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 25229–25238 (2025)
- [74] Zhang, X., Chen, A., Xiong, J., Dai, P., Shen, Y., Xu, W.: Neural shell texture splatting: More details and fewer primitives. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (2025)
- [75] Zhou, J., Huang, Y., Dai, W., Zou, J., Zheng, Z., Kan, N., Li, C., Xiong, H.: 3DGabSplat: 3D Gabor splatting for frequency-adaptive radiance field rendering. arXiv preprint arXiv:2508.05343 (2025)
- [76] Zhou, S., Chang, H., Jiang, S., Fan, Z., Zhu, Z., Xu, D., Chari, P., You, S., Wang, Z., Kadambi, A.: Feature 3DGS: Supercharging 3d gaussian splatting to enable distilled feature fields. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 21676–21685 (2024)
- [77] Zhou, X., Nguyen, B.H., Magne, L., Golyanik, V., Leimkühler, T., Theobalt, C.: Splat the net: Radiance fields with splattable neural primitives. arXiv preprint arXiv:2510.08491 (2025)