pith. machine review for the scientific record.

arxiv: 2604.15862 · v1 · submitted 2026-04-17 · 💻 cs.CV

Recognition: unknown

Splats in Splats++: Robust and Generalizable 3D Gaussian Splatting Steganography


Pith reviewed 2026-05-10 08:54 UTC · model grok-4.3

classification 💻 cs.CV
keywords 3D Gaussian Splatting · steganography · message embedding · spherical harmonics · opacity mapping · 3D reconstruction · robustness to attacks · rendering efficiency

The pith

Splats in Splats++ embeds high-capacity 3D and 4D messages directly into native 3D Gaussian Splatting representations using graded spherical harmonics encryption and opacity coupling.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces a steganography framework called Splats in Splats++ that hides content inside 3D Gaussian Splatting scenes without breaking the standard rendering process. It grades the encryption of spherical harmonics coefficients according to their frequency importance so that changes stay invisible to viewers. Hash-grid guided opacity mapping together with a gradient-gated consistency loss ties the original and hidden attributes together in space, reducing leakage under geometric changes. Experiments report higher recovered message quality, quicker rendering, and stronger resistance to targeted 3D attacks than earlier approaches, plus straightforward extension to 2D images and 4D scenes.

Core claim

By grounding message embedding in the frequency distribution of spherical harmonics for importance-graded encryption and by enforcing spatial-attribute coupling through hash-grid guided opacity mapping and gradient-gated opacity consistency loss, high-capacity invisible steganography becomes possible inside the explicit 3DGS representation while preserving visual fidelity and rendering speed.

What carries the argument

Importance-graded encryption of spherical harmonics coefficients together with hash-grid guided opacity mapping and gradient-gated opacity consistency loss, which together create a continuous, attack-resilient latent manifold linking the visible and hidden scenes.
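The XOR-based encryption step the abstract describes — selectively encrypting SH coefficients by frequency order — can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: the protected-order threshold, the bit width, and the key-derived mask are all hypothetical, and whether the low orders are the ones left intact or the ones encrypted is a design choice the paper's figure 4 leaves open; the sketch leaves them intact purely for illustration.

```python
import numpy as np

def graded_xor_encrypt(sh, key, protected_order=1, mantissa_bits=8):
    """Illustrative importance-graded SH encryption (not the paper's code).

    sh: (N, 16, 3) float32 SH coefficients for degree-3 Gaussians.
    Coefficients up to `protected_order` (low-frequency, visually dominant)
    are left untouched; higher-order coefficients get a key-derived XOR mask
    applied to the low mantissa bits of their float32 encoding. XOR is an
    involution, so calling this again with the same key decrypts.
    """
    rng = np.random.default_rng(key)
    out = np.asarray(sh, dtype=np.float32).copy()
    start = (protected_order + 1) ** 2      # first coefficient above order l is index (l+1)^2
    sub = np.ascontiguousarray(out[:, start:, :])
    bits = sub.view(np.uint32)              # reinterpret floats bitwise
    mask = rng.integers(0, 1 << mantissa_bits, size=bits.shape, dtype=np.uint32)
    bits ^= mask                            # perturb only low mantissa bits
    out[:, start:, :] = sub
    return out
```

Decryption is `graded_xor_encrypt(encrypted, key)` with the same key; restricting the mask to low mantissa bits keeps the perturbation numerically small, which is one way imperceptibility could be traded against payload.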

If this is right

  • Recovered message quality reaches up to 6.28 dB higher than prior steganography methods for 3DGS.
  • Rendering speed increases by a factor of three while the hidden content remains extractable.
  • The embedding survives aggressive 3D structural attacks that alter Gaussian positions and attributes.
  • The same pipeline applies without modification to 2D image embedding and 4D dynamic scene steganography.
  • The framework supports diverse downstream tasks that use 3DGS assets.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the spatial coupling holds under real-world capture noise, the technique could protect 3D assets captured from phones or drones without extra post-processing steps.
  • The graded encryption idea might transfer to other explicit 3D formats such as point clouds or voxel grids where frequency-like attributes exist.
  • Widespread adoption would let creators watermark and trace 3D models at the representation level rather than relying on separate encryption layers.
  • The attack resilience suggests the method could be tested on scenes undergoing common 3D editing operations like cropping or decimation to measure practical limits.

Load-bearing premise

The frequency distribution of spherical harmonics permits selective encryption that stays imperceptible and does not reduce the original scene's detail, while the opacity mapping and consistency loss can couple attributes spatially without introducing new geometric or rendering artifacts.
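The first half of this premise — that SH energy concentrates in low orders, so the high orders can be perturbed cheaply — is easy to check numerically. The smooth radiance lobe `exp(cos θ)` and the Monte Carlo projection below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Project a smooth view-dependent lobe f(theta) = exp(cos theta) onto the
# zonal (m = 0) spherical harmonics Y_l0 up to order 3 by Monte Carlo
# integration over the sphere, and compare the per-order energy |c_l|^2.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200_000)    # cos(theta), uniform on the sphere
f = np.exp(x)

legendre = [np.ones_like(x), x, 0.5 * (3 * x**2 - 1), 0.5 * (5 * x**3 - 3 * x)]
energies = []
for l, P in enumerate(legendre):
    Y = np.sqrt((2 * l + 1) / (4 * np.pi)) * P   # zonal harmonic Y_l0
    c = 4 * np.pi * np.mean(f * Y)               # c_l ~ integral of f * Y_l0 over the sphere
    energies.append(c**2)
# energies decays steeply with order l, so perturbing high-order
# coefficients changes little of the rendered appearance
```

The second half of the premise — that opacity mapping introduces no new artifacts — is empirical and cannot be settled by a toy calculation like this.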

What would settle it

Apply the embedding to a standard 3DGS scene, then render it from multiple viewpoints and compare pixel values to the original renders; any systematic visible difference, or inability to recover the hidden message after removing or repositioning a substantial fraction of the Gaussians, would falsify the central claim.
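The pruning half of that test can be mocked up without a renderer. The toy below replicates each message bit across many Gaussians and recovers it by majority vote after random pruning — a stand-in for the paper's actual embedding, chosen only to make operational what "recover the hidden message after removing a substantial fraction of the Gaussians" demands.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_gaussians = 64, 50_000

# embed: each Gaussian carries a copy of one message bit (toy redundancy)
bits = rng.integers(0, 2, size=n_bits)
owners = rng.integers(0, n_bits, size=n_gaussians)  # which bit each Gaussian holds
payload = bits[owners]

# attack: prune ~80% of the Gaussians at random
keep = rng.random(n_gaussians) > 0.8

# recover: majority vote among surviving carriers of each bit
votes = np.zeros(n_bits)
counts = np.zeros(n_bits)
np.add.at(votes, owners[keep], payload[keep])
np.add.at(counts, owners[keep], 1)
recovered = (votes * 2 > counts).astype(int)
```

A real falsification run would replace the repetition code with the paper's embedding and add the rendering-fidelity comparison; the point here is only that attack resilience is a measurable, pass/fail property, not a matter of interpretation.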

Figures

Figures reproduced from arXiv: 2604.15862 by Gaolei Li, Jianhua Li, Lei Ma, Liwen Hu, Shengbo Chen, Tiejun Huang, Tong Hu, Wenkai Huang, Xitong Ling, Yang Li, Yijia Guo, Yuxin Hong.

Figure 1
Figure 1: Left: GS-Hider and our method’s rendering pipeline. GS-Hider [17] employs a coupled feature field and neural decoders to render the original and hidden scenes simultaneously, affecting user’s conventional usage. We retain the vanilla 3DGS pipeline to preserve user experience. Right: Comparison of different 3DGS steganography methods. Existing works all have shortcomings in terms of robustness, fidelity, ef… view at source ↗
Figure 2
Figure 2: Top: Steganography Pipeline. We utilize the original and hidden views to train two sets of SH coefficients and opacity, while ensuring that both sets share the same Gaussian primitive locations. Training is supervised by rendering losses from the original and hidden scenes, supplemented by an opacity gradient-based gating mechanism that facilitates the seamless fusion of both scene components. Following th… view at source ↗
Figure 3
Figure 3: (a) Decomposition of scene appearance. A standard BRDF can be decomposed into diffuse and specular components, with the diffuse component being invariant to the viewing direction, whereas the specular component exhibits pronounced view-dependent behavior. (b) Importance of Spherical Harmonics Order. We visualize the spherical harmonics (SH) decomposition of the outgoing radiance distribution. Low-order … view at source ↗
Figure 4
Figure 4: Overview of the proposed dual-stream steganography framework. The architecture consists of two key modules: Top: Importance-graded SH Encryption, which embeds hidden bit-streams into SH coefficients using an importance-graded ranking strategy. By selectively encrypting SH coefficients across frequency orders via bit-wise XOR operations, it prioritizes the protection of critical visual components. Bottom: H… view at source ↗
Figure 5
Figure 5: Qualitative comparisons. The difference maps visualize the pixel-wise residuals between GT and renderings with 10×… view at source ↗
Figure 6
Figure 6: Visual results under various pruning attacks. Compared with SOTA steganographic frameworks, our method exhibits remarkable robustness against both standard Opacity Pruning and the more aggressive GSPure attack. Our framework preserves sharp textures and accurate color representation, whereas competing methods show significant fidelity loss or erroneous rendering. the vanilla 3DGS’s rendering pipeline (Pipe… view at source ↗
Figure 7
Figure 7: Comparison of rendering results. GS-hider and… view at source ↗
Figure 8
Figure 8: 4D Steganography Visualization on HyperNeRF (4D Dynamic Scene) and Mip-NeRF 360 (3D Hidden Watermark O… view at source ↗
Figure 9
Figure 9: Visualization results under different noise levels.… view at source ↗
Figure 10
Figure 10: Visualization results of image embedding. We… view at source ↗
Figure 11
Figure 11: Visualization results of downstream tasks. Our… view at source ↗
original abstract

3D Gaussian Splatting (3DGS) has recently redefined the paradigm of 3D reconstruction, striking an unprecedented balance between visual fidelity and computational efficiency. As its adoption proliferates, safeguarding the copyright of explicit 3DGS assets has become paramount. However, existing invisible message embedding frameworks struggle to reconcile secure and high-capacity data embedding with intrinsic asset utility, often disrupting the native rendering pipeline or exhibiting vulnerability to structural perturbations. In this work, we present Splats in Splats++, a unified and pipeline-agnostic steganography framework that seamlessly embeds high-capacity 3D/4D content directly within the native 3DGS representation. Grounded in a principled analysis of the frequency distribution of Spherical Harmonics (SH), we propose an importance-graded SH coefficient encryption scheme that achieves imperceptible embedding without compromising the original expressive power. To fundamentally resolve the geometric ambiguities that lead to message leakage, we introduce a Hash-Grid Guided Opacity Mapping mechanism. Coupled with a novel Gradient-Gated Opacity Consistency Loss, our formulation enforces a stringent spatial-attribute coupling between the original and hidden scenes, effectively projecting the discrete attribute mapping into a continuous, attack-resilient latent manifold. Extensive experiments demonstrate that our method substantially outperforms existing approaches, achieving up to 6.28 dB higher message fidelity, 3× faster rendering, and exceptional robustness against aggressive 3D-targeted structural attacks (e.g., GSPure). Furthermore, our framework exhibits remarkable versatility, generalizing seamlessly to 2D image embedding, 4D dynamic scene steganography, and diverse downstream tasks.

Editorial analysis

A structured set of objections, weighed in public.

Referee report, circularity audit, and an axiom and free-parameter ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper presents Splats in Splats++, a steganography framework for 3D Gaussian Splatting that embeds high-capacity 3D/4D messages directly into the native representation. It uses an importance-graded encryption scheme on spherical harmonics coefficients derived from frequency analysis, combined with a Hash-Grid Guided Opacity Mapping mechanism and a Gradient-Gated Opacity Consistency Loss to enforce spatial-attribute coupling. The method claims to achieve up to 6.28 dB higher message fidelity, 3× faster rendering, and strong robustness to structural attacks such as GSPure, while generalizing to 2D images and 4D scenes.

Significance. If the robustness and imperceptibility claims are verified, the work would advance copyright protection for explicit 3DGS assets by providing a pipeline-agnostic embedding approach that preserves rendering efficiency and scene fidelity better than prior methods. The empirical outperformance and generalization to dynamic scenes represent practical strengths, though the absence of error bars and ablations limits immediate impact assessment.

major comments (3)
  1. [Abstract and §4] (method description): The headline robustness claim against GSPure and other structural attacks rests on the Hash-Grid Guided Opacity Mapping and the Gradient-Gated Opacity Consistency Loss projecting discrete mappings into an attack-resilient manifold. Yet no derivation, proof sketch, or post-perturbation analysis shows why hash-grid lookups and gradient gates remain intact after Gaussians are pruned or altered.
  2. [Experiments] (quantitative tables): Results report up to 6.28 dB gains in message fidelity and a 3× rendering speedup without error bars, confidence intervals, or statistical tests. Baseline implementations are not specified in enough detail to reproduce, and no ablation covers the free parameters (SH importance grading thresholds and loss weighting coefficients).
  3. [§3.1] (SH frequency analysis): The assumption that the frequency distribution of spherical harmonics permits an importance-graded scheme that remains imperceptible and preserves expressive power is stated but not backed by quantitative analysis of how grading thresholds affect scene PSNR or perceptual metrics as scene complexity varies.
minor comments (2)
  1. [§4.3] Notation for the Gradient-Gated Opacity Consistency Loss could be clarified with an explicit equation showing the gating function and its dependence on hash-grid features.
  2. [Figures] Figure captions for robustness visualizations should include the exact attack parameters (e.g., pruning ratio in GSPure) to allow direct comparison.
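Minor comment 1 asks for an explicit equation. The paper's definition is not reproduced here; purely to fix notation for the discussion, one plausible form of a gradient-gated opacity consistency loss — every symbol a hypothetical placeholder — would be

```latex
\mathcal{L}_{\mathrm{oc}}
  = \sum_{i} g_i \left( \sigma_i^{\mathrm{hid}} - \phi_\theta\!\big(h(\mathbf{x}_i)\big) \right)^2,
\qquad
g_i = \mathbb{1}\!\left[ \big\lVert \nabla_{\sigma_i} \mathcal{L}_{\mathrm{render}} \big\rVert > \tau \right],
```

where \sigma_i^{hid} is the hidden-scene opacity of Gaussian i, h(x_i) a hash-grid encoding of its center, \phi_\theta a small decoder, and the gate g_i admits the consistency term only where the rendering loss is still sensitive to that Gaussian's opacity. A rebuttal could replace this placeholder with the paper's actual gating function.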

Circularity Check

0 steps flagged

No significant circularity; novel mechanisms introduced independently of reported metrics

full rationale

The paper proposes new algorithmic components—an importance-graded SH coefficient encryption scheme derived from frequency distribution analysis, Hash-Grid Guided Opacity Mapping, and Gradient-Gated Opacity Consistency Loss—to enforce spatial-attribute coupling and attack resilience. These are presented as original constructions to resolve geometric ambiguities, not quantities defined in terms of the target message fidelity (6.28 dB) or robustness numbers. No equations reduce the performance claims to fitted inputs by construction, and the provided text contains no load-bearing self-citations or uniqueness theorems imported from prior author work. Experimental outperformance is reported as empirical validation rather than a tautological prediction, rendering the derivation chain self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 2 invented entities

The framework adds two algorithmic inventions and a small number of tunable thresholds rather than relying on many free parameters or deep unstated axioms beyond standard 3DGS rendering assumptions.

free parameters (2)
  • SH importance grading thresholds
    Hand-chosen cutoffs that decide which spherical harmonic coefficients receive encryption based on frequency distribution.
  • loss weighting coefficients in Gradient-Gated Opacity Consistency Loss
    Scalars that balance the consistency term against rendering fidelity during optimization.
axioms (2)
  • standard math Spherical harmonics provide a frequency-ordered basis for view-dependent color in 3DGS
    Invoked when analyzing frequency distribution to justify importance grading.
  • domain assumption Opacity values can be remapped via hash grid without breaking the differentiable rendering pipeline
    Central to the claim that geometric ambiguities are resolved.
invented entities (2)
  • Hash-Grid Guided Opacity Mapping no independent evidence
    purpose: Projects discrete attribute mappings into a continuous latent manifold to prevent message leakage under structural perturbations
    New mechanism introduced to couple original and hidden scenes spatially.
  • Gradient-Gated Opacity Consistency Loss no independent evidence
    purpose: Enforces stringent spatial-attribute coupling during training
    Novel loss term proposed to make embedding attack-resilient.
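The "Hash-Grid Guided Opacity Mapping" entry names a mechanism the text describes only at a high level. A minimal single-level lookup in the style of Instant-NGP's spatial hash [43] — illustrative only, with hypothetical table size, resolution, and the primes from that paper — shows the coupling structure the ledger is pointing at: nearby Gaussian centers collide into shared table entries, so hidden-scene opacity becomes a function of position rather than a free per-Gaussian attribute.

```python
import numpy as np

def hash_grid_opacity(xyz, table, resolution=64):
    """Map Gaussian centers in [0, 1)^3 to hidden-scene opacities via a
    single-level spatial hash (Instant-NGP-style; illustrative sketch).
    `table` holds learnable logits; centers in the same grid cell index
    the same entry, which couples opacity to spatial position."""
    primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)
    cell = np.floor(np.asarray(xyz) * resolution).astype(np.uint64)
    h = (cell[:, 0] * primes[0]) ^ (cell[:, 1] * primes[1]) ^ (cell[:, 2] * primes[2])
    logits = table[h % np.uint64(len(table))]
    return 1.0 / (1.0 + np.exp(-logits))   # sigmoid keeps opacity in (0, 1)
```

Because the lookup depends only on position, pruning or reordering Gaussians does not change what opacity a surviving center decodes to — a plausible reading of why the mechanism is credited with attack resilience, and exactly the property the referee wants analyzed post-perturbation.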

pith-pipeline@v0.9.0 · 5655 in / 1629 out tokens · 43801 ms · 2026-05-10T08:54:29.290418+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

74 extracted references · 20 canonical work pages · 1 internal anchor

  1. [1]

    Hiding images in plain sight: Deep steganography,

    S. Baluja, “Hiding images in plain sight: Deep steganography,”Advances in neural information processing systems, vol. 30, 2017. 1

  2. [2]

    Stegastamp: Invisible hyperlinks in physical photographs,

    M. Tancik, B. Mildenhall, and R. Ng, “Stegastamp: Invisible hyperlinks in physical photographs,” inProceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 2117–2126. 1

  3. [3]

    Steganogan: High capacity image steganography with gans,

    K. A. Zhang, A. Cuesta-Infante, L. Xu, and K. Veeramachaneni, “Steganogan: High capacity image steganography with gans,”arXiv preprint arXiv:1901.03892, 2019. 1

  4. [4]

    Embedding watermarks into deep neural networks,

    Y. Uchida, Y. Nagai, S. Sakazawa, and S. Satoh, “Embedding watermarks into deep neural networks,” in Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval, 2017, pp. 269–277.

  5. [5]

    Intellectual property protection for deep learning models: Taxonomy, methods, attacks, and evaluations,

    M. Xue, Y. Zhang, J. Wang, and W. Liu, “Intellectual property protection for deep learning models: Taxonomy, methods, attacks, and evaluations,” IEEE Transactions on Artificial Intelligence, vol. 3, no. 6, pp. 908–923,

  6. [6]

    3d gaussian splatting for real-time radiance field rendering,

    B. Kerbl, G. Kopanas, T. Leimkühler, and G. Drettakis, “3d gaussian splatting for real-time radiance field rendering,” ACM Transactions on Graphics, vol. 42, no. 4, pp. 1–14, 2023.

  7. [7]

    Lgm: Large multi-view gaussian model for high-resolution 3d content creation,

    J. Tang, Z. Chen, X. Chen, T. Wang, G. Zeng, and Z. Liu, “Lgm: Large multi-view gaussian model for high-resolution 3d content creation,” in European Conference on Computer Vision. Springer, 2024, pp. 1–18. 1

  8. [8]

    Gart: Gaussian articulated template models,

    J. Lei, Y. Wang, G. Pavlakos, L. Liu, and K. Daniilidis, “Gart: Gaussian articulated template models,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 19876–19887.

  9. [9]

    Flashavatar: High-fidelity head avatar with efficient gaussian embedding,

    J. Xiang, X. Gao, Y . Guo, and J. Zhang, “Flashavatar: High-fidelity head avatar with efficient gaussian embedding,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 1802–1812. 1

  10. [10]

    Dynamic 3d gaussians: Tracking by persistent dynamic view synthesis,

    J. Luiten, G. Kopanas, B. Leibe, and D. Ramanan, “Dynamic 3d gaussians: Tracking by persistent dynamic view synthesis,” in2024 International Conference on 3D Vision (3DV). IEEE, 2024, pp. 800–

  11. [11]

    Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction,

    Z. Yang, X. Gao, W. Zhou, S. Jiao, Y . Zhang, and X. Jin, “Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction,” inProceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2024, pp. 20 331–20 341. 1

  12. [12]

    Gaussian splatting slam,

    H. Matsuki, R. Murai, P. H. Kelly, and A. J. Davison, “Gaussian splatting slam,” inProceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2024, pp. 18 039–18 048. 1

  13. [13]

    Splatam: Splat track & map 3d gaussians for dense rgb-d slam,

    N. Keetha, J. Karhade, K. M. Jatavallabhula, G. Yang, S. Scherer, D. Ramanan, and J. Luiten, “Splatam: Splat track & map 3d gaussians for dense rgb-d slam,” inProceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2024, pp. 21 357–21 366. 1

  14. [14]

    Dreamgaussian: Generative gaussian splatting for efficient 3d content creation,

    J. Tang, J. Ren, H. Zhou, Z. Liu, and G. Zeng, “Dreamgaussian: Generative gaussian splatting for efficient 3d content creation,” inThe Twelfth International Conference on Learning Representations. 1

  15. [15]

    Luciddreamer: Domain-free generation of 3d gaussian splatting scenes,

    J. Chung, S. Lee, H. Nam, J. Lee, and K. M. Lee, “Luciddreamer: Domain-free generation of 3d gaussian splatting scenes,” IEEE Transactions on Visualization & Computer Graphics, no. 01, pp. 1–12, 2025.

  16. [16]

    Steganerf: Embedding invisible information within neural radiance fields,

    C. Li, B. Y . Feng, Z. Fan, P. Pan, and Z. Wang, “Steganerf: Embedding invisible information within neural radiance fields,” inProceedings of the IEEE/CVF international conference on computer vision, 2023, pp. 441–453. 1, 3, 9

  17. [17]

    Gs-hider: Hiding messages into 3d gaussian splatting,

    X. Zhang, J. Meng, R. Li, Z. Xu, Y. Zhang, and J. Zhang, “Gs-hider: Hiding messages into 3d gaussian splatting,” arXiv preprint arXiv:2405.15118, 2024.

  18. [18]

    Securegs: Boosting the security and fidelity of 3d gaussian splatting steganography,

    X. Zhang, J. Meng, Z. Xu, S. Yang, Y . Wu, R. Wang, and J. Zhang, “Securegs: Boosting the security and fidelity of 3d gaussian splatting steganography,”arXiv preprint arXiv:2503.06118, 2025. 1, 3, 8, 9

  19. [19]

    Gaussianmarker: Uncertainty-aware copyright protection of 3d gaussian splatting,

    X. Huang, R. Li, Y .-m. Cheung, K. C. Cheung, S. See, and R. Wan, “Gaussianmarker: Uncertainty-aware copyright protection of 3d gaussian splatting,”Advances in Neural Information Processing Systems, vol. 37, pp. 33 037–33 060, 2024. 1

  20. [20]

    Guardsplat: efficient and robust watermarking for 3d gaussian splatting,

    Z. Chen, G. Wang, J. Zhu, J. Lai, and X. Xie, “Guardsplat: efficient and robust watermarking for 3d gaussian splatting,” inProceedings of the Computer Vision and Pattern Recognition Conference, 2025, pp. 16 325–16 335. 1

  21. [21]

    Gs-marker: Generalizable and robust watermarking for 3d gaussian splatting,

    L. Li, J. Wang, X. Ming, and Y . Lu, “Gs-marker: Generalizable and robust watermarking for 3d gaussian splatting,”arXiv preprint arXiv:2503.18718, 2025. 1

  22. [22]

    Splats in splats: Embedding invisible 3d watermark within gaussian splatting,

    Y . Guo, W. Huang, Y . Li, G. Li, H. Zhang, L. Hu, J. Li, T. Huang, and L. Ma, “Splats in splats: Embedding invisible 3d watermark within gaussian splatting,”arXiv preprint arXiv:2412.03121, 2024. 1, 3, 11

  23. [23]

    Nerf: Representing scenes as neural radiance fields for view synthesis,

    B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, “Nerf: Representing scenes as neural radiance fields for view synthesis,”Communications of the ACM, vol. 65, no. 1, pp. 99–106,

  24. [24]

    Nerfies: Deformable neural radiance fields,

    K. Park, U. Sinha, J. T. Barron, S. Bouaziz, D. B. Goldman, S. M. Seitz, and R. Martin-Brualla, “Nerfies: Deformable neural radiance fields,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 5865–5874. 3

  25. [25]

    D-nerf: Neural radiance fields for dynamic scenes,

    A. Pumarola, E. Corona, G. Pons-Moll, and F. Moreno-Noguer, “D-nerf: Neural radiance fields for dynamic scenes,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 10318–10327.

  26. [26]

    Neural scene flow fields for space-time view synthesis of dynamic scenes,

    Z. Li, S. Niklaus, N. Snavely, and O. Wang, “Neural scene flow fields for space-time view synthesis of dynamic scenes,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 6498–6508. 3

  27. [27]

    Nerf-ds: Neural radiance fields for dynamic specular objects,

    Z. Yan, C. Li, and G. H. Lee, “Nerf-ds: Neural radiance fields for dynamic specular objects,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 8285–8295. 3

  28. [28]

    Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields,

    J. T. Barron, B. Mildenhall, M. Tancik, P. Hedman, R. Martin-Brualla, and P. P. Srinivasan, “Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 5855–5864.

  29. [29]

    Mip-nerf 360: Unbounded anti-aliased neural radiance fields,

    J. T. Barron, B. Mildenhall, D. Verbin, P. P. Srinivasan, and P. Hedman, “Mip-nerf 360: Unbounded anti-aliased neural radiance fields,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 5470–5479.

  30. [30]

    E2nerf: Event enhanced neural radiance fields from blurry images,

    Y . Qi, L. Zhu, Y . Zhang, and J. Li, “E2nerf: Event enhanced neural radiance fields from blurry images,” inProceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 13 254–13 264. 3

  31. [31]

    Mip-splatting: Alias-free 3d gaussian splatting,

    Z. Yu, A. Chen, B. Huang, T. Sattler, and A. Geiger, “Mip-splatting: Alias-free 3d gaussian splatting,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 19 447–19 456. 3

  32. [32]

    Instantsplat: Unbounded sparse-view pose-free gaussian splatting in 40 seconds,

    Z. Fan, W. Cong, K. Wen, K. Wang, J. Zhang, X. Ding, D. Xu, B. Ivanovic, M. Pavone, G. Pavlakoset al., “Instantsplat: Unbounded sparse-view pose-free gaussian splatting in 40 seconds,”arXiv preprint arXiv:2403.20309, 2024. 3

  33. [33]

    Mvsplat: Efficient 3d gaussian splatting from sparse multi-view images,

    Y . Chen, H. Xu, C. Zheng, B. Zhuang, M. Pollefeys, A. Geiger, T.-J. Cham, and J. Cai, “Mvsplat: Efficient 3d gaussian splatting from sparse multi-view images,”arXiv preprint arXiv:2403.14627, 2024. 3

  34. [34]

    Compgs: Smaller and faster gaussian splatting with vector quantization,

    K. Navaneet, K. Pourahmadi Meibodi, S. Abbasi Koohpayegani, and H. Pirsiavash, “Compgs: Smaller and faster gaussian splatting with vector quantization,” inEuropean Conference on Computer Vision. Springer, 2024, pp. 330–349. 3

  35. [35]

    4d gaussian splatting for real-time dynamic scene rendering,

    G. Wu, T. Yi, J. Fang, L. Xie, X. Zhang, W. Wei, W. Liu, Q. Tian, and X. Wang, “4d gaussian splatting for real-time dynamic scene rendering,” arXiv preprint arXiv:2310.08528, 2023. 3, 10

  36. [36]

    St-4dgs: Spatial-temporally consistent 4d gaussian splatting for efficient dynamic scene rendering,

    D. Li, S.-S. Huang, Z. Lu, X. Duan, and H. Huang, “St-4dgs: Spatial-temporally consistent 4d gaussian splatting for efficient dynamic scene rendering,” in ACM SIGGRAPH 2024 Conference Papers, 2024, pp. 1–

  37. [37]

    Citygaussian: Real-time high-quality large-scale scene rendering with gaussians,

    Y . Liu, H. Guan, C. Luo, L. Fan, J. Peng, and Z. Zhang, “Citygaussian: Real-time high-quality large-scale scene rendering with gaussians,” arXiv preprint arXiv:2404.01133, 2024. 3

  38. [38]

    On-the-fly large-scale 3d reconstruction from multi-camera rigs,

    Y . Guo, T. Hu, Z. Li, L. Hu, K. Qian, X. Lin, S. Chen, T. Huang, and L. Ma, “On-the-fly large-scale 3d reconstruction from multi-camera rigs,”arXiv preprint arXiv:2512.08498, 2025. 3

  39. [39]

    On-the-fly reconstruction for large-scale novel view synthesis from unposed images,

    A. Meuleman, I. Shah, A. Lanvin, B. Kerbl, and G. Drettakis, “On-the-fly reconstruction for large-scale novel view synthesis from unposed images,” ACM Transactions on Graphics (TOG), vol. 44, no. 4, pp. 1–14,

  40. [40]

    Event3dgs: Event-based 3d gaussian splatting for fast egomotion,

    T. Xiong, J. Wu, B. He, C. Fermuller, Y . Aloimonos, H. Huang, and C. A. Metzler, “Event3dgs: Event-based 3d gaussian splatting for fast egomotion,”arXiv preprint arXiv:2406.02972, 2024. 3

  41. [41]

    Evagaussians: Event stream assisted gaussian splatting from blurry images,

    W. Yu, C. Feng, J. Tang, X. Jia, L. Yuan, and Y . Tian, “Evagaussians: Event stream assisted gaussian splatting from blurry images,”arXiv preprint arXiv:2405.20224, 2024. 3

  42. [42]

    Spikegs: Reconstruct 3d scene via fast-moving bio-inspired sensors,

    Y . Guo, L. Hu, L. Ma, and T. Huang, “Spikegs: Reconstruct 3d scene via fast-moving bio-inspired sensors,”arXiv preprint arXiv:2407.03771,

  43. [43]

    Instant neural graphics primitives with a multiresolution hash encoding,

    T. Müller, A. Evans, C. Schied, and A. Keller, “Instant neural graphics primitives with a multiresolution hash encoding,” ACM Transactions on Graphics (TOG), vol. 41, no. 4, pp. 1–15, 2022.

  44. [44]

    Deblur-gs: 3d gaussian splatting from camera motion blurred images,

    W. Chen and L. Liu, “Deblur-gs: 3d gaussian splatting from camera motion blurred images,”Proceedings of the ACM on Computer Graphics and Interactive Techniques, vol. 7, no. 1, pp. 1–15, 2024. 3

  45. [45]

    Prtgs: Precomputed radiance transfer of gaussian splats for real-time high-quality relighting,

    Y . Guo, Y . Bai, L. Hu, Z. Guo, M. Liu, Y . Cai, T. Huang, and L. Ma, “Prtgs: Precomputed radiance transfer of gaussian splats for real-time high-quality relighting,”arXiv preprint arXiv:2408.03538, 2024. 3

  46. [46]

    Gs-ir: 3d gaussian splatting for inverse rendering,

    Z. Liang, Q. Zhang, Y . Feng, Y . Shan, and K. Jia, “Gs-ir: 3d gaussian splatting for inverse rendering,”arXiv preprint arXiv:2311.16473, 2023. 3

  47. [47]

    Relightable 3d gaussian: Real-time point cloud relighting with brdf decomposition and ray tracing,

    J. Gao, C. Gu, Y . Lin, H. Zhu, X. Cao, L. Zhang, and Y . Yao, “Relightable 3d gaussian: Real-time point cloud relighting with brdf decomposition and ray tracing,”arXiv preprint arXiv:2311.16043, 2023. 3

  48. [48]

    Gaussiandreamer: Fast generation from text to 3d gaussian splatting with point cloud priors,

    T. Yi, J. Fang, G. Wu, L. Xie, X. Zhang, W. Liu, Q. Tian, and X. Wang, “Gaussiandreamer: Fast generation from text to 3d gaussian splatting with point cloud priors,”arXiv preprint arXiv:2310.08529, 2023. 3

  49. [49]

    Gaussianeditor: Swift and controllable 3d editing with gaussian splatting,

    Y . Chen, Z. Chen, C. Zhang, F. Wang, X. Yang, Y . Wang, Z. Cai, L. Yang, H. Liu, and G. Lin, “Gaussianeditor: Swift and controllable 3d editing with gaussian splatting,”arXiv preprint arXiv:2311.14521,

  50. [50]

    Hiding images in plain sight: Deep steganography,

    S. Baluja, “Hiding images in plain sight: Deep steganography,” in Advances in Neural Information Processing Systems (NeurIPS), 2017. 3

  51. [51]

    Hidden: Hiding data with deep networks,

    J. Zhu, R. Kaplan, J. Johnson, and L. Fei-Fei, “Hidden: Hiding data with deep networks,” inProceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 657–672. 3

  52. [52]

    Udh: Universal deep hiding for steganography, watermarking, and light field messaging,

    K. Zhang, L. Xu, and J. Liu, “Udh: Universal deep hiding for steganography, watermarking, and light field messaging,” IEEE Transactions on Image Processing, vol. 28, no. 9, pp. 4447–4460, 2019.

  53. [53]

    End-to-end learned image watermarking with encoder–decoder networks,

    M. Liu, X. Li, and Y. Wang, “End-to-end learned image watermarking with encoder–decoder networks,” IEEE Signal Processing Letters, vol. 26, no. 12, pp. 1817–1821, 2019.

  54. [54]

    Redmark: Framework for residual diffusion watermarking based on deep networks,

    M. Ahmadi, A. Norouzi, N. Karimi, S. Samavi, and A. Emami, “Redmark: Framework for residual diffusion watermarking based on deep networks,” Expert Systems with Applications, vol. 146, p. 113157, 2020.

  55. [55]

    The stable signature: Rooting watermarks in latent diffusion models,

    P. Fernandez, G. Couairon, H. Jégou, M. Douze, and T. Furon, “The stable signature: Rooting watermarks in latent diffusion models,” in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 22466–22477.

  56. [56]

    Promark: Proactive diffusion watermarking for causal attribution,

    V . Asnani, J. Collomosse, T. Bui, X. Liu, and S. Agarwal, “Promark: Proactive diffusion watermarking for causal attribution,” inProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recogni- tion (CVPR), 2024, pp. 10 802–10 811. 3

  57. [57]

    A watermark-conditioned diffusion model for ip protection,

    R. Min, S. Li, H. Chen, and M. Cheng, “A watermark-conditioned diffusion model for ip protection,” inProceedings of the European Conference on Computer Vision (ECCV). Springer, 2024, pp. 104–

  58. [58]

    Z. Yang, K. Zeng, K. Chen, H. Fang, W. Zhang, and N. Yu, "Gaussian shading: Provable performance-lossless image watermarking for diffusion models," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 12162–12171.

  59. [59]

    Y. Wen, J. Kirchenbauer, J. Geiping, and T. Goldstein, "Tree-rings watermarks: Invisible fingerprints for diffusion images," Advances in Neural Information Processing Systems, vol. 36, 2024.

  60. [60]

    S. Baluja, "Hiding images within images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 7, pp. 1685–1697, 2019.

  61. [61]

    Y. Xu, C. Mou, Y. Hu, J. Xie, and J. Zhang, "Robust invertible image steganography," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 7875–7884.

  62. [62]

    C. Mou, Y. Xu, J. Song, C. Zhao, B. Ghanem, and J. Zhang, "Large-capacity and flexible video steganography via invertible neural network," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 22606–22615.

  63. [63]

    X. Luo, Y. Li, H. Chang, C. Liu, P. Milanfar, and F. Yang, "DVMark: A deep multiscale framework for video watermarking," IEEE Transactions on Image Processing, 2023.

  64. [64]

    D. Li, Z. Yang, and X. Jin, "Zero watermarking scheme for 3D triangle mesh model based on global and local geometric features," Multimedia Tools and Applications, vol. 82, no. 28, pp. 43635–43648, 2023.

  65. [65]

    I. F. Kallel, A. Grati, and A. Taktak, "3D data security: Robust 3D mesh watermarking approach for copyright protection," in Examining Multimedia Forensics and Content Integrity. IGI Global, 2023, pp. 1–37.

  66. [66]

    B. J. Van Rensburg, P. Puteaux, W. Puech, and J.-P. Pedeboy, "3D object watermarking from data hiding in the homomorphic encrypted domain," ACM Transactions on Multimedia Computing, Communications and Applications, vol. 19, no. 5s, pp. 1–20, 2023.

  67. [67]

    Y. Jang, D. I. Lee, M. Jang, J. W. Kim, F. Yang, and S. Kim, "WateRF: Robust watermarks in radiance fields for protection of copyrights," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 12087–12097.

  68. [68]

    T. Lu, M. Yu, L. Xu, Y. Xiangli, L. Wang, D. Lin, and B. Dai, "Scaffold-GS: Structured 3D Gaussians for view-adaptive rendering," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 20654–20664.

  69. [69]

    P.-P. Sloan, J. Kautz, and J. Snyder, "Precomputed radiance transfer for real-time rendering in dynamic, low-frequency lighting environments," ACM Transactions on Graphics, vol. 21, no. 3, pp. 527–536, 2002.

  70. [70]

    A. Knapitsch, J. Park, Q.-Y. Zhou, and V. Koltun, "Tanks and temples: Benchmarking large-scale scene reconstruction," ACM Transactions on Graphics (ToG), vol. 36, no. 4, pp. 1–13, 2017.

  71. [71]

    K. Park, U. Sinha, P. Hedman, J. T. Barron, S. Bouaziz, D. B. Goldman, R. Martin-Brualla, and S. M. Seitz, "HyperNeRF: A higher-dimensional representation for topologically varying neural radiance fields," arXiv preprint arXiv:2106.13228, 2021.

  72. [72]

    W. Huang, Y. Guo, G. Li, L. Ma, H. Zhang, L. Hu, J. Wang, J. Li, and T. Huang, "Can protective watermarking safeguard the copyright of 3D Gaussian splatting?" arXiv preprint arXiv:2511.22262, 2025.

  73. [73]

    A. Guédon and V. Lepetit, "SuGaR: Surface-aligned Gaussian splatting for efficient 3D mesh reconstruction and high-quality mesh rendering," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 5354–5363.

  74. [74]

    M. Qin, W. Li, J. Zhou, H. Wang, and H. Pfister, "LangSplat: 3D language Gaussian splatting," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 20051–20060.

Yijia Guo received the B.S. degree from Beihang University, Beijing, China, in 2022. He is currently working toward the Ph.D. degree with the National ...