pith. machine review for the scientific record.

arxiv: 2605.10705 · v1 · submitted 2026-05-11 · 💻 cs.CV

Recognition: no theorem link

TransmissiveGS: Residual-Guided Disentangled Gaussian Splatting for Transmissive Scene Reconstruction and Rendering

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 04:36 UTC · model grok-4.3

classification 💻 cs.CV
keywords: Gaussian splatting · transmissive scenes · reflection disentanglement · scene reconstruction · novel view synthesis · deferred shading · computer vision

The pith

TransmissiveGS disentangles reflections from transmitted content in transmissive scenes by modeling them as separate Gaussian sets, guided by multi-view reconstruction residuals.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes a new approach to reconstruct and render transmissive scenes, such as those viewed through glass, where near-field reflections from the surroundings entangle with the transmitted background scene. It addresses this by representing the scene with two sets of Gaussians and using a deferred shading process to render them together while separating the components. The separation relies on the fact that reflections vary inconsistently across different viewpoints, so residuals left after reconstructing the consistent transmitted parts serve as signals to model geometry and appearance distinctly. A dedicated reflection light field improves estimation of nearby reflections, and high-frequency regularization helps retain sharp details during training. The work also provides a new synthetic dataset to test such methods, and shows gains over earlier Gaussian splatting techniques on both simulated and captured transmissive scenes.
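The deferred shading step described above can be sketched per pixel: each Gaussian set is rasterized into its own screen-space buffer, and the two buffers are then blended. The additive blend with a per-pixel reflection weight below is an illustrative assumption; the paper's exact deferred shading function is not reproduced in this summary.

```python
import numpy as np

# Hypothetical deferred composite for a transmissive scene: the
# transmission and reflection Gaussians are rendered into separate
# buffers, then combined per pixel. The blend formula is a stand-in.

H, W = 4, 4
transmission = np.random.rand(H, W, 3)   # radiance seen through the glass
reflection = np.random.rand(H, W, 3)     # near-field reflected radiance
refl_weight = np.random.rand(H, W, 1)    # per-pixel reflection strength

composite = transmission + refl_weight * reflection
composite = np.clip(composite, 0.0, 1.0)
```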

Core claim

We present TransmissiveGS, a novel framework for disentangled reconstruction and rendering of transmissive scenes. Specifically, we model the scene with a dual-Gaussian representation and introduce a deferred shading function to jointly render the two Gaussian components. To separate reflection and transmission, we exploit the inherent multi-view inconsistency of reflections and leverage the residuals from reconstructing multi-view consistent content as cues for disentangled geometry and appearance modeling. We further propose a reflection light field that enables high-fidelity estimation of near-field reflections. During training, we introduce a high-frequency regularization to preserve fine details.

What carries the argument

Dual-Gaussian representation that uses residuals from multi-view inconsistency of reflections as cues to disentangle geometry and appearance, together with a reflection light field and deferred shading for joint rendering.
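A minimal sketch of the residual cue, under the assumption that the residual is a simple per-view photometric difference against the multi-view-consistent fit (the paper's exact formulation is not given above); the mean-based "Stage I" fit and the threshold mask are illustrative stand-ins:

```python
import numpy as np

# Sketch: after a first stage fits only the multi-view-consistent
# content, the per-view residual flags pixels that the consistent
# model cannot explain -- candidate reflection regions. Thresholding
# into a mask is an illustrative choice, not the paper's mechanism.

rng = np.random.default_rng(0)
observed = rng.random((8, 16, 16))            # 8 views of one scene
consistent = observed.mean(axis=0)            # stand-in "Stage I" fit

residual = np.abs(observed - consistent)      # per-view residual maps
reflection_mask = residual > residual.mean()  # cue for reflection Gaussians
```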

If this is right

  • The dual-Gaussian model with residual guidance produces higher-fidelity reconstructions and renderings of both the reflected environment and the transmitted background than prior single-representation Gaussian splatting methods.
  • The reflection light field component enables accurate capture of near-field reflection effects without requiring dense sampling or additional inputs.
  • High-frequency regularization during training maintains fine surface details that would otherwise be lost in the disentanglement process.
  • A new synthetic dataset is provided that can serve as a benchmark for evaluating future transmissive scene methods on both geometry and appearance quality.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The residual cue idea might extend naturally to other view-dependent effects such as specular surfaces or thin-film interference where consistency varies with viewpoint.
  • Real-time applications like augmented reality could use the separated components to insert virtual objects realistically behind or in front of transparent real-world surfaces.
  • Applying the framework to scenes with multiple layered transmissive elements, such as double-glazed windows, would test whether the dual representation scales without introducing new ambiguities.

Load-bearing premise

The method assumes that residuals from multi-view inconsistencies in reflections supply enough reliable information to separate the reflection and transmission components without any extra supervision or ground-truth labels.

What would settle it

A controlled experiment on a transmissive scene where the surrounding environment produces identical reflections from all camera viewpoints would show whether the residual-based separation still succeeds or breaks down.
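A toy version of that control can be run in a few lines. The pixelwise median across views is a stand-in for the multi-view-consistent fit: with a per-view (inconsistent) reflection the fit leaves a large residual, but when every view sees an identical reflection the fit absorbs it and the residual collapses to zero, so the cue carries no information.

```python
import numpy as np

# Toy control: does the residual cue still fire when reflections are
# identical from all viewpoints? Here the "Stage I" fit is a simple
# pixelwise median across views.

rng = np.random.default_rng(1)
n_views, H, W = 12, 8, 8
transmission = rng.random((H, W))             # view-consistent content

varying_refl = rng.random((n_views, H, W)) * 0.5   # per-view reflection
constant_refl = rng.random((H, W)) * 0.5           # same in every view

def residual_energy(views):
    consistent = np.median(views, axis=0)     # stand-in consistent fit
    return float(np.abs(views - consistent).mean())

e_varying = residual_energy(transmission[None] + varying_refl)
identical = np.broadcast_to(transmission + constant_refl, (n_views, H, W))
e_constant = residual_energy(identical)
# e_constant is exactly 0: the residual cue vanishes, so the
# separation has no signal to work with in this regime.
```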

Figures

Figures reproduced from arXiv: 2605.10705 by Chi-Keung Tang, Jack C.P. Cheng, Tianchao Li, Xiao Zhang, Zhenyu Liang.

Figure 1: Comparison on a challenging example consisting of a hemispherical, refractive, reflective, and transmissive dome. We present TransmissiveGS, the first Gaussian Splatting framework that achieves both accurate transmissive surface reconstruction (MAE° ↓) and photorealistic inverse rendering (PSNR ↑) in non-planar transmissive scenes featuring near-field reflections and visible transmitted content behind the…

Figure 2: Disentanglement and composite appearance. Left: in the car case, the windshield exhibits both reflections of the surrounding environment and transmitted interior objects. Right: in the pyramid case, the glass surface reflects near-field surrounding structures while transmitting the people inside. Please see text for symbol definitions.

Figure 3: Pipeline of the proposed TransmissiveGS. The framework consists of three stages. In Stage I, the scattering Gaussians are trained to reconstruct multi-view consistent scene content. In Stage II, the residual signal, together with an environment map, jointly supervises the reflection Gaussians to recover the geometry of reflective and transmissive surfaces. In Stage III, a reflection light field is trained …

Figure 4: Qualitative comparison of photorealistic rendering quality.

Figure 5: Qualitative comparison of transmissive surface reconstruction.

Figure 6: Qualitative results of ablation studies.

Figure 7: Architecture of the reflection light field. Green blocks denote input features, blue blocks denote hidden layers, and the red block denotes the output. Solid green arrows indicate data flow. Solid black arrows indicate ReLU activations, and dashed black arrows indicate a sigmoid activation.

Figure 8: Additional qualitative comparison of photorealistic rendering quality.

Figure 9: Additional qualitative comparison of transmissive surface reconstruction.

Figure 10: A failure case of the method. When the transmissive surface is dominated by transmission with only faint reflections at certain viewpoints, the reflection component at these regions can be difficult to estimate accurately and may degenerate.
read the original abstract

Transmissive scenes are ubiquitous in daily life, yet reconstructing and rendering them remains highly challenging due to the inherent entanglement between near-field reflections from the surrounding environment on the transmissive surface, and the transmitted content of the scene behind it. This coupling gives rise to dual surface geometries and dual radiance components within each observation, posing ambiguities for standard methods. We present TransmissiveGS, a novel framework for disentangled reconstruction and rendering of transmissive scenes. Specifically, we model the scene with a dual-Gaussian representation and introduce a deferred shading function to jointly render the two Gaussian components. To separate reflection and transmission, we exploit the inherent multi-view inconsistency of reflections and leverage the residuals from reconstructing multi-view consistent content as cues for disentangled geometry and appearance modeling. We further propose a reflection light field that enables high-fidelity estimation of near-field reflections. During training, we introduce a high-frequency regularization to preserve fine details. We also contribute a new synthetic dataset for evaluating transmissive surface reconstruction. Experiments on both synthetic and real-world scenes demonstrate that TransmissiveGS consistently outperforms prior Gaussian Splatting-based methods in both reconstruction and rendering quality for transmissive scenes.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper presents TransmissiveGS, a Gaussian Splatting framework for transmissive scene reconstruction and rendering. It models scenes using a dual-Gaussian representation, applies deferred shading to jointly render components, exploits multi-view inconsistency residuals to disentangle reflection from transmission, introduces a reflection light field for near-field effects, and adds high-frequency regularization. A new synthetic dataset is contributed, with experiments claiming consistent outperformance over prior GS-based methods on both synthetic and real-world transmissive scenes.

Significance. If the disentanglement and rendering claims hold with robust validation, the work would advance novel view synthesis for common but challenging transmissive surfaces (e.g., glass with environment reflections), extending Gaussian Splatting to handle dual geometry and radiance. The contributed synthetic dataset is a clear strength, providing a benchmark for future methods. The residual-guided and reflection light field components offer practical modeling innovations for light transport ambiguities.

major comments (2)
  1. [§3.2] §3.2 (Residual-Guided Disentanglement): The central claim that residuals from an initial multi-view-consistent reconstruction provide reliable cues to separate reflection and transmission components is load-bearing for the dual-Gaussian model and outperformance results. The approach starts from entangled observations without an explicit separation loss or ground-truth anchoring; if transmitted high-frequency details or partially view-consistent reflections produce noisy residuals, Gaussians may be misassigned. The high-frequency regularization and reflection light field address symptoms but not the root ambiguity. An ablation removing the residual cue or reporting separation accuracy metrics on the synthetic dataset is required to support the disentanglement.
  2. [Experiments] Experiments section (results tables): The abstract and claims assert consistent outperformance in reconstruction and rendering quality, but the provided description lacks detailed quantitative tables, per-scene breakdowns, ablation studies on the dual-Gaussian and deferred shading components, or error analysis (e.g., PSNR/SSIM deltas with standard deviations). Without these, the magnitude and reliability of improvements over baselines cannot be assessed, especially given the unverified experimental outcomes noted in the review.
minor comments (2)
  1. [Method] The notation and formulation of the reflection light field and deferred shading function should include explicit equations with variable definitions for clarity.
  2. [Figures] Figure captions for qualitative results on real-world scenes could better highlight the disentangled components (reflection vs. transmission) to aid reader interpretation.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed review of our manuscript. We address each major comment below with clarifications and commit to specific revisions that will strengthen the presentation and validation of our method.

read point-by-point responses
  1. Referee: [§3.2] §3.2 (Residual-Guided Disentanglement): The central claim that residuals from an initial multi-view-consistent reconstruction provide reliable cues to separate reflection and transmission components is load-bearing for the dual-Gaussian model and outperformance results. The approach starts from entangled observations without an explicit separation loss or ground-truth anchoring; if transmitted high-frequency details or partially view-consistent reflections produce noisy residuals, Gaussians may be misassigned. The high-frequency regularization and reflection light field address symptoms but not the root ambiguity. An ablation removing the residual cue or reporting separation accuracy metrics on the synthetic dataset is required to support the disentanglement.

    Authors: We appreciate the referee's focus on the reliability of the residual cue, which is indeed central to our disentanglement strategy. Our approach exploits the inherent multi-view inconsistency of near-field reflections (as opposed to the more consistent transmission) to derive residuals from an initial multi-view-consistent reconstruction; these residuals then guide Gaussian assignment without requiring an explicit separation loss during training. The reflection light field and high-frequency regularization further stabilize the process by modeling near-field effects and preserving details. To directly validate this mechanism, we will add an ablation that removes the residual-guided component and measures the resulting drop in performance. We will also report separation accuracy metrics (e.g., precision/recall of reflective vs. transmissive Gaussian assignment) on our synthetic dataset, which provides ground-truth component labels. These results will be included in the revised manuscript. revision: yes
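The separation-accuracy metric the rebuttal commits to could be computed along these lines, treating "reflective" as the positive class; the per-Gaussian labels below are made up for demonstration, and ground truth would come from the synthetic dataset's component labels.

```python
import numpy as np

# Illustrative precision/recall of reflective-vs-transmissive Gaussian
# assignment. Labels are synthetic; in practice, truth would come from
# a dataset with per-component ground truth.

def precision_recall(pred, truth):
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = int(np.sum(pred & truth))            # correctly flagged reflective
    precision = tp / max(int(pred.sum()), 1)  # guard against empty pred
    recall = tp / max(int(truth.sum()), 1)
    return precision, recall

truth = np.array([1, 1, 0, 0, 1, 0], bool)    # 1 = reflective Gaussian
pred  = np.array([1, 0, 0, 1, 1, 0], bool)
p, r = precision_recall(pred, truth)          # p = 2/3, r = 2/3
```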

  2. Referee: [Experiments] Experiments section (results tables): The abstract and claims assert consistent outperformance in reconstruction and rendering quality, but the provided description lacks detailed quantitative tables, per-scene breakdowns, ablation studies on the dual-Gaussian and deferred shading components, or error analysis (e.g., PSNR/SSIM deltas with standard deviations). Without these, the magnitude and reliability of improvements over baselines cannot be assessed, especially given the unverified experimental outcomes noted in the review.

    Authors: We agree that expanded experimental reporting is necessary for a thorough assessment of our claims. In the revised manuscript we will present complete quantitative tables containing per-scene PSNR, SSIM, and LPIPS values for all compared methods on both synthetic and real scenes. We will add dedicated ablation studies that isolate the dual-Gaussian representation and the deferred shading function, reporting their individual contributions. Error analysis will include mean metrics accompanied by standard deviations across scenes. All numerical results will be re-verified and presented with full implementation details to ensure transparency and reproducibility. revision: yes
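The promised error analysis (per-scene PSNR, then mean and standard deviation across scenes) is mechanical; a sketch with synthetic stand-in images, where the scene count and noise level are arbitrary:

```python
import numpy as np

# Sketch of the per-scene PSNR summary: compute PSNR per scene, then
# report mean +/- standard deviation across scenes. Images here are
# random stand-ins, not real renderings.

def psnr(pred, gt, peak=1.0):
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
scores = []
for _ in range(5):                     # five hypothetical test scenes
    gt = rng.random((16, 16, 3))
    pred = np.clip(gt + rng.normal(0, 0.05, gt.shape), 0, 1)
    scores.append(psnr(pred, gt))

mean, std = np.mean(scores), np.std(scores, ddof=1)   # report mean ± std
```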

Circularity Check

0 steps flagged

New modeling components introduced independently; no reduction of claims to fitted inputs or self-definitions

full rationale

The derivation introduces dual-Gaussian representation, deferred shading, residual cues from multi-view inconsistency, reflection light field, and high-frequency regularization as distinct architectural choices. These are not defined in terms of the target disentangled outputs or fitted parameters that would reproduce the input observations by construction. The use of residuals as cues relies on an external physical property (view-inconsistency of reflections) rather than circularly assuming the separation result. No self-citation chains, uniqueness theorems from prior author work, or ansatzes smuggled via citation are load-bearing in the provided description. The central reconstruction/rendering claims therefore retain independent content beyond the inputs.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 2 invented entities

The framework rests on several modeling assumptions and new constructs introduced without external grounding in the provided abstract.

invented entities (2)
  • dual-Gaussian representation (no independent evidence)
    purpose: Separate modeling of reflection and transmission components
    Core modeling choice stated in the abstract; no independent evidence supplied.
  • reflection light field (no independent evidence)
    purpose: High-fidelity near-field reflection estimation
    Introduced as an enabling component; no external validation mentioned.

pith-pipeline@v0.9.0 · 5521 in / 1146 out tokens · 41144 ms · 2026-05-12T04:36:54.446025+00:00 · methodology

