TransmissiveGS: Residual-Guided Disentangled Gaussian Splatting for Transmissive Scene Reconstruction and Rendering
Pith reviewed 2026-05-12 04:36 UTC · model grok-4.3
The pith
TransmissiveGS disentangles reflections from transmitted content in transmissive scenes by modeling them as separate Gaussian sets guided by multi-view reconstruction residuals.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We present TransmissiveGS, a novel framework for disentangled reconstruction and rendering of transmissive scenes. Specifically, we model the scene with a dual-Gaussian representation and introduce a deferred shading function to jointly render the two Gaussian components. To separate reflection and transmission, we exploit the inherent multi-view inconsistency of reflections and leverage the residuals from reconstructing multi-view consistent content as cues for disentangled geometry and appearance modeling. We further propose a reflection light field that enables high-fidelity estimation of near-field reflections. During training, we introduce a high-frequency regularization to preserve fine details.
What carries the argument
Dual-Gaussian representation that uses residuals from multi-view inconsistency of reflections as cues to disentangle geometry and appearance, together with a reflection light field and deferred shading for joint rendering.
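The deferred shading function is only named here, not specified. As a minimal sketch of what jointly rendering two Gaussian sets could look like, assume each set is first splatted into its own per-pixel buffer and then blended with a hypothetical per-pixel reflection weight (the blend rule and the weight `w` are assumptions, not the paper's actual formulation):

```python
import numpy as np

def deferred_composite(T, R, w):
    """Blend a transmission buffer and a reflection buffer per pixel.

    T, R : (H, W, 3) linear-RGB renders of the two Gaussian sets.
    w    : (H, W, 1) per-pixel reflection weight in [0, 1]
           (e.g. from a Fresnel-like term; hypothetical here).
    """
    return (1.0 - w) * T + w * R

H, W = 4, 4
T = np.full((H, W, 3), 0.2)        # transmitted background render
R = np.full((H, W, 3), 0.8)        # near-field reflection render
w = np.full((H, W, 1), 0.25)       # modest reflectivity everywhere
img = deferred_composite(T, R, w)  # uniform 0.35 in every channel
```

The point of deferring the blend to image space is that each Gaussian set can be rasterized independently before the shading function couples them.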
If this is right
- The dual-Gaussian model with residual guidance produces higher-fidelity reconstructions and renderings of both the reflected environment and the transmitted background than prior single-representation Gaussian splatting methods.
- The reflection light field component enables accurate capture of near-field reflection effects without requiring dense sampling or additional inputs.
- High-frequency regularization during training maintains fine surface details that would otherwise be lost in the disentanglement process.
- A new synthetic dataset is provided that can serve as a benchmark for evaluating future transmissive scene methods on both geometry and appearance quality.
Where Pith is reading between the lines
- The residual cue idea might extend naturally to other view-dependent effects such as specular surfaces or thin-film interference where consistency varies with viewpoint.
- Real-time applications like augmented reality could use the separated components to insert virtual objects realistically behind or in front of transparent real-world surfaces.
- Applying the framework to scenes with multiple layered transmissive elements, such as double-glazed windows, would test whether the dual representation scales without introducing new ambiguities.
Load-bearing premise
The method assumes that residuals from multi-view inconsistencies in reflections supply enough reliable information to separate the reflection and transmission components without any extra supervision or ground-truth labels.
What would settle it
A controlled experiment on a transmissive scene where the surrounding environment produces identical reflections from all camera viewpoints would show whether the residual-based separation still succeeds or breaks down.
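The proposed test hinges on residual magnitude tracking view-inconsistency. A minimal numpy sketch of that cue, not the paper's pipeline (`reflection_cue` and the toy data are hypothetical):

```python
import numpy as np

def reflection_cue(observed, rendered):
    """Aggregate per-view reconstruction residuals into a soft reflection mask.

    observed, rendered : (V, H, W) grayscale image stacks over V views,
    where `rendered` comes from a multi-view-consistent model.
    Returns an (H, W) map; large values suggest view-dependent reflection.
    """
    residuals = np.abs(observed - rendered)  # per-view reconstruction error
    return residuals.mean(axis=0)

V, H, W = 5, 2, 2
consistent = np.full((V, H, W), 0.5)   # transmitted content, identical in all views
rendered = np.full((V, H, W), 0.5)     # consistent model reproduces it exactly
observed = consistent.copy()
observed[:, 0, 0] += np.linspace(0.1, 0.5, V)  # view-varying reflection at one pixel
cue = reflection_cue(observed, rendered)       # high at (0, 0), zero elsewhere
```

In the control scenario above, where the environment produced identical reflections in every view, a consistent model would absorb the reflection and this cue would collapse to zero, which is exactly the predicted failure mode.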
Original abstract
Transmissive scenes are ubiquitous in daily life, yet reconstructing and rendering them remains highly challenging due to the inherent entanglement between near-field reflections from the surrounding environment on the transmissive surface, and the transmitted content of the scene behind it. This coupling gives rise to dual surface geometries and dual radiance components within each observation, posing ambiguities for standard methods. We present TransmissiveGS, a novel framework for disentangled reconstruction and rendering of transmissive scenes. Specifically, we model the scene with a dual-Gaussian representation and introduce a deferred shading function to jointly render the two Gaussian components. To separate reflection and transmission, we exploit the inherent multi-view inconsistency of reflections and leverage the residuals from reconstructing multi-view consistent content as cues for disentangled geometry and appearance modeling. We further propose a reflection light field that enables high-fidelity estimation of near-field reflections. During training, we introduce a high-frequency regularization to preserve fine details. We also contribute a new synthetic dataset for evaluating transmissive surface reconstruction. Experiments on both synthetic and real-world scenes demonstrate that TransmissiveGS consistently outperforms prior Gaussian Splatting-based methods in both reconstruction and rendering quality for transmissive scenes.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper presents TransmissiveGS, a Gaussian Splatting framework for transmissive scene reconstruction and rendering. It models scenes using a dual-Gaussian representation, applies deferred shading to jointly render components, exploits multi-view inconsistency residuals to disentangle reflection from transmission, introduces a reflection light field for near-field effects, and adds high-frequency regularization. A new synthetic dataset is contributed, with experiments claiming consistent outperformance over prior GS-based methods on both synthetic and real-world transmissive scenes.
Significance. If the disentanglement and rendering claims hold with robust validation, the work would advance novel view synthesis for common but challenging transmissive surfaces (e.g., glass with environment reflections), extending Gaussian Splatting to handle dual geometry and radiance. The contributed synthetic dataset is a clear strength, providing a benchmark for future methods. The residual-guided and reflection light field components offer practical modeling innovations for light transport ambiguities.
major comments (2)
- [§3.2] §3.2 (Residual-Guided Disentanglement): The central claim that residuals from an initial multi-view-consistent reconstruction provide reliable cues to separate reflection and transmission components is load-bearing for the dual-Gaussian model and outperformance results. The approach starts from entangled observations without an explicit separation loss or ground-truth anchoring; if transmitted high-frequency details or partially view-consistent reflections produce noisy residuals, Gaussians may be misassigned. The high-frequency regularization and reflection light field address symptoms but not the root ambiguity. An ablation removing the residual cue or reporting separation accuracy metrics on the synthetic dataset is required to support the disentanglement.
- [Experiments] Experiments section (results tables): The abstract and claims assert consistent outperformance in reconstruction and rendering quality, but the provided description lacks detailed quantitative tables, per-scene breakdowns, ablation studies on the dual-Gaussian and deferred shading components, or error analysis (e.g., PSNR/SSIM deltas with standard deviations). Without these, the magnitude and reliability of improvements over baselines cannot be assessed, especially given the unverified experimental outcomes noted in the review.
minor comments (2)
- [Method] The notation and formulation of the reflection light field and deferred shading function should include explicit equations with variable definitions for clarity.
- [Figures] Figure captions for qualitative results on real-world scenes could better highlight the disentangled components (reflection vs. transmission) to aid reader interpretation.
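The separation-accuracy metric requested above could be as simple as precision/recall of per-Gaussian assignment against the synthetic dataset's ground-truth labels. A sketch with mock labels (not paper results; `precision_recall` is a hypothetical helper):

```python
import numpy as np

def precision_recall(gt, pred):
    """Precision and recall for the reflective class of Gaussian assignment.

    gt, pred : boolean arrays over Gaussians; True = assigned to the
    reflection set (gt from synthetic ground truth, pred from the method).
    """
    tp = np.sum(gt & pred)                  # correctly labeled reflective
    precision = tp / max(np.sum(pred), 1)   # guard against empty prediction
    recall = tp / max(np.sum(gt), 1)
    return precision, recall

gt   = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)  # 3 reflective Gaussians
pred = np.array([1, 1, 0, 1, 0, 0, 0, 0], dtype=bool)  # one miss, one false hit
p, r = precision_recall(gt, pred)  # both 2/3 on this toy example
```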
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed review of our manuscript. We address each major comment below with clarifications and commit to specific revisions that will strengthen the presentation and validation of our method.
Point-by-point responses
-
Referee: [§3.2] §3.2 (Residual-Guided Disentanglement): The central claim that residuals from an initial multi-view-consistent reconstruction provide reliable cues to separate reflection and transmission components is load-bearing for the dual-Gaussian model and outperformance results. The approach starts from entangled observations without an explicit separation loss or ground-truth anchoring; if transmitted high-frequency details or partially view-consistent reflections produce noisy residuals, Gaussians may be misassigned. The high-frequency regularization and reflection light field address symptoms but not the root ambiguity. An ablation removing the residual cue or reporting separation accuracy metrics on the synthetic dataset is required to support the disentanglement.
Authors: We appreciate the referee's focus on the reliability of the residual cue, which is indeed central to our disentanglement strategy. Our approach exploits the inherent multi-view inconsistency of near-field reflections (as opposed to the more consistent transmission) to derive residuals from an initial multi-view-consistent reconstruction; these residuals then guide Gaussian assignment without requiring an explicit separation loss during training. The reflection light field and high-frequency regularization further stabilize the process by modeling near-field effects and preserving details. To directly validate this mechanism, we will add an ablation that removes the residual-guided component and measures the resulting drop in performance. We will also report separation accuracy metrics (e.g., precision/recall of reflective vs. transmissive Gaussian assignment) on our synthetic dataset, which provides ground-truth component labels. These results will be included in the revised manuscript. revision: yes
-
Referee: [Experiments] Experiments section (results tables): The abstract and claims assert consistent outperformance in reconstruction and rendering quality, but the provided description lacks detailed quantitative tables, per-scene breakdowns, ablation studies on the dual-Gaussian and deferred shading components, or error analysis (e.g., PSNR/SSIM deltas with standard deviations). Without these, the magnitude and reliability of improvements over baselines cannot be assessed, especially given the unverified experimental outcomes noted in the review.
Authors: We agree that expanded experimental reporting is necessary for a thorough assessment of our claims. In the revised manuscript we will present complete quantitative tables containing per-scene PSNR, SSIM, and LPIPS values for all compared methods on both synthetic and real scenes. We will add dedicated ablation studies that isolate the dual-Gaussian representation and the deferred shading function, reporting their individual contributions. Error analysis will include mean metrics accompanied by standard deviations across scenes. All numerical results will be re-verified and presented with full implementation details to ensure transparency and reproducibility. revision: yes
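The promised error analysis amounts to per-scene metrics with mean and spread. A sketch of the PSNR part with mock scenes (values are illustrative, not results from the paper):

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
scores = []
for _ in range(4):  # four mock scenes standing in for the benchmark
    ref = rng.random((8, 8, 3))
    est = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0.0, 1.0)
    scores.append(psnr(ref, est))
mean, std = np.mean(scores), np.std(scores, ddof=1)
# Report as e.g. "PSNR 26.1 ± 0.4 dB" rather than a bare mean.
```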
Circularity Check
New modeling components introduced independently; no reduction of claims to fitted inputs or self-definitions
full rationale
The derivation introduces dual-Gaussian representation, deferred shading, residual cues from multi-view inconsistency, reflection light field, and high-frequency regularization as distinct architectural choices. These are not defined in terms of the target disentangled outputs or fitted parameters that would reproduce the input observations by construction. The use of residuals as cues relies on an external physical property (view-inconsistency of reflections) rather than circularly assuming the separation result. No self-citation chains, uniqueness theorems from prior author work, or ansatzes smuggled via citation are load-bearing in the provided description. The central reconstruction/rendering claims therefore retain independent content beyond the inputs.
Axiom & Free-Parameter Ledger
invented entities (2)
- dual-Gaussian representation: no independent evidence
- reflection light field: no independent evidence
Reference graph
Works this paper leans on
-
[1]
Nerd: Neural reflectance decomposition from image collections
Mark Boss, Raphael Braun, Varun Jampani, Jonathan T Barron, Ce Liu, and Hendrik Lensch. Nerd: Neural reflectance decomposition from image collections. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12684–12694, 2021
work page 2021
-
[2]
Junming Cao, Jiadi Cui, and Sören Schwertfeger. Glassgaussian: extending 3d gaussian splatting for realistic imperfections and glass materials. IEEE Access, 2025
work page 2025
-
[3]
A generic deep architecture for single image reflection removal and image smoothing
Qingnan Fan, Jiaolong Yang, Gang Hua, Baoquan Chen, and David Wipf. A generic deep architecture for single image reflection removal and image smoothing. In Proceedings of the IEEE international conference on computer vision, pages 3238–3247, 2017
work page 2017
-
[4]
Data-driven 3d primitives for single image understanding
David F Fouhey, Abhinav Gupta, and Martial Hebert. Data-driven 3d primitives for single image understanding. In Proceedings of the IEEE International Conference on Computer Vision, pages 3392–3399, 2013
work page 2013
-
[5]
Planar reflection-aware neural radiance fields
Chen Gao, Yipeng Wang, Changil Kim, Jia-Bin Huang, and Johannes Kopf. Planar reflection-aware neural radiance fields. In SIGGRAPH Asia 2024 Conference Papers, pages 1–10, 2024
work page 2024
-
[6]
Irgs: Inter-reflective gaussian splatting with 2d gaussian ray tracing
Chun Gu, Xiaofei Wei, Zixuan Zeng, Yuxuan Yao, and Li Zhang. Irgs: Inter-reflective gaussian splatting with 2d gaussian ray tracing. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 10943–10952, 2025
work page 2025
-
[7]
Robust separation of reflection from multiple images
Xiaojie Guo, Xiaochun Cao, and Yi Ma. Robust separation of reflection from multiple images. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2187–2194, 2014
work page 2014
-
[8]
Nerfren: Neural radiance fields with reflections
Yuan-Chen Guo, Di Kang, Linchao Bao, Yu He, and Song-Hai Zhang. Nerfren: Neural radiance fields with reflections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18409–18418, 2022
work page 2022
-
[9]
2d gaussian splatting for geometrically accurate radiance fields
Binbin Huang, Zehao Yu, Anpei Chen, Andreas Geiger, and Shenghua Gao. 2d gaussian splatting for geometrically accurate radiance fields. In ACM SIGGRAPH 2024 conference papers, pages 1–11, 2024
work page 2024
-
[10]
Letian Huang, Dongwei Ye, Jialin Dan, Chengzhi Tao, Huiwen Liu, Kun Zhou, Bo Ren, Yuanqi Li, Yanwen Guo, and Jie Guo. Transparentgs: Fast inverse rendering of transparent objects with gaussians. ACM Transactions on Graphics (TOG), 44(4):1–17, 2025
work page 2025
-
[11]
Gaussianshader: 3d gaussian splatting with shading functions for reflective surfaces
Yingwenqi Jiang, Jiadong Tu, Yuan Liu, Xifeng Gao, Xiaoxiao Long, Wenping Wang, and Yuexin Ma. Gaussianshader: 3d gaussian splatting with shading functions for reflective surfaces. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5322–5332, 2024
work page 2024
-
[12]
Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4):1–14, 2023
work page 2023
-
[13]
Polarized reflection removal with perfect alignment in the wild
Chenyang Lei, Xuhua Huang, Mengdi Zhang, Qiong Yan, Wenxiu Sun, and Qifeng Chen. Polarized reflection removal with perfect alignment in the wild. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1750–1758, 2020
work page 2020
-
[14]
Anat Levin and Yair Weiss. User assisted separation of reflections from a single image using a sparsity prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(9):1647–1654, 2007
work page 2007
-
[15]
Single image reflection removal through cascaded refinement
Chao Li, Yixiao Yang, Kun He, Stephen Lin, and John E Hopcroft. Single image reflection removal through cascaded refinement. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3565–3574, 2020
work page 2020
-
[16]
Mingwei Li, Pu Pang, Hehe Fan, Hua Huang, and Yi Yang. Tsgs: Improving gaussian splatting for transparent surface reconstruction via normal and de-lighting priors. In Proceedings of the 33rd ACM International Conference on Multimedia, pages 7220–7229, 2025
work page 2025
-
[17]
Single image layer separation using relative smoothness
Yu Li and Michael S Brown. Single image layer separation using relative smoothness. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2752–2759, 2014
work page 2014
-
[18]
Gs-ir: 3d gaussian splatting for inverse rendering
Zhihao Liang, Qi Zhang, Ying Feng, Ying Shan, and Kui Jia. Gs-ir: 3d gaussian splatting for inverse rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 21644–21653, 2024
work page 2024
-
[19]
Yong Liu, Keyang Ye, Tianjia Shao, and Kun Zhou. Tr-gaussians: High-fidelity real-time rendering of planar transmission and reflection with 3d gaussian splatting. IEEE Transactions on Visualization and Computer Graphics, 2026
work page 2026
-
[20]
Yuan Liu, Peng Wang, Cheng Lin, Xiaoxiao Long, Jiepeng Wang, Lingjie Liu, Taku Komura, and Wenping Wang. Nero: Neural geometry and brdf reconstruction of reflective objects from multiview images. ACM Transactions on Graphics (ToG), 42(4):1–22, 2023
work page 2023
-
[21]
Nerf: Representing scenes as neural radiance fields for view synthesis
Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision, pages 405–421. Springer, 2020
work page 2020
-
[22]
Deepsdf: Learning continuous signed distance functions for shape representation
Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 165–174, 2019
work page 2019
-
[23]
Looking through the glass: Neural surface reconstruction against high specular reflections
Jiaxiong Qiu, Peng-Tao Jiang, Yifan Zhu, Ze-Xin Yin, Ming-Ming Cheng, and Bo Ren. Looking through the glass: Neural surface reconstruction against high specular reflections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20823–20833, 2023
work page 2023
-
[24]
On the spectral bias of neural networks
Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred Hamprecht, Yoshua Bengio, and Aaron Courville. On the spectral bias of neural networks. In International conference on machine learning, pages 5301–5310. PMLR, 2019
work page 2019
-
[25]
Reflection removal using ghosting cues
YiChang Shih, Dilip Krishnan, Fredo Durand, and William T Freeman. Reflection removal using ghosting cues. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3193–3201, 2015
work page 2015
-
[26]
Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan Barron, and Ren Ng. Fourier features let networks learn high frequency functions in low dimensional domains. Advances in neural information processing systems, 33:7537–7547, 2020
work page 2020
-
[27]
Jiajun Tang, Fan Fei, Zhihao Li, Xiao Tang, Shiyong Liu, Youyu Chen, Binxiao Huang, Zhenyu Chen, Xiaofei Wu, and Boxin Shi. Spectre-gs: Modeling highly specular surfaces with reflected nearby objects by tracing rays in 3d gaussian splatting. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 16133–16142, 2025
work page 2025
-
[28]
3igs: Factorised tensorial illumination for 3d gaussian splatting
Zhe Jun Tang and Tat-Jen Cham. 3igs: Factorised tensorial illumination for 3d gaussian splatting. In European Conference on Computer Vision, pages 143–159. Springer, 2024
work page 2024
-
[29]
Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T Barron, and Pratul P Srinivasan. Ref-nerf: Structured view-dependent appearance for neural radiance fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 47(11):9426–9437, 2024
work page 2024
-
[30]
Nerf-casting: Improved view-dependent appearance with consistent reflections
Dor Verbin, Pratul P Srinivasan, Peter Hedman, Ben Mildenhall, Benjamin Attal, Richard Szeliski, and Jonathan T Barron. Nerf-casting: Improved view-dependent appearance with consistent reflections. In SIGGRAPH Asia 2024 Conference Papers, pages 1–10, 2024
work page 2024
-
[31]
Linhan Wang, Kai Cheng, Shuo Lei, Shengkun Wang, Wei Yin, Chenyang Lei, Xiaoxiao Long, and Chang-Tien Lu. Dc-gaussian: Improving 3d gaussian splatting for reflective dash cam videos. Advances in Neural Information Processing Systems, 37:99898–99920, 2024
work page 2024
-
[32]
Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600–612, 2004
work page 2004
-
[33]
Single image reflection removal exploiting misaligned training data and network enhancements
Kaixuan Wei, Jiaolong Yang, Ying Fu, David Wipf, and Hua Huang. Single image reflection removal exploiting misaligned training data and network enhancements. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8178–8187, 2019
work page 2019
-
[34]
Flash-splat: 3d reflection removal with flash cues and gaussian splats
Mingyang Xie, Haoming Cai, Sachin Shah, Yiran Xu, Brandon Y Feng, Jia-Bin Huang, and Christopher A Metzler. Flash-splat: 3d reflection removal with flash cues and gaussian splats. In European Conference on Computer Vision, pages 122–139. Springer, 2024
work page 2024
-
[35]
Envgs: Modeling view-dependent appearance with environment gaussian
Tao Xie, Xi Chen, Zhen Xu, Yiman Xie, Yudong Jin, Yujun Shen, Sida Peng, Hujun Bao, and Xiaowei Zhou. Envgs: Modeling view-dependent appearance with environment gaussian. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 5742–5751, 2025
work page 2025
-
[36]
Tianfan Xue, Michael Rubinstein, Ce Liu, and William T Freeman. A computational approach for obstruction-free photography. ACM Transactions on Graphics (TOG), 34(4):1–11, 2015
work page 2015
-
[37]
Yuxuan Yao, Zixuan Zeng, Chun Gu, Xiatian Zhu, and Li Zhang. Reflective gaussian splatting. In The Thirteenth International Conference on Learning Representations, 2025
work page 2025
-
[38]
Chongjie Ye, Lingteng Qiu, Xiaodong Gu, Qi Zuo, Yushuang Wu, Zilong Dong, Liefeng Bo, Yuliang Xiu, and Xiaoguang Han. Stablenormal: Reducing diffusion variance for stable and sharp normal. ACM Transactions on Graphics (ToG), 43(6):1–18, 2024
work page 2024
-
[39]
3d gaussian splatting with deferred reflection
Keyang Ye, Qiming Hou, and Kun Zhou. 3d gaussian splatting with deferred reflection. In ACM SIGGRAPH 2024 Conference Papers, pages 1–10, 2024
work page 2024
-
[40]
Multi-space neural radiance fields
Ze-Xin Yin, Jiaxiong Qiu, Ming-Ming Cheng, and Bo Ren. Multi-space neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12407–12416, 2023
work page 2023
-
[41]
Kunnong Zeng, Chensheng Peng, Yichen Xie, Masayoshi Tomizuka, and Cem Yuksel. Rt-gs: Gaussian splatting with reflection and transmittance primitives. arXiv preprint arXiv:2604.00509, 2026
-
[42]
Neilf++: Inter-reflectable light fields for geometry and material estimation
Jingyang Zhang, Yao Yao, Shiwei Li, Jingbo Liu, Tian Fang, David McKinnon, Yanghai Tsin, and Long Quan. Neilf++: Inter-reflectable light fields for geometry and material estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3601–3610, 2023
work page 2023
-
[43]
Kai Zhang, Fujun Luan, Qianqian Wang, Kavita Bala, and Noah Snavely. Physg: Inverse rendering with spherical gaussians for physics-based material editing and relighting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5453–5462, 2021
work page 2021
-
[44]
The unreasonable effectiveness of deep features as a perceptual metric
Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 586–595, 2018
work page 2018
-
[45]
Materialrefgs: Reflective gaussian splatting with multi-view consistent material inference
Wenyuan Zhang, Jimin Tang, Weiqi Zhang, Yi Fang, Yu-Shen Liu, and Zhizhong Han. Materialrefgs: Reflective gaussian splatting with multi-view consistent material inference. In The Thirty-ninth Annual Conference on Neural Information Processing Systems
-
[46]
Xiuming Zhang, Pratul P Srinivasan, Boyang Deng, Paul Debevec, William T Freeman, and Jonathan T Barron. Nerfactor: Neural factorization of shape and reflectance under an unknown illumination. ACM Transactions on Graphics (ToG), 40(6):1–18, 2021
work page 2021
-
[47]
Ref-gs: Directional factorization for 2d gaussian splatting
Youjia Zhang, Anpei Chen, Yumin Wan, Zikai Song, Junqing Yu, Yawei Luo, and Wei Yang. Ref-gs: Directional factorization for 2d gaussian splatting. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 26483–26492, 2025
work page 2025
-
[48]
Rtr-gs: 3d gaussian splatting for inverse rendering with radiance transfer and reflection
Yongyang Zhou, Fanglue Zhang, Zichen Wang, and Lei Zhang. Rtr-gs: 3d gaussian splatting for inverse rendering with radiance transfer and reflection. In Proceedings of the 33rd ACM International Conference on Multimedia, pages 6888–6897, 2025
work page 2025