Recognition: 2 theorem links
PD-4DGS: Progressive Decomposition of 4D Gaussian Splatting for Bandwidth-Adaptive Dynamic Scene Streaming
Pith reviewed 2026-05-13 01:51 UTC · model grok-4.3
The pith
4D Gaussian Splatting models decompose into three layers that stream progressively, cutting data use by over 60 percent while allowing immediate rendering of any prefix.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
PD-4DGS introduces Hierarchical Deformation Decomposition to split the temporal deformation networks latent in 4DGS into a static scaffold layer, a global deformation layer, and a local refinement layer. These layers are transmitted independently so any initial segment of the bitstream is already renderable. A Gaussian-entropy attribute rate-distortion loss and a temporal mask consistency regulariser keep the base layer compact and free of flicker, while a capacity-weighted rollout schedule with learned activation rate prevents under-training without per-scene tuning.
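The decoding contract the core claim describes can be sketched in a few lines: layers arrive in a fixed order, and rendering succeeds from any contiguous prefix. A minimal illustration (layer names are from the paper; the renderer and data handling are hypothetical, not the authors' implementation):

```python
# Sketch of a prefix-renderable layered bitstream. Layer names come from
# PD-4DGS; everything else is an illustrative stand-in, not the paper's code.

LAYERS = ["static_scaffold", "global_deformation", "local_refinement"]

def render_from_prefix(received):
    """Render using however many leading layers have arrived."""
    usable = []
    for name in LAYERS:
        if name not in received:
            break          # layers are only useful as a contiguous prefix
        usable.append(name)
    if not usable:
        return None        # nothing renderable yet
    return f"render({' + '.join(usable)})"

print(render_from_prefix({"static_scaffold"}))
print(render_from_prefix({"static_scaffold", "global_deformation"}))
```

The point of the contract is the `break`: a later layer that arrives without its predecessors contributes nothing, which is what makes the stream DASH/HLS-shaped.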
What carries the argument
Hierarchical Deformation Decomposition (HDD) that extracts the coarse-to-fine motion hierarchy already present in 4DGS into three independently transmittable layers: static scaffold, global deformation, and local refinement.
If this is right
- The first transmitted layer alone produces a viewable image, so rendering begins after a few seconds rather than after the entire file arrives.
- Total streamed size falls by more than 60 percent at the same final rendering fidelity.
- First-frame latency on a 2 Mbps link drops from 73-930 seconds to roughly 1.7 seconds.
- The resulting bitstream is directly compatible with DASH and HLS adaptive streaming systems.
- A single training run yields one model that supports many different bandwidth conditions without retraining.
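The latency figures above are consistent with simple transfer arithmetic: 2 Mbps is 250,000 bytes per second, so 73–930 s of black screen implies a monolithic file of roughly 18–233 MB, and a 1.7 s first frame implies a base layer of about 0.4 MB. A back-of-envelope check (the byte sizes are inferred from the quoted latencies, not reported by the paper):

```python
# Back-of-envelope check of the quoted latencies on a 2 Mbps link.
# The byte sizes below are inferred from the latencies, not reported values.

BYTES_PER_SEC = 2.0e6 / 8          # 2 Mbps = 250,000 B/s

def first_frame_latency_s(bytes_before_first_render):
    """Seconds until the first frame can be rendered."""
    return bytes_before_first_render / BYTES_PER_SEC

# Monolithic 4DGS: the whole file must arrive before anything renders.
small_scene = first_frame_latency_s(18.25e6)    # 73.0 s
large_scene = first_frame_latency_s(232.5e6)    # 930.0 s
# PD-4DGS: only the static scaffold layer is needed.
base_layer = first_frame_latency_s(0.425e6)     # 1.7 s
print(small_scene, large_scene, base_layer)
```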
Where Pith is reading between the lines
- The same layer-separation idea may extend to other deformation-based dynamic rendering methods that store motion in networks.
- Viewers could receive only the layers their current bandwidth and device can handle, with higher layers fetched later if conditions improve.
- The approach could support multi-user sessions in which different clients request different numbers of layers depending on their individual links.
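The multi-client reading above reduces to a simple selection rule: each client fetches the longest layer prefix whose cumulative size fits its budget. A hedged sketch (the per-layer sizes are illustrative placeholders, not measured values):

```python
# Hypothetical client-side layer selection: take the longest prefix of
# layers whose cumulative size fits the bandwidth/device budget.
# Sizes in MB are illustrative placeholders, not measurements.

LAYER_SIZES_MB = [
    ("static_scaffold",    0.4),
    ("global_deformation", 1.2),
    ("local_refinement",   3.0),
]

def select_layers(budget_mb):
    chosen, used = [], 0.0
    for name, size in LAYER_SIZES_MB:
        if used + size > budget_mb:
            break                  # stop at the first layer that overflows
        chosen.append(name)
        used += size
    return chosen

print(select_layers(0.5))   # a constrained client stops at the scaffold
print(select_layers(5.0))   # a well-connected client takes all three
```

Because selection is prefix-only, two clients with different budgets still decode mutually consistent scenes, one just coarser than the other.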
Load-bearing premise
The motion patterns inside 4DGS can be factored into three independent layers without losing render quality when only the earlier layers are received.
What would settle it
On the Dycheck iPhone benchmark, compare PSNR and perceptual quality of renders produced from only the first one or two layers against the full model at identical total bitrates; a consistent quality drop at low rates would falsify the claim that the decomposition preserves fidelity.
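The proposed test is mechanical: render each layer prefix, compute PSNR against ground truth at matched total bitrate, and look for a consistent low-rate drop. The PSNR metric itself is standard; the toy comparison around it is a sketch of the protocol, not the paper's evaluation code:

```python
# PSNR for images normalised to [0, 1]; the prefix comparison is a toy
# sketch of the proposed falsification protocol, not the paper's code.
import math

def psnr(rendered, reference):
    """PSNR in dB for two equal-length pixel lists with values in [0, 1]."""
    mse = sum((a - b) ** 2 for a, b in zip(rendered, reference)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(1.0 / mse)

# Toy check: a prefix render with small errors vs. a perfect full render.
reference = [0.2, 0.5, 0.8, 0.4]
full      = [0.2, 0.5, 0.8, 0.4]       # identical: infinite PSNR
prefix    = [0.21, 0.49, 0.82, 0.41]   # slight degradation
print(psnr(full, reference))
print(psnr(prefix, reference))         # still well above 30 dB
```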
Original abstract
4D Gaussian Splatting (4DGS) enables high-quality dynamic novel view synthesis, yet current models remain monolithic bitstreams that clients must download in full before any frame can be rendered, causing black-screen waits of tens to hundreds of seconds on mobile bandwidth and leaving 4DGS incompatible with modern adaptive-bitrate delivery. Progressive 3DGS compression alleviates this for static scenes, but it acts only on spatial anchors and cannot partition the temporal deformation networks that dominate dynamic-scene size. We present PD-4DGS, the first framework for progressive compression and on-demand transmission of 4DGS. Hierarchical Deformation Decomposition (HDD) externalises the coarse-to-fine motion hierarchy already latent in 4DGS into three independently transmittable layers -- a static scaffold, a global deformation, and a local refinement -- so that any prefix of the bitstream is already renderable, turning a single training run into a scalable, DASH/HLS-compatible bitstream. A Gaussian-entropy attribute rate-distortion loss together with a temporal mask consistency regulariser shrink the base layer while suppressing low-bitrate flicker; a capacity-weighted rollout schedule, gated online by a learnt activation rate rho, then prevents deformation-network under-training without any per-scene hyperparameter. On the Dycheck iPhone benchmark, PD-4DGS cuts the streamed bitstream by >60% at matched rendering fidelity and reduces first-frame latency from 73--930 s to ~1.7 s on a 2 Mbps link, uniquely enabling true on-demand progressive streaming for 4DGS.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript presents PD-4DGS, a framework for progressive compression and on-demand transmission of 4D Gaussian Splatting. It introduces Hierarchical Deformation Decomposition (HDD) to externalize the latent coarse-to-fine motion hierarchy into three independently transmittable layers (static scaffold, global deformation, local refinement), a Gaussian-entropy attribute rate-distortion loss, a temporal mask consistency regulariser, and a capacity-weighted rollout schedule gated by a learnt activation rate ρ. The central claims are that this produces a DASH/HLS-compatible bitstream from a single training run, cuts streamed size by >60% at matched fidelity, and reduces first-frame latency from 73-930 s to ~1.7 s on a 2 Mbps link, as evaluated on the Dycheck iPhone benchmark.
Significance. If the results hold, this would be a significant contribution to dynamic scene rendering by making 4DGS compatible with adaptive bitrate streaming protocols. The single-training-run progressive bitstream and elimination of per-scene hyperparameters via the rollout schedule are notable strengths that could enable practical on-demand applications on bandwidth-constrained devices. The decomposition approach bridges static progressive compression techniques with temporal deformation networks.
major comments (3)
- [§3.1] HDD definition: The headline claim that any prefix of the three-layer bitstream remains renderable at matched fidelity assumes the deformation field in 4DGS admits a clean additive coarse-to-fine factorization. If local refinements depend non-additively on global motion, partial streams will produce temporal inconsistencies or quality drops that the temporal mask consistency regulariser cannot fully correct; no ablations quantify render metrics (PSNR/SSIM, flicker) for each layer prefix on held-out scenes.
- [§4] Experiments: Quantitative claims of >60% size reduction and latency drop to ~1.7 s lack reported baselines (specific 4DGS variants or prior compression methods), error bars across runs, exact training protocols, and per-scene results. This prevents verification of the performance gains, particularly given the new losses and ρ that could affect metric independence.
- [§3.3] Losses and rollout: The Gaussian-entropy loss and capacity-weighted schedule with learnt ρ are asserted to shrink the base layer and prevent under-training without per-scene tuning, yet no analysis shows these steps are independent of the final reported metrics or that ρ does not introduce hidden fitting that undermines the 'parameter-free' aspect of the rollout.
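The first objection can be made concrete: if the decomposition is additive, the full deformation at a Gaussian equals a global term plus a local term, and the residual after dropping the local layer is exactly what a two-layer prefix gives up. A toy residual check (all functions are illustrative stand-ins, not the paper's model):

```python
# Toy check of the additivity assumption behind HDD: full deformation
# decomposes as global + local, so the residual after dropping the local
# layer bounds the error of a two-layer prefix stream. The functions are
# illustrative stand-ins, not the paper's networks.

def global_deform(x, t):
    return x + 0.5 * t            # large-scale motion

def local_refine(x, t):
    return 0.05 * t * x           # fine detail on top

def full_deform(x, t):
    return global_deform(x, t) + local_refine(x, t)

def prefix_residual(x, t):
    """Error incurred by rendering from the global layer alone."""
    return abs(full_deform(x, t) - global_deform(x, t))

# If local refinement instead depended on the *output* of global_deform
# in a nonlinear way, this residual would no longer capture the true
# prefix error, which is the referee's worry.
print(prefix_residual(1.0, 0.2))
```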
minor comments (2)
- [Notation] The notation and optimization details for the activation rate ρ should be clarified, as it is both described as learnt and listed among free parameters.
- [Related Work] Related work could more explicitly contrast HDD with prior progressive 3DGS methods to highlight the temporal handling novelty.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed feedback on PD-4DGS. We appreciate the emphasis on verifying the progressive decomposition claims and experimental rigor. We address each major comment below with explanations and revisions where needed.
Point-by-point responses
- Referee: [§3.1] HDD definition: The headline claim that any prefix of the three-layer bitstream remains renderable at matched fidelity assumes the deformation field in 4DGS admits a clean additive coarse-to-fine factorization. If local refinements depend non-additively on global motion, partial streams will produce temporal inconsistencies or quality drops that the temporal mask consistency regulariser cannot fully correct; no ablations quantify render metrics (PSNR/SSIM, flicker) for each layer prefix on held-out scenes.
Authors: HDD is designed to externalize the latent coarse-to-fine hierarchy already present in 4DGS deformation networks, where global deformation captures large-scale temporal changes and local refinement adds fine details. This structure supports an approximately additive factorization, with the temporal mask consistency regulariser explicitly mitigating potential inconsistencies in partial streams. While the design rationale is grounded in the 4DGS architecture, we agree that quantitative validation would strengthen the claim. We will add ablations in the revised manuscript reporting PSNR, SSIM, and flicker metrics for each layer prefix on held-out scenes. revision: yes
- Referee: [§4] Experiments: Quantitative claims of >60% size reduction and latency drop to ~1.7 s lack reported baselines (specific 4DGS variants or prior compression methods), error bars across runs, exact training protocols, and per-scene results. This prevents verification of the performance gains, particularly given the new losses and ρ that could affect metric independence.
Authors: Comparisons to vanilla 4DGS and prior dynamic compression methods are included in Section 4 on the Dycheck iPhone benchmark, with the >60% reduction and latency figures derived from those. Error bars from multiple runs, exact training protocols, and per-scene results appear in the supplementary material and appendix. To facilitate easier verification, we will incorporate key baseline tables, error bars, and per-scene breakdowns into the main text during revision. revision: partial
- Referee: [§3.3] Losses and rollout: The Gaussian-entropy loss and capacity-weighted schedule with learnt ρ are asserted to shrink the base layer and prevent under-training without per-scene tuning, yet no analysis shows these steps are independent of the final reported metrics or that ρ does not introduce hidden fitting that undermines the 'parameter-free' aspect of the rollout.
Authors: The Gaussian-entropy rate-distortion loss and capacity-weighted rollout gated by learnt ρ are introduced to enable a single training run that produces a progressive bitstream without manual per-scene hyperparameter tuning. ρ adapts the rollout schedule online to prevent under-training of deformation layers. Supporting analysis of component contributions is provided in Section 3.3 and the supplement. To demonstrate independence from final metrics, we will add an explicit ablation study in the revision isolating the effect of each loss term and the ρ-gated schedule. revision: yes
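The ρ-gated rollout the authors describe can be read as a simple online gate: extend training capacity to the next layer only once the current layer's activation rate is high enough. A speculative sketch of that reading (the gate, the threshold, and the stopping rule are assumptions, not the paper's algorithm):

```python
# Speculative sketch of a capacity-weighted rollout gated by an
# activation rate rho: training capacity extends to the next layer only
# once the current layer's rho clears a threshold. The threshold and the
# stopping rule are assumptions, not the paper's algorithm.

def rollout_schedule(activation_rates, threshold=0.8):
    """Return how many layers to train, given per-layer rho estimates."""
    active = 1                         # the static scaffold always trains
    for rho in activation_rates:
        if rho >= threshold:           # layer is sufficiently trained
            active += 1                # unlock the next layer
        else:
            break                      # stop here to avoid under-training
    return active

print(rollout_schedule([0.9, 0.4]))    # second layer not yet ready
print(rollout_schedule([0.9, 0.85]))   # all three layers unlocked
```

Under this reading the only free quantity is ρ itself, which is learnt online, matching the claim that no per-scene hyperparameter is needed.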
Circularity Check
No significant circularity in PD-4DGS derivation; new decomposition and losses are introduced independently of fitted inputs.
full rationale
The paper proposes Hierarchical Deformation Decomposition (HDD) to externalize a latent coarse-to-fine hierarchy in 4DGS, along with a Gaussian-entropy loss, temporal mask consistency regulariser, and a capacity-weighted rollout gated by a learnt activation rate rho. These elements are presented as novel architectural and training choices validated on the Dycheck benchmark, without any load-bearing step that reduces by construction to a prior fit, self-citation, or renamed known result. The core claims (progressive bitstream, latency reduction) rest on the external separability assumption and empirical results rather than tautological re-use of inputs. No equations or sections exhibit self-definitional loops or fitted parameters renamed as independent predictions.
Axiom & Free-Parameter Ledger
free parameters (1)
- rho (ρ), the learnt activation rate gating the rollout schedule
axioms (1)
- domain assumption: Standard 4DGS models contain a latent coarse-to-fine motion hierarchy that can be externalized into independent layers.
invented entities (3)
- Hierarchical Deformation Decomposition (HDD): no independent evidence
- Gaussian-entropy attribute rate-distortion loss: no independent evidence
- temporal mask consistency regulariser: no independent evidence
Lean theorems connected to this paper
- reality_from_one_distinction (IndisputableMonolith/Foundation/AbsoluteFloorClosure.lean), tag: unclear. The relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "Hierarchical Deformation Decomposition (HDD) externalises the coarse-to-fine motion hierarchy already latent in 4DGS into three independently transmittable layers"
- washburn_uniqueness_aczel (IndisputableMonolith/Cost/FunctionalEquation.lean), tag: unclear. The relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "Gaussian-entropy attribute rate–distortion loss together with a temporal mask consistency regulariser"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
[1] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3D Gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4):139:1–139:14, 2023.
[2] Zeyu Yang, Hongye Yang, Zijie Pan, and Li Zhang. Real-time photorealistic dynamic scene representation and rendering with 4D Gaussian splatting. arXiv preprint arXiv:2310.10642, 2023.
[3] Yuanxing Duan, Fangyin Wei, Qiyu Dai, Yuhang He, Wenzheng Chen, and Baoquan Chen. 4D-Rotor Gaussian splatting: Towards efficient novel view synthesis for dynamic scenes. In ACM SIGGRAPH 2024 Conference Papers, 2024.
[4] Yi-Hua Huang, Yang-Tian Sun, Ziyi Yang, Xiaoyang Lyu, Yan-Pei Cao, and Xiaojuan Qi. SC-GS: Sparse-controlled Gaussian splatting for editable dynamic scenes. In CVPR, 2024.
[5] Ziyi Yang, Xinyu Gao, Wen Zhou, Shaohui Jiao, Yuqing Zhang, and Xiaogang Jin. Deformable 3D Gaussians for high-fidelity monocular dynamic scene reconstruction. In CVPR, 2024.
[6] Yiqing Liang, Numair Khan, Zhengqin Li, Thu Nguyen-Phuoc, Douglas Lanman, James Tompkin, and Lei Xiao. GauFRe: Gaussian deformation fields for real-time dynamic novel view synthesis. arXiv preprint, 2024.
[7] Jongmin Park, Minh-Quan Viet Bui, Juan Luis Gonzalez Bello, Jaeho Moon, Jihyong Oh, and Munchurl Kim. SplineGS: Robust motion-adaptive spline for real-time dynamic 3D Gaussians from monocular video. In CVPR, 2025.
[8] Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 4D Gaussian splatting for real-time dynamic scene rendering. In CVPR, 2024.
[9] Bardienus P. Duisterhof, Zhao Mandi, Yunchao Yao, Jia-Wei Liu, Mike Zheng Shou, Shuran Song, and Jeffrey Ichnowski. MD-Splatting: Learning metric deformation from 4D Gaussians in highly deformable scenes. arXiv preprint arXiv:2312.00583, 2023.
[10] Youtian Lin, Zuozhuo Dai, Siyu Zhu, and Yao Yao. Gaussian-Flow: 4D reconstruction with dynamic 3D Gaussian particle. In CVPR, 2024.
[11] Agelos Kratimenos, Jiahui Lei, and Kostas Daniilidis. DynMF: Neural motion factorization for real-time dynamic view synthesis with 3D Gaussian splatting. In ECCV, 2025.
[12] Qiankun Gao, Yanmin Wu, Chengxiang Wen, Jiarui Meng, Luyang Tang, Jie Chen, Ronggang Wang, and Jian Zhang. RelayGS: Reconstructing dynamic scenes with large-scale and complex motions via relay Gaussians. arXiv preprint arXiv:2412.02493, 2024.
[13] Zhan Li, Zhang Chen, Zhong Li, and Yi Xu. Spacetime Gaussian feature splatting for real-time dynamic view synthesis. In CVPR, 2024.
[14] Woong Oh Cho, In Cho, Seoha Kim, Jeongmin Bae, Youngjung Uh, and Seon Joo Kim. 4D scaffold Gaussian splatting for memory-efficient dynamic scene reconstruction. arXiv preprint arXiv:2411.17044, 2024.
[15] Minseo Lee, Byeonghyeon Lee, Lucas Yunkyu Lee, Eunsoo Lee, Sangmin Kim, Seunghyeon Song, Joo Chan Lee, Jong Hwan Ko, Jaesik Park, and Eunbyung Park. Optimized minimal 4D Gaussian splatting. arXiv preprint arXiv:2510.03857, 2025.
[16] Zhiwen Fan, Kevin Wang, Kairun Wen, Zehao Zhu, Dejia Xu, and Zhangyang Wang. LightGaussian: Unbounded 3D Gaussian compression with 15× reduction and 200+ FPS. In NeurIPS, 2024.
[17] Simon Niedermayr, Josef Stumpfegger, and Rüdiger Westermann. Compressed 3D Gaussian splatting for accelerated novel view synthesis. In CVPR, 2024.
[18] Wieland Morgenstern, Florian Barthel, Anna Hilsmann, and Peter Eisert. Compact 3D scene representation via self-organizing Gaussian grids. arXiv preprint arXiv:2312.13299, 2023.
[19] Xinjie Zhang, Zhening Liu, Yifan Zhang, Xingtong Ge, Dailan He, Tongda Xu, Yan Wang, Zehong Lin, Shuicheng Yan, and Jun Zhang. MEGA: Memory-efficient 4D Gaussian splatting for dynamic scenes. In ICCV, 2025.
[20] Mufan Liu, Qi Yang, He Huang, Wenjie Huang, Zhenlong Yuan, Zhu Li, and Yiling Xu. Light4GS: Lightweight compact 4D Gaussian splatting generation via context model. arXiv preprint arXiv:2503.13948, 2025.
[21] Cheng-Yuan Ho et al. TED-4DGS: Temporally activated and embedding-based deformation for 4DGS compression. arXiv preprint arXiv:2512.05446, 2025.
[22] Hyeongmin Lee and Kyungjune Baek. Temporal smoothness-aware rate-distortion optimized 4D Gaussian splatting. arXiv preprint arXiv:2507.17336, 2025.
[23] Yihang Chen, Qianyi Wu, Weiyao Lin, Mehrtash Harandi, and Jianfei Cai. HAC: Hash-grid assisted context for 3D Gaussian splatting compression. In ECCV, 2024.
[24] Francesco Di Sario, Riccardo Renzulli, Marco Grangetto, Akihiro Sugimoto, and Enzo Tartaglione. GoDe: Gaussians on demand for progressive level of detail and scalable compression. arXiv preprint arXiv:2501.13558, 2025.
[25] Yuang Shi, Simone Gasparini, Géraldine Morin, and Wei Tsang Ooi. LapisGS: Layered progressive 3D Gaussian splatting for adaptive streaming. arXiv preprint arXiv:2408.14823, 2024.
[26] Brent Zoomers, Maarten Wijnants, Ivan Molenaers, Joni Vanherck, Jeroen Put, Lode Jorissen, and Nick Michiels. ProgS: Progressive rendering of Gaussian splats. arXiv preprint arXiv:2409.01761, 2024.
[27] He Huang, Wenjie Huang, Qi Yang, Yiling Xu, and Zhu Li. A hierarchical compression technique for 3D Gaussian splatting compression, 2024.
[28] Yuan-Chun Sun, Yuang Shi, Cheng-Tse Lee, Mufeng Zhu, Wei Tsang Ooi, Yao Liu, Chun-Ying Huang, and Cheng-Hsin Hsu. LTS: A DASH streaming system for dynamic multi-layer 3D Gaussian splatting scenes. In Proceedings of the 16th ACM Multimedia Systems Conference (MMSys), 2025.
[29] Geonsoo Kim, Seonghoon Park, Jeho Lee, Chanyoung Jung, Hyungchol Jun, and Hojung Cha. Vega: Fully immersive mobile volumetric video streaming with 3D Gaussian splatting. In Proceedings of the 31st Annual International Conference on Mobile Computing and Networking (MobiCom), 2025.
[30] Yu Gong, Lifei Li, et al. Adaptive 3D Gaussian splatting video streaming. arXiv preprint arXiv:2507.14432, 2025.
[31] Kai Katsumata, Duc Minh Vo, and Hideki Nakayama. A compact dynamic 3D Gaussian representation for real-time dynamic view synthesis. In ECCV, 2024.
[32] Kai Katsumata, Duc Minh Vo, and Hideki Nakayama. An efficient 3D Gaussian representation for monocular/multi-view dynamic scenes. arXiv preprint arXiv:2311.12897, 2023.
[33] Qingming Liu, Yuan Liu, Jiepeng Wang, Xianqiang Lv, Peng Wang, Wenping Wang, and Junhui Hou. MoDGS: Dynamic Gaussian splatting from casually-captured monocular videos. arXiv preprint arXiv:2406.00434, 2024.
[34] K. L. Navaneet, Kossar Pourahmadi Meibodi, Soroush Abbasi Koohpayegani, and Hamed Pirsiavash. Compact3D: Compressing Gaussian splat radiance field models with vector quantization. In ECCV, 2024.
[35] Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, and Eunbyung Park. Compact 3D Gaussian representation for radiance field. In CVPR, 2024.
[36] Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, and Bo Dai. Scaffold-GS: Structured 3D Gaussians for view-adaptive rendering. In CVPR, 2024.
[37] Yufei Wang, Zhihao Li, Lanqing Guo, Wenhan Yang, Alex C. Kot, and Bihan Wen. ContextGS: Compact 3D Gaussian splatting with anchor-level context model. In NeurIPS, 2024.
[38] Yu-Ting Zhan, Cheng-Yuan Ho, Hebi Yang, Yi-Hsin Chen, Jui Chiu Chiang, Yu-Lun Liu, and Wen-Hsiao Peng. CAT-3DGS: A context-adaptive triplane approach to rate-distortion-optimized 3DGS compression. In ICLR, 2025.
[39] Lei Liu, Zhenghao Chen, and Dong Xu. HEMGS: A hybrid entropy model for 3D Gaussian splatting data compression, 2024.
[40] Sharath Girish, Kamal Gupta, and Abhinav Shrivastava. EAGLES: Efficient accelerated 3D Gaussians with lightweight encodings. In ECCV, 2024.
[41] Joo Chan Lee, Daniel Rho, Xiangyu Sun, Jong Hwan Ko, and Eunbyung Park. Compact 3D Gaussian splatting for static and dynamic radiance fields. arXiv preprint arXiv:2408.03822, 2024.
[42] Kai Cheng, Xiaoxiao Long, Kaizhi Yang, Yao Yao, Wei Yin, Yuexin Ma, Wenping Wang, and Xuejin Chen. GaussianPro: 3D Gaussian splatting with progressive propagation. In ICML, 2024.
[43] Seungmin Jeon, Kwang Pyo Choi, Youngo Park, and Chang-Su Kim. Context-based trit-plane coding for progressive image compression. In CVPR, 2023.
[44] Jiakai Sun, Han Jiao, Guangyuan Li, Zhanjie Zhang, Lei Zhao, and Wei Xing. 3DGStream: On-the-fly training of 3D Gaussians for efficient streaming of photo-realistic free-viewpoint videos. In CVPR, 2024.
[45] Bangya Liu and Suman Banerjee. SWINGS: Sliding window Gaussian splatting for volumetric video streaming with arbitrary length. arXiv preprint arXiv:2409.07759, 2024.
[46] Yuanbo Xiangli, Linning Xu, Xingang Pan, Nanxuan Zhao, Anyi Rao, Christian Theobalt, Bo Dai, and Dahua Lin. BungeeNeRF: Progressive neural radiance field for extreme multi-scale scene rendering. In ECCV, 2022.
[47] Hang Gao, Ruilong Li, Shubham Tulsiani, Bryan Russell, and Angjoo Kanazawa. Monocular dynamic view synthesis: A reality check. In NeurIPS, 2022.
[48] Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T. Barron, Sofien Bouaziz, Dan B. Goldman, Ricardo Martin-Brualla, and Steven M. Seitz. HyperNeRF: A higher-dimensional representation for topologically varying neural radiance fields. ACM Transactions on Graphics, 40(6):238:1–238:12, 2021.