pith. machine review for the scientific record.

arxiv: 2605.11489 · v1 · submitted 2026-05-12 · 💻 cs.GR · cs.CV

Recognition: no theorem link

3DGS³: Joint Super Sampling and Frame Interpolation for Real-Time Large-Scale 3DGS Rendering

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 02:26 UTC · model grok-4.3

classification 💻 cs.GR cs.CV
keywords 3D Gaussian Splatting · super sampling · frame interpolation · real-time rendering · post-rendering processing · differentiable rendering

The pith

A post-rendering framework jointly super-samples and interpolates low-resolution 3D Gaussian splats to enable real-time high-resolution rendering of large scenes.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes a unified post-processing approach for 3D Gaussian Splatting that takes low-resolution renders as input and produces both higher-resolution images and additional interpolated frames. It introduces two modules that exploit the differentiable nature of the base renderer to guide refinement networks. This matters because direct high-resolution rendering in dense scenes is computationally expensive, limiting real-time use in applications such as virtual reality. By shifting the work to a lightweight post-process, the method aims to maintain visual quality while increasing efficiency and frame rates. The reported experiments show it outperforming existing methods in both speed and quality while remaining compatible with other optimizations.

Core claim

3DGS³ is a unified post-rendering framework that performs super sampling via a Gradient-Aware Super Sampling module and frame interpolation via a Lightweight Temporal Frame Interpolation module on low-resolution 3DGS outputs. The GASS module exploits the continuous differentiability of 3DGS to extract image gradients that guide a GRU-based refinement network for high-fidelity upsampling. The LTFI module uses a compact U-Net-like backbone to fuse temporal and spatial cues from consecutive frames into coherent intermediate frames. Together these achieve high-resolution, high-frame-rate rendering without modifying the splatting pipeline itself.
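The claimed data flow can be sketched end to end. Below, a finite-difference gradient plus an unsharp-mask boost stands in for the learned GASS network, and a linear blend stands in for LTFI's warp-and-fuse step; all function names, shapes, and constants are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def super_sample(lr_frame: np.ndarray, scale: int) -> np.ndarray:
    """Stand-in for GASS: upsample a low-resolution render and refine it
    with gradient guidance. The paper extracts gradients from the
    differentiable 3DGS rasterizer and refines with a GRU network; here a
    finite-difference gradient and an unsharp-mask boost stand in."""
    hr = np.repeat(np.repeat(lr_frame, scale, axis=0), scale, axis=1)
    gy, gx = np.gradient(hr)
    return hr + 0.1 * np.hypot(gx, gy)  # crude edge-aware "refinement"

def interpolate(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Stand-in for LTFI: the paper warps consecutive frames forward and
    backward and fuses them with a compact U-Net; a linear blend stands in."""
    return (1.0 - t) * frame_a + t * frame_b

# Two consecutive 32x32 low-res renders -> two 64x64 frames plus a midpoint
# frame, i.e. more output pixels per rasterized pixel.
lo_a, lo_b = np.random.rand(32, 32), np.random.rand(32, 32)
hi_a, hi_b = super_sample(lo_a, 2), super_sample(lo_b, 2)
mid = interpolate(hi_a, hi_b, 0.5)
print(hi_a.shape, mid.shape)  # (64, 64) (64, 64)
```

The point of the sketch is only the wiring: the rasterizer runs at low resolution, and everything downstream is cheap per-pixel post-processing.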

What carries the argument

Gradient-Aware Super Sampling (GASS) module that leverages 3DGS differentiability for gradient-guided refinement, paired with Lightweight Temporal Frame Interpolation (LTFI) module for temporal coherence.

If this is right

  • Achieves superior rendering efficiency compared to state-of-the-art methods.
  • Delivers better visual quality on public datasets.
  • Remains compatible with existing 3DGS acceleration techniques.
  • Enables real-time rendering for ultra-dense and high-resolution scenes.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The method could extend to other neural rendering techniques that support differentiation.
  • It might allow deployment on lower-power hardware by reducing the base render resolution.
  • Further optimization of the modules could lead to even higher frame rates without quality loss.

Load-bearing premise

The low-resolution 3DGS outputs contain sufficient differentiable information that the proposed modules can use to reconstruct high-fidelity frames without introducing noticeable artifacts in large-scale scenes.

What would settle it

Rendering a complex large-scale scene directly at high resolution and comparing it side-by-side with the output of the method at the same resolution to check for introduced artifacts or quality degradation.
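That comparison reduces to standard full-reference metrics. A minimal sketch using PSNR (SSIM and temporal-consistency metrics would be applied the same way; the helper below is ours, not the paper's):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB against a ground-truth render.
    Here `reference` would be the direct high-resolution 3DGS render and
    `test` the post-processed output at the same resolution."""
    mse = float(np.mean((reference - test) ** 2))
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((64, 64))
out = ref + 0.1                  # uniform 0.1 error on a [0, 1]-range image
print(round(psnr(ref, out), 2))  # → 20.0
```

A per-frame PSNR/SSIM curve over a camera path, rather than a single average, would also expose whether interpolated frames are systematically worse than super-sampled ones.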

Figures

Figures reproduced from arXiv: 2605.11489 by Fan Gao, Ligang Liu, Yibo Zhao, Youcheng Cai.

Figure 1
Figure 1. We propose a novel post-rendering framework to accelerate 3DGS rendering. Leveraging spatio-temporal information, it jointly performs super sampling and frame interpolation over rendered outputs to achieve efficient acceleration while preserving high rendering quality.
Figure 2
Figure 2. Overview of our approach. (1) The Gradient-Aware Super Sampling (GASS) module processes the low-resolution frames rendered by 3DGS by combining them with image gradients and the historical feature to generate high-resolution images. (2) The Lightweight Temporal Frame Interpolation (LTFI) module uses the high-resolution image together with the subsequent low-resolution frame to interpolate intermediate frames.
Figure 3
Figure 3. The detailed architecture of the (a) GASS module, which consists of a gradient-aware interpolation module and a geometry-temporal recurrent refinement network; and (b) LTFI module, which consists of a Forward–Backward Warping block, a Feature Fusion Network, and a Temporal Recurrent Unit.
Figure 4
Figure 4. Qualitative comparison of super sampling and frame interpolation results. Our method achieves superior detail preservation in super sampling and smoother temporal consistency in frame interpolation compared to existing approaches.
read the original abstract

3D Gaussian Splatting (3DGS) enables high-quality real-time 3D rendering but faces challenges in efficiently scaling to ultra-dense scenes and high-resolution due to computational bottlenecks that limit its use in latency-sensitive applications. Instead of optimizing the splatting pipeline itself, we propose \textbf{3DGS$^3$}, a unified post-rendering framework that jointly performs super sampling and frame interpolation through differentiable processing of low-resolution outputs to achieve both high-resolution and high-frame-rate rendering. Our \textbf{Gradient\- \-Aware Super Sampling (GASS)} module leverages the continuous differentiability of 3DGS to extract image gradients that guide a GRU-based refinement network to enable high-fidelity super sampling. Furthermore, a \textbf{Lightweight Temporal Frame Interpolation (LTFI)} module based on a compact U-Net-like backbone fuses temporal and differentiable spatial cues from consecutive frames to synthesize temporally coherent intermediate frames. Experiments on public datasets demonstrate that 3DGS$^3$ achieves superior rendering efficiency and visual quality when compared with state-of-the-art methods and remains compatible with existing 3DGS acceleration techniques. The code will be publicly released upon acceptance.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper proposes 3DGS³, a post-rendering framework for 3D Gaussian Splatting that jointly performs super-sampling and frame interpolation via differentiable processing of low-resolution outputs. It introduces a Gradient-Aware Super Sampling (GASS) module that extracts image gradients from 3DGS to guide a GRU-based refinement network, and a Lightweight Temporal Frame Interpolation (LTFI) module using a compact U-Net-like backbone to fuse temporal and spatial cues for coherent intermediate frames. The central claim is that this achieves superior rendering efficiency and visual quality over state-of-the-art methods on public datasets while remaining compatible with existing 3DGS acceleration techniques.

Significance. If the quantitative results and ablations support the claims, the work provides a practical post-processing solution to scale 3DGS to high-resolution and high-frame-rate rendering in large scenes without modifying the core splatting pipeline. This has clear significance for latency-sensitive applications in graphics, VR, and simulation. The explicit use of 3DGS differentiability for gradient guidance and the compatibility with accelerations are notable strengths that could enable broader adoption.

major comments (3)
  1. [Abstract] Abstract: The claim that '3DGS³ achieves superior rendering efficiency and visual quality' is presented without any quantitative metrics, error bars, ablation details, baseline descriptions, or dataset specifics, making it impossible to verify whether the data support the central claim of superiority.
  2. [Method (GASS)] GASS module (method description): The reliance on gradients extracted from low-resolution 3DGS rasterization to recover high-fidelity details assumes that sufficient continuous high-frequency information survives the averaging over overlapping Gaussians; this is load-bearing for the super-sampling claim but is not validated in ultra-dense or large-scale regimes where such averaging could erase recoverable cues.
  3. [Method (LTFI) and Experiments] LTFI module and experiments: The assertion that the compact U-Net fuses temporal and differentiable spatial cues to produce coherent frames without artifacts requires explicit testing and metrics in complex/large-scale scenes; public-dataset results as described do not necessarily address the risk of hallucination or temporal inconsistency under high Gaussian density.
minor comments (2)
  1. [Abstract] The abstract contains a clear LaTeX formatting artifact ('Gradient- -Aware') that should be corrected to 'Gradient-Aware'.
  2. [Experiments] The manuscript would benefit from explicit statements of the specific public datasets used, the exact baselines compared, and the hardware/setup for efficiency measurements to improve reproducibility.
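The second major comment can be made concrete with a toy 1-D model: alpha-blending overlapping Gaussians low-passes the image, so the spatial gradient a GASS-style module could exploit shrinks as the splats spanning an edge widen. The construction below is our illustration, not an experiment from the paper.

```python
import numpy as np

def blended_edge_gradient(sigma: float, d: float = 0.05) -> float:
    """Peak spatial gradient of a black/white edge represented by two
    Gaussians centred at -d and +d with width sigma. The normalised blend
    is sigmoid(2*d*x / sigma**2), so the peak gradient is d / (2*sigma**2):
    wider, more overlapping splats erase the high-frequency cue."""
    x = np.linspace(-0.3, 0.3, 6001)
    w0 = np.exp(-((x + d) ** 2) / (2 * sigma ** 2))  # black Gaussian
    w1 = np.exp(-((x - d) ** 2) / (2 * sigma ** 2))  # white Gaussian
    blend = w1 / (w0 + w1)                            # normalised blend
    return float(np.max(np.abs(np.gradient(blend, x))))

sharp = blended_edge_gradient(sigma=0.05)    # ≈ 10.0  (crisp edge survives)
diffuse = blended_edge_gradient(sigma=0.20)  # ≈ 0.625 (cue mostly gone)
print(sharp > 10 * diffuse)  # True
```

The quadratic falloff in sigma is exactly why the referee asks for validation in ultra-dense regimes: once splats covering an edge are a few pixels wide at the low render resolution, the recoverable gradient signal collapses.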

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive feedback. We address each major comment point-by-point below. Where the comments identify opportunities to strengthen validation or presentation, we have made corresponding revisions to the manuscript.

read point-by-point responses
  1. Referee: [Abstract] Abstract: The claim that '3DGS³ achieves superior rendering efficiency and visual quality' is presented without any quantitative metrics, error bars, ablation details, baseline descriptions, or dataset specifics, making it impossible to verify whether the data support the central claim of superiority.

    Authors: We agree that the abstract would benefit from explicit quantitative support. We have revised the abstract to include key results from our experiments on public datasets (referencing the specific benchmarks, baselines, and ablations), along with average improvements in rendering quality and efficiency. Full metrics, error bars, and dataset details remain in the experiments section. revision: yes

  2. Referee: [Method (GASS)] GASS module (method description): The reliance on gradients extracted from low-resolution 3DGS rasterization to recover high-fidelity details assumes that sufficient continuous high-frequency information survives the averaging over overlapping Gaussians; this is load-bearing for the super-sampling claim but is not validated in ultra-dense or large-scale regimes where such averaging could erase recoverable cues.

    Authors: This is a substantive concern about the limits of gradient-based guidance. Our original experiments already cover large-scale scenes with high Gaussian densities from public benchmarks and show consistent GASS gains. To directly validate cue survival, we have added a targeted analysis and ablation in the revised method and supplementary material examining performance across density regimes, including ultra-dense configurations. revision: yes

  3. Referee: [Method (LTFI) and Experiments] LTFI module and experiments: The assertion that the compact U-Net fuses temporal and differentiable spatial cues to produce coherent frames without artifacts requires explicit testing and metrics in complex/large-scale scenes; public-dataset results as described do not necessarily address the risk of hallucination or temporal inconsistency under high Gaussian density.

    Authors: We acknowledge the value of more explicit checks for temporal artifacts in high-density settings. The evaluated public datasets include complex large-scale scenes with dense Gaussians. In the revision we have added quantitative temporal-consistency metrics, artifact analysis, and a limitations discussion focused on hallucination risk under high density, with results confirming coherence in the tested regimes. revision: yes

Circularity Check

0 steps flagged

No circularity; post-processing pipeline is independent of its outputs

full rationale

The paper describes 3DGS³ as an external post-rendering framework that applies GASS (GRU-based gradient-guided super-sampling) and LTFI (U-Net temporal fusion) to low-resolution 3DGS rasterizations. These modules exploit the pre-existing differentiability of 3DGS rather than defining it; performance claims rest on direct comparisons against external baselines on public datasets. No equations redefine fitted parameters as predictions, no self-citation chains carry the central argument, and no ansatz or uniqueness result is smuggled in. The derivation chain therefore remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The abstract supplies no explicit free parameters, axioms, or invented entities; the method relies on standard neural-network training assumptions and the differentiability of 3DGS, which are treated as background.

pith-pipeline@v0.9.0 · 5518 in / 1248 out tokens · 40437 ms · 2026-05-13T02:26:37.311498+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

63 extracted references · 63 canonical work pages · 1 internal anchor

  1. [1]

    In: Annual Conference on Computer Graphics and Interactive Techniques

    Akeley, K.: Reality engine graphics. In: Annual Conference on Computer Graphics and Interactive Techniques. pp. 109–116 (1993)

  2. [2]

    AMD: AMD FidelityFX Super Resolution. https://www.amd.com/en/technologies/fidelityfx-super-resolution (2021)

  3. [3]

    AMD: AMD FidelityFX Super Resolution 2.1. https://github.com/GPUOpen-Effects/FidelityFX-FSR2 (2022)

  4. [4]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Bao, W., Lai, W.S., Ma, C., Zhang, X., Gao, Z., Yang, M.H.: Depth-aware video frame interpolation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 3703–3712 (2019)

  5. [5]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Barron, J.T., Mildenhall, B., Verbin, D., Srinivasan, P.P., Hedman, P.: Mip-nerf 360: Unbounded anti-aliased neural radiance fields. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5470–5479 (2022)

  6. [6]

    ACM Transactions on Graphics 40(6), 1–13 (2021)

    Briedis, K.M., Djelouah, A., Meyer, M., McGonigal, I., Gross, M., Schroers, C.: Neural frame interpolation for rendered content. ACM Transactions on Graphics 40(6), 1–13 (2021)

  7. [7]

    In: SIGGRAPH Conference Papers

    Briedis, K.M., Djelouah, A., Ortiz, R., Meyer, M., Gross, M., Schroers, C.: Kernel-based frame interpolation for spatio-temporally adaptive rendering. In: SIGGRAPH Conference Papers. pp. 1–11 (2023)

  8. [8]

    International Journal of Computer Vision 61(3), 211–231 (2005)

    Bruhn, A., Weickert, J., Schnörr, C.: Lucas/kanade meets horn/schunck: Combining local and global optic flow methods. International Journal of Computer Vision 61(3), 211–231 (2005)

  9. [9]

    Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation

    Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014)

  10. [10]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Ding, T., Liang, L., Zhu, Z., Zharkov, I.: Cdfi: Compression-driven network design for frame interpolation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 8001–8011 (2021)

  11. [11]

    Advances in Neural Information Processing Systems 37, 140138–140158 (2024)

    Fan, Z., Wang, K., Wen, K., Zhu, Z., Xu, D., Wang, Z., et al.: Lightgaussian: Unbounded 3d gaussian compression with 15x reduction and 200+ fps. Advances in Neural Information Processing Systems 37, 140138–140158 (2024)

  12. [12]

    In: European Conference on Computer Vision

    Fang, G., Wang, B.: Mini-splatting: Representing scenes with a constrained number of gaussians. In: European Conference on Computer Vision. pp. 165–181 (2024)

  13. [13]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Feng, G., Chen, S., Fu, R., Liao, Z., Wang, Y., Liu, T., Hu, B., Xu, L., Pei, Z., Li, H., et al.: Flashgs: Efficient 3d gaussian splatting for large-scale and high-resolution rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 26652–26662 (2025)

  14. [14]

    ACM on Computer Graphics and Interactive Techniques 8(1), 1–21 (2025)

    Franke, L., Fink, L., Stamminger, M.: Vr-splatting: Foveated radiance field rendering via 3d gaussian splatting and neural points. ACM on Computer Graphics and Interactive Techniques 8(1), 1–21 (2025)

  15. [15]

    In: European Conference on Computer Vision

    Girish, S., Gupta, K., Shrivastava, A.: Eagles: Efficient accelerated 3d gaussians with lightweight encodings. In: European Conference on Computer Vision. pp. 54–71 (2024)

  16. [16]

    https://github.com/Coloquinte/torchSR/blob/main/doc/NinaSR.md (2021)

    Gouvine, G.: NinaSR: Efficient small and large convnets for super-resolution. https://github.com/Coloquinte/torchSR/blob/main/doc/NinaSR.md (2021). https://doi.org/10.5281/zenodo.4868308

  17. [17]

    ACM Transactions on Graphics 40(6), 1–16 (2021)

    Guo, J., Fu, X., Lin, L., Ma, H., Guo, Y., Liu, S., Yan, L.Q.: Extranet: Real-time extrapolated rendering for low-latency temporal supersampling. ACM Transactions on Graphics 40(6), 1–16 (2021)

  18. [18]

    Advances in Neural Information Processing Systems 37, 63747–63770 (2024)

    Guo, Z., Li, W., Loy, C.C.: Generalizable implicit motion modeling for video frame interpolation. Advances in Neural Information Processing Systems 37, 63747–63770 (2024)

  19. [19]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Hamdi, A., Melas-Kyriazi, L., Mai, J., Qian, G., Liu, R., Vondrick, C., Ghanem, B., Vedaldi, A.: Ges: Generalized exponential splatting for efficient radiance field rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 19812–19822 (2024)

  20. [20]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Hanson, A., Tu, A., Lin, G., Singla, V., Zwicker, M., Goldstein, T.: Speedy-splat: Fast 3d gaussian splatting with sparse pixels and sparse primitives. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 21537–21546 (2025)

  21. [21]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Hanson, A., Tu, A., Singla, V., Jayawardhana, M., Zwicker, M., Goldstein, T.: Pup 3d-gs: Principled uncertainty pruning for 3d gaussian splatting. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5949–5958 (2025)

  22. [22]

    ACM Transactions on Graphics 37(6), 1–15 (2018)

    Hedman, P., Philip, J., Price, T., Frahm, J.M., Drettakis, G., Brostow, G.: Deep blending for free-viewpoint image-based rendering. ACM Transactions on Graphics 37(6), 1–15 (2018)

  23. [23]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Huang, Y.H., Lin, M.X., Sun, Y.T., Yang, Z., Lyu, X., Cao, Y.P., Qi, X.: Deformable radial kernel splatting. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 21513–21523 (2025)

  24. [24]

    Intel: Intel Arc Xe Super Sampling. https://www.intel.cn/content/www/cn/zh/developer/topic-technology/gamedev/xess2.html (2023)

  25. [25]

    In: SIGGRAPH Conference Papers

    Jiang, Y., Yu, C., Xie, T., Li, X., Feng, Y., Wang, H., Li, M., Lau, H., Gao, F., Yang, Y., et al.: Vr-gs: A physical dynamics-aware interactive gaussian splatting system in virtual reality. In: SIGGRAPH Conference Papers. pp. 1–1 (2024)

  26. [26]

    In: Computer Graphics Forum

    Jimenez, J., Echevarria, J.I., Sousa, T., Gutierrez, D.: Smaa: Enhanced subpixel morphological antialiasing. In: Computer Graphics Forum. vol. 31, pp. 355–364 (2012)

  27. [27]

    In: European Conference on Computer Vision

    Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: European Conference on Computer Vision. pp. 694–711 (2016)

  28. [28]

    In: Winter Conference on Applications of Computer Vision

    Kalluri, T., Pathak, D., Chandraker, M., Tran, D.: Flavr: Flow-agnostic video representations for fast frame interpolation. In: Winter Conference on Applications of Computer Vision. pp. 2071–2082 (2023)

  29. [29]

    ACM Transactions on Graphics 42(4), 139–1 (2023)

    Kerbl, B., Kopanas, G., Leimkühler, T., Drettakis, G.: 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics 42(4), 139–1 (2023)

  30. [30]

    Kincaid, D.R., Cheney, E.W.: Numerical Analysis: Mathematics of Scientific Computing, vol. 2. American Mathematical Society (2009)

  31. [31]

    ACM Transactions on Graphics 36(4) (2017)

    Knapitsch, A., Park, J., Zhou, Q.Y., Koltun, V.: Tanks and temples: Benchmarking large-scale scene reconstruction. ACM Transactions on Graphics 36(4) (2017)

  32. [32]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Kong, L., Jiang, B., Luo, D., Chu, W., Huang, X., Tai, Y., Wang, C., Yang, J.: Ifrnet: Intermediate feature refine network for efficient frame interpolation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1969–1978 (2022)

  33. [33]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Lee, H., Kim, T., Chung, T.y., Pak, D., Ban, Y., Lee, S.: Adacof: Adaptive collaboration of flows for video frame interpolation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5316–5325 (2020)

  34. [34]

    In: ACM International Conference on Architectural Support for Programming Languages and Operating Systems

    Lee, J., Lee, S., Lee, J., Park, J., Sim, J.: Gscore: Efficient radiance field rendering via architectural support for 3d gaussian splatting. In: ACM International Conference on Architectural Support for Programming Languages and Operating Systems. vol. 3, pp. 497–511 (2024)

  35. [35]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Li, J., Chen, Z., Wu, X., Wang, L., Wang, B., Zhang, L.: Neural super-resolution for real-time rendering with radiance demodulation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 4357–4367 (2024)

  36. [36]

    In: Annual International Symposium on Computer Architecture

    Li, S., Li, C., Zhu, W., Yu, B., Zhao, Y., Wan, C., You, H., Shi, H., Lin, Y.: Instant-3d: Instant neural radiance field training towards on-device ar/vr 3d reconstruction. In: Annual International Symposium on Computer Architecture. pp. 1–13 (2023)

  37. [37]

    In: Graphics Interface

    Li, Z., Marshall, C.S., Vembar, D.S., Liu, F.: Future frame synthesis for fast monte carlo rendering. In: Graphics Interface. pp. 74–83 (2022)

  38. [38]

    In: GPU Technology Conference

    Liu, E.: Dlss 2.0: Image reconstruction for real-time rendering with deep learning. In: GPU Technology Conference. vol. 1 (2020)

  39. [39]

    In: IEEE/CVF International Conference on Computer Vision

    Liu, Z., Yeh, R.A., Tang, X., Liu, Y., Agarwala, A.: Video frame synthesis using deep voxel flow. In: IEEE/CVF International Conference on Computer Vision. pp. 4463–4471 (2017)

  40. [40]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Lu, T., Yu, M., Xu, L., Xiangli, Y., Wang, L., Lin, D., Dai, B.: Scaffold-gs: Structured 3d gaussians for view-adaptive rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 20654–20664 (2024)

  41. [41]

    In: Symposium on Interactive 3D Graphics

    Mark, W.R., McMillan, L., Bishop, G.: Post-rendering 3d warping. In: Symposium on Interactive 3D Graphics. pp. 7–ff (1997)

  42. [42]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Meyer, S., Wang, O., Zimmer, H., Grosse, M., Sorkine-Hornung, A.: Phase-based frame interpolation for video. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1410–1418 (2015)

  43. [43]

    In: IEEE/CVF International Conference on Computer Vision

    Niedermayr, S., Neuhauser, C., Westermann, R.: Lightweight gradient-aware upscaling of 3d gaussian splatting images. In: IEEE/CVF International Conference on Computer Vision. pp. 25862–25871 (2025)

  44. [44]

    In: International Conference on 3D Vision

    Niemeyer, M., Manhardt, F., Rakotosaona, M.J., Oechsle, M., Duckworth, D., Gosula, R., Tateno, K., Bates, J., Kaeser, D., Tombari, F.: Radsplat: Radiance field-informed gaussian splatting for robust real-time rendering with 900+ fps. In: International Conference on 3D Vision. pp. 134–144 (2025)

  45. [45]

    In: IEEE/CVF International Conference on Computer Vision

    Niklaus, S., Mai, L., Liu, F.: Video frame interpolation via adaptive separable convolution. In: IEEE/CVF International Conference on Computer Vision. pp. 261–270 (2017)

  46. [46]

    Advances in Neural Information Processing Systems37, 112284–112309 (2024)

    Qu, H., Li, Z., Rahmani, H., Cai, Y., Liu, J.: Disc-gs: Discontinuity-aware gaussian splatting. Advances in Neural Information Processing Systems37, 112284–112309 (2024)

  47. [47]

    In: Conference on High Performance Graphics

    Reshetov, A.: Morphological antialiasing. In: Conference on High Performance Graphics. pp. 109–116 (2009)

  48. [48]

    In: Computer Graphics Forum

    Scherzer, D., Yang, L., Mattausch, O., Nehab, D., Sander, P.V., Wimmer, M., Eisemann, E.: Temporal coherence methods in real-time rendering. In: Computer Graphics Forum. vol. 31, pp. 2378–2408 (2012)

  49. [49]

    arXiv preprint arXiv:2501.05757 (2025)

    Shin, S., Park, J., Cho, S.: Locality-aware gaussian compression for fast and high-quality rendering. arXiv preprint arXiv:2501.05757 (2025)

  50. [50]

    In: SIGGRAPH Asia Conference Papers

    Wang, X., Yi, R., Ma, L.: Adr-gaussian: Accelerating gaussian splatting with adaptive radius. In: SIGGRAPH Asia Conference Papers. pp. 1–10 (2024)

  51. [51]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Wu, M., Dai, H., Yao, K., Tuytelaars, T., Yu, J.: Bg-triangle: Bézier gaussian triangle for 3d vectorization and rendering. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 16197–16207 (2025)

  52. [52]

    Patterns (2024)

    Wu, R., Liu, Y., Ning, G., Liang, P., Chang, Q.: Ultralight vm-unet: Parallel vision mamba significantly reduces parameters for skin lesion segmentation. Patterns (2024)

  53. [53]

    In: SIGGRAPH Asia Conference Papers

    Wu, S., Kim, S., Zeng, Z., Vembar, D., Jha, S., Kaplanyan, A., Yan, L.Q.: Extrass: A framework for joint spatial super sampling and frame extrapolation. In: SIGGRAPH Asia Conference Papers. pp. 1–11 (2023)

  54. [54]

    In: SIGGRAPH Asia Conference Papers

    Wu, Z., Zuo, C., Huo, Y., Yuan, Y., Peng, Y., Pu, G., Wang, R., Bao, H.: Adaptive recurrent frame prediction with learnable motion vectors. In: SIGGRAPH Asia Conference Papers. pp. 1–11 (2023)

  55. [55]

    ACM Transactions on Graphics 39(4), 142–1 (2020)

    Xiao, L., Nouri, S., Chapman, M., Fix, A., Lanman, D., Kaplanyan, A.: Neural supersampling for real-time rendering. ACM Transactions on Graphics 39(4), 142–1 (2020)

  56. [56]

    ACM Transactions on Graphics 28(5), 1–12 (2009)

    Yang, L., Nehab, D., Sander, P.V., Sitthi-Amorn, P., Lawrence, J., Hoppe, H.: Amortized supersampling. ACM Transactions on Graphics 28(5), 1–12 (2009)

  57. [57]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops

    Zamfir, E., Conde, M.V., Timofte, R.: Towards real-time 4k image super-resolution. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. pp. 1522–1532 (2023)

  58. [58]

    arXiv preprint arXiv:2406.01467 (2024)

    Zhang, B., Fang, C., Shrestha, R., Liang, Y., Long, X., Tan, P.: Rade-gs: Rasterizing depth in gaussian splatting. arXiv preprint arXiv:2406.01467 (2024)

  59. [59]

    In: ACM International Conference on Multimedia

    Zhang, X., Zeng, H., Zhang, L.: Edge-oriented convolution block for real-time super resolution on mobile devices. In: ACM International Conference on Multimedia. pp. 4034–4043 (2021)

  60. [60]

    In: IEEE/CVF Conference on Computer Vision and Pattern Recognition

    Zheng, M., Sun, L., Dong, J., Pan, J.: Efficient video super-resolution for real-time rendering with decoupled g-buffer guidance. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 11328–11337 (2025)

  61. [61]

    In: European Conference on Computer Vision

    Zhong, Z., Krishnan, G., Sun, X., Qiao, Y., Ma, S., Wang, J.: Clearer frames, anytime: Resolving velocity ambiguity in video frame interpolation. In: European Conference on Computer Vision. pp. 346–363 (2024)

  62. [62]

    In: SIGGRAPH Asia Conference Papers

    Zhong, Z., Zhu, J., Dai, Y., Zheng, C., Chen, G., Huo, Y., Bao, H., Wang, R.: Fusesr: Super resolution for real-time rendering through efficient multi-resolution fusion. In: SIGGRAPH Asia Conference Papers. pp. 1–10 (2023)

  63. [63]

    Zwicker, M., Pfister, H., Van Baar, J., Gross, M.: Ewa splatting. IEEE Transactions on Visualization and Computer Graphics 8(3), 223–238 (2002)