pith. machine review for the scientific record.

arxiv: 2604.10393 · v1 · submitted 2026-04-12 · 💻 cs.GR · physics.optics

Recognition: unknown

CV-HoloSR: Hologram to hologram super-resolution through volume-upsampling three-dimensional scenes

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 16:38 UTC · model grok-4.3

classification 💻 cs.GR physics.optics
keywords hologram super-resolution · volumetric upsampling · complex-valued networks · depth preservation · holographic displays · LoRA adaptation · 3D scene reconstruction · interference patterns

The pith

CV-HoloSR performs hologram super-resolution for volumetric upsampling while preserving linear depth scaling in 3D scenes.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that standard hologram super-resolution techniques, tuned for angle-of-view expansion, produce quadratic depth distortion when applied to spatial volume upsampling. This distortion ruins focal accuracy in reconstructed 3D scenes. CV-HoloSR counters it with a complex-valued residual dense network and a depth-aware perceptual loss that keep depth scaling physically linear. The result matters for any holographic display that must render sharp focus at multiple depths without artifacts. The method further shows that a tailored complex LoRA adaptation can adapt the model to new depth ranges using only 200 samples.

Core claim

CV-HoloSR is a complex-valued hologram super-resolution framework built on a Complex-Valued Residual Dense Network and optimized with a depth-aware perceptual reconstruction loss; it preserves physically consistent linear depth scaling during volume up-sampling, recovers sharp high-frequency interference patterns, and adapts to unseen depth ranges and display configurations through complex-valued Low-Rank Adaptation.

What carries the argument

Complex-Valued Residual Dense Network (CV-RDN) with depth-aware perceptual loss, which processes complex-valued hologram data to suppress over-smoothing and quadratic depth distortion.
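The paper's code is not part of this review, but the layer-level idea is standard enough to sketch: a complex convolution can be assembled from two real convolutions applied to the real and imaginary parts, so phase information flows through the network instead of being split off. A minimal sketch in PyTorch (the class name, channel counts, and kernel size are illustrative, not the authors' implementation):

    import torch
    import torch.nn as nn

    class ComplexConv2d(nn.Module):
        """(a + ib)(w_r + i w_i) = (a w_r - b w_i) + i (a w_i + b w_r)."""
        def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
            super().__init__()
            self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
            self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

        def forward(self, z):  # z: complex tensor of shape (B, C, H, W)
            a, b = z.real, z.imag
            real = self.conv_r(a) - self.conv_i(b)
            imag = self.conv_i(a) + self.conv_r(b)
            return torch.complex(real, imag)

    # A toy hologram field: unit amplitude, random phase, complex64.
    field = torch.exp(1j * 2 * torch.pi * torch.rand(1, 1, 64, 64)).to(torch.complex64)
    out = ComplexConv2d(1, 16)(field)  # complex output, shape (1, 16, 64, 64)

Stacking such layers into residual dense blocks would give the CV-RDN shape named above.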

If this is right

  • Delivers 32 percent better perceptual realism (LPIPS 0.2001) than prior baselines; a sketch of how that metric is computed follows this list.
  • Adapts a pre-trained backbone to new depth ranges and display setups with only 200 samples.
  • Cuts training time by more than 75 percent, from 22.5 hours to 5.2 hours.
  • Supports datasets covering large depth ranges at resolutions up to 4K.
  • Recovers high-frequency interference patterns without over-smoothing.
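Since LPIPS carries the headline number, it is worth being concrete about what it measures: a learned distance between deep features of two images, not a geometric error. A minimal sketch using the reference lpips package (the backbone choice and the placeholder tensors are assumptions; the review does not say which variant the paper used):

    import torch
    import lpips  # pip install lpips (Zhang et al., reference implementation)

    loss_fn = lpips.LPIPS(net='alex')  # 'alex' is an assumed backbone choice

    # Placeholder reconstructions scaled to [-1, 1], shape (N, 3, H, W).
    recon_sr = torch.rand(1, 3, 256, 256) * 2 - 1
    recon_gt = torch.rand(1, 3, 256, 256) * 2 - 1

    d = loss_fn(recon_sr, recon_gt)  # lower is better; the paper reports 0.2001
    print(float(d))

A 32 percent improvement on this scale says the reconstructions look closer to ground truth under a learned perceptual model; it says nothing by itself about focal-plane geometry, which is the referee's first objection below.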

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same complex-valued backbone could be tested on other wave-based imaging tasks such as radar or acoustic holography.
  • If inference speed is further optimized, the method might support real-time upsampling for live holographic video.
  • Scaling the approach to even larger target volumes would test whether the linear-depth property holds without additional regularization.
  • The large-depth-range dataset introduced here could serve as a shared benchmark for future holographic upsampling work.

Load-bearing premise

Complex-valued operations together with the depth-aware loss are enough to remove quadratic depth distortion and produce physically consistent linear depth scaling.
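The "quadratic" label has a concrete diffraction origin worth spelling out (a standard Fresnel argument, not an equation quoted from the paper). The Fresnel kernel at distance z is

    h_z(x, y) \propto \exp\!\left[ i\,\frac{\pi\,(x^2 + y^2)}{\lambda z} \right],

so if spatial up-sampling stretches the fringe pattern by a factor s, i.e. (x, y) -> (sx, sy), the quadratic phase is preserved only when z -> s^2 z:

    \frac{(s x)^2 + (s y)^2}{\lambda\,(s^2 z)} = \frac{x^2 + y^2}{\lambda z}.

A plane meant to sit at depth s z therefore refocuses at s^2 z unless the network compensates, which is exactly the distortion this premise says the complex-valued operations and depth-aware loss remove.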

What would settle it

Real optical reconstructions in which the measured focal planes deviate from the expected linear depth positions after volume upsampling.
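One hedged sketch of what that measurement could look like once focal planes are located (the numbers are hypothetical; numpy.polyfit separates a residual quadratic term from the linear slope, the same statistics the referee report below asks for):

    import numpy as np

    # Hypothetical target depths after volume up-sampling (mm) and the
    # focal planes actually measured in an optical reconstruction.
    z_target = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
    z_measured = np.array([0.1, 2.1, 4.3, 6.2, 8.4, 10.3])

    # Fit z_measured = c2*z^2 + c1*z + c0; linearity means c2 ~ 0, c1 ~ 1.
    c2, c1, c0 = np.polyfit(z_target, z_measured, deg=2)

    # R^2 of the purely linear fit.
    slope, intercept = np.polyfit(z_target, z_measured, deg=1)
    pred = slope * z_target + intercept
    r2 = 1 - np.sum((z_measured - pred) ** 2) / np.sum((z_measured - z_measured.mean()) ** 2)

    print(f"c2={c2:.4f}  slope={slope:.3f}  R^2={r2:.4f}")

A nonzero c2 surviving such a fit would be the deviation this section describes.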

Figures

Figures reproduced from arXiv: 2604.10393 by Daejun Choi, Dae Youl Park, Duksu Kim, Jaehong Lee, Youchan No.

Figure 1. The overview of hologram SR dataset with a pair of LR and HR.
Figure 2. The overview of proposed network for hologram super-resolution.
Figure 3. Cropping-induced ringing artifacts in ASM reconstruction and the effect of the …
Figure 4. Analysis of depth bias in the pretrained encoder. (Left) Numerical reconstruc…
Figure 5. Optical system configuration for physical holographic reconstruction. The setup …
Figure 6. Qualitative comparison of reconstructed planes. The LR reference is divided …
Figure 7. Visual quality comparison of focused and out-of-focus regions. The green and …
Figure 8. Continuous volumetric reconstruction sweep from the hologram plane (0.0 mm) …
Figure 9. Qualitative and quantitative comparison of different loss function configurations …
Figure 10. Optical and numerical reconstruction results across the HologramSR, Big Buck …
Figure 11. Comparisons to evaluate depth-range adaptation under LoRA-based fine…
original abstract

Existing hologram super-resolution (HSR) methods primarily focus on angle-of-view expansion. Adapting them for volumetric spatial up-sampling introduces severe quadratic depth distortion, degrading 3D focal accuracy. We propose CV-HoloSR, a complex-valued HSR framework specifically designed to preserve physically consistent linear depth scaling during volume up-sampling. Built upon a Complex-Valued Residual Dense Network (CV-RDN) and optimized with a novel depth-aware perceptual reconstruction loss, our model effectively suppresses over-smoothing to recover sharp, high-frequency interference patterns. To support this, we introduce a comprehensive large-depth-range dataset with resolutions up to 4K. Furthermore, to overcome the inherent depth bias of pre-trained encoders when scaling to massive target volumes, we integrate a parameter-efficient fine-tuning strategy utilizing complex-valued Low-Rank Adaptation (LoRA). Extensive numerical and physical optical experiments demonstrate our method's superiority. CV-HoloSR achieves a 32% improvement in perceptual realism (LPIPS of 0.2001) over state-of-the-art baselines. Additionally, our tailored LoRA strategy requires merely 200 samples, reducing training time by over 75% (from 22.5 to 5.2 hours) while successfully adapting the pre-trained backbone to unseen depth ranges and novel display configurations.

Editorial analysis

A structured set of objections, weighed in public.

Referee report, simulated author's rebuttal, circularity audit, and an axiom and free-parameter ledger. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces CV-HoloSR, a complex-valued framework for hologram super-resolution focused on volumetric spatial up-sampling of 3D scenes. It proposes a Complex-Valued Residual Dense Network (CV-RDN) trained with a depth-aware perceptual reconstruction loss to suppress quadratic depth distortion and over-smoothing, a new large-depth-range dataset up to 4K resolution, and complex-valued LoRA for parameter-efficient adaptation to unseen depths and display configurations. The central claims are a 32% LPIPS improvement (to 0.2001) over state-of-the-art baselines plus over 75% training-time reduction (to 5.2 hours with 200 samples), validated via numerical and physical optical experiments.

Significance. If the physical-consistency claims hold, the work could advance holographic 3D displays by enabling accurate high-resolution volumetric reconstructions without depth warping. The parameter-efficient LoRA adaptation and new dataset are practical strengths that could support reproducible follow-on research in computer graphics and optics.

major comments (2)
  1. [Abstract and experimental results] Abstract and experimental results: the central claim that CV-RDN plus the depth-aware loss 'preserves physically consistent linear depth scaling' and 'suppresses quadratic depth distortion' in real optical experiments lacks any reported quantitative metric for depth fidelity (e.g., measured-vs-target depth slope, R² of linearity, focal-plane error, or residual quadratic term). Only LPIPS is provided, which addresses perceptual quality rather than geometric accuracy and therefore does not directly substantiate the load-bearing physical-consistency assertion.
  2. [Experimental evaluation] Experimental evaluation: insufficient detail is given on baseline implementations, dataset construction (size, depth-range sampling, hologram generation method), error bars, and ablation studies isolating the contribution of complex-valued operations versus the depth-aware loss. These omissions make it impossible to verify the reported 32% LPIPS gain or the LoRA efficiency claims under controlled conditions.
minor comments (2)
  1. [Method] Notation for complex-valued operations and the precise formulation of the depth-aware loss should be clarified with explicit equations to aid reproducibility.
  2. [Figures] Figure captions and axis labels in the optical reconstruction results could be expanded to indicate the exact depth ranges and display parameters tested.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. We address each major comment below and indicate the revisions planned for the manuscript.

point-by-point responses
  1. Referee: [Abstract and experimental results] Abstract and experimental results: the central claim that CV-RDN plus the depth-aware loss 'preserves physically consistent linear depth scaling' and 'suppresses quadratic depth distortion' in real optical experiments lacks any reported quantitative metric for depth fidelity (e.g., measured-vs-target depth slope, R² of linearity, focal-plane error, or residual quadratic term). Only LPIPS is provided, which addresses perceptual quality rather than geometric accuracy and therefore does not directly substantiate the load-bearing physical-consistency assertion.

    Authors: We acknowledge that the manuscript relies on visual inspection of focused reconstructions in the physical experiments to support claims of linear depth scaling and suppression of quadratic distortion, without providing explicit quantitative depth-fidelity metrics such as slope, R², or focal-plane error. LPIPS was selected to quantify perceptual improvements in hologram quality, but we agree it does not directly measure geometric accuracy. In the revised version we will add quantitative depth analysis from the optical setup, including measured-versus-target depth slopes and linearity statistics computed across multiple focal planes. revision: yes

  2. Referee: [Experimental evaluation] Experimental evaluation: insufficient detail is given on baseline implementations, dataset construction (size, depth-range sampling, hologram generation method), error bars, and ablation studies isolating the contribution of complex-valued operations versus the depth-aware loss. These omissions make it impossible to verify the reported 32% LPIPS gain or the LoRA efficiency claims under controlled conditions.

    Authors: We agree that additional implementation and evaluation details are required for reproducibility and verification of the reported gains. The revised manuscript will expand the experimental section to specify: (i) exact adaptations made to baseline HSR methods for volumetric up-sampling, (ii) dataset size, depth-range sampling procedure, and hologram generation parameters (angular spectrum method with given wavelength and pixel pitch), (iii) error bars computed over multiple independent runs, and (iv) ablation tables that isolate complex-valued operations from the depth-aware loss. These additions will allow direct verification of the LPIPS improvement and LoRA training-time reduction. revision: yes
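Point (ii) names the angular spectrum method as the hologram generation and reconstruction engine. For readers unfamiliar with it, a minimal NumPy sketch of ASM propagation (the wavelength and pixel pitch are placeholders, not the paper's configuration; the band-limited variant of Matsushima and Shimobaba, [62] in the reference list below, is reduced here to simply zeroing evanescent components):

    import numpy as np

    def asm_propagate(field, z, wavelength=532e-9, pitch=8e-6):
        """Propagate a complex field by distance z (meters) via the angular spectrum."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pitch)
        fy = np.fft.fftfreq(ny, d=pitch)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2      # squared longitudinal frequency
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
        H = np.exp(1j * kz * z) * (arg > 0)            # drop evanescent waves
        return np.fft.ifft2(np.fft.fft2(field) * H)

    # Numerically refocus a placeholder hologram at a 5 mm plane.
    holo = np.exp(1j * 2 * np.pi * np.random.rand(1024, 1024))
    intensity = np.abs(asm_propagate(holo, z=5e-3)) ** 2

The cropping-induced ringing artifacts of Figure 3 arise in exactly this reconstruction step, which is why the rebuttal commits to reporting the ASM parameters used.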

Circularity Check

0 steps flagged

No circularity: derivation relies on new architecture, loss, and data-driven evaluation

full rationale

The paper proposes CV-RDN with a depth-aware perceptual loss and LoRA adaptation, trained on a new large-depth-range dataset, then reports LPIPS gains and training-time reductions from numerical and optical experiments. No load-bearing step reduces a claimed result to a fitted parameter, self-citation chain, or input by construction; the central claims rest on empirical metrics rather than algebraic equivalence to the method's own definitions.

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 3 invented entities

The central claim rests on standard deep-learning training plus several new components whose effectiveness is shown empirically rather than derived from first principles.

free parameters (2)
  • CV-RDN network weights
    Learned from the introduced large-depth-range dataset
  • Complex LoRA adaptation parameters
    Chosen for efficient fine-tuning on 200 samples
axioms (2)
  • domain assumption Complex-valued representations preserve the phase and interference patterns required for physically accurate holograms
    Invoked in the design of CV-RDN to avoid depth distortion
  • domain assumption The depth-aware perceptual reconstruction loss correctly penalizes deviations from linear depth scaling
    Central to the optimization strategy described
invented entities (3)
  • CV-RDN (Complex-Valued Residual Dense Network) no independent evidence
    purpose: To process complex hologram data for super-resolution while preserving phase
    Newly proposed architecture
  • Depth-aware perceptual reconstruction loss no independent evidence
    purpose: To suppress over-smoothing and maintain 3D focal accuracy
    Novel loss function introduced in the paper
  • Complex-valued LoRA no independent evidence
    purpose: Parameter-efficient adaptation to new depth ranges and display configurations
    Adapted version of LoRA for complex values
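The third entry is the most mechanical of the three, and the general shape of a complex-valued LoRA update can be sketched even without the paper's code: freeze a complex weight and learn a low-rank complex correction. A hedged illustration (the class name, rank, and initialization are assumptions; PyTorch's complex autograd carries the arithmetic):

    import torch
    import torch.nn as nn

    class ComplexLoRALinear(nn.Module):
        """Frozen complex weight W plus a trainable low-rank update B @ A."""
        def __init__(self, dim_in, dim_out, rank=4, alpha=1.0):
            super().__init__()
            self.W = nn.Parameter(torch.randn(dim_out, dim_in, dtype=torch.complex64),
                                  requires_grad=False)   # pretrained backbone weight, frozen
            self.A = nn.Parameter(torch.randn(rank, dim_in, dtype=torch.complex64) * 0.01)
            self.B = nn.Parameter(torch.zeros(dim_out, rank, dtype=torch.complex64))
            self.scale = alpha / rank

        def forward(self, x):                            # x: (..., dim_in), complex
            delta = self.B @ self.A                      # (dim_out, dim_in), zero at init
            return x @ (self.W + self.scale * delta).mT  # adapted forward pass

    y = ComplexLoRALinear(32, 64)(torch.randn(8, 32, dtype=torch.complex64))

Only A and B would be updated during the 200-sample adaptation; the rank, not the backbone size, sets the trainable parameter count, which is where the reported 75 percent training-time cut would come from.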



Reference graph

Works this paper leans on

70 extracted references · 4 canonical work pages · 2 internal anchors

  1. [1] C. Slinger, C. Cameron, M. Stanley, Computer-generated holography as a generic display technology, Computer 38 (8) (2005) 46–53
  2. [3] H. G. Kim, Y. Man Ro, Ultrafast layer based computer-generated hologram calculation with sparse template holographic fringe pattern for 3D object, Optics Express 25 (24) (2017) 30418–30427
  3. [4] P. Su, W. Cao, J. Ma, B. Cheng, X. Liang, L. Cao, G. Jin, Fast computer-generated hologram generation method for three-dimensional point cloud model, Journal of Display Technology 12 (12) (2016) 1688–1694
  4. [5] A. Maimone, A. Georgiou, J. S. Kollin, Holographic near-eye displays for virtual and augmented reality, ACM Transactions on Graphics (TOG) 36 (4) (2017) 1–16
  5. [6] A. Symeonidou, D. Blinder, P. Schelkens, Colour computer-generated holography for point clouds utilizing the Phong illumination model, Optics Express 26 (8) (2018) 10282–10298
  6. [7] M. Askari, S.-B. Kim, K.-S. Shin, S.-B. Ko, S.-H. Kim, D.-Y. Park, Y.-G. Ju, J.-H. Park, Occlusion handling using angular spectrum convolution in fully analytical mesh based computer generated hologram, Optics Express 25 (21) (2017) 25867–25878
  7. [8] S.-B. Ko, J.-H. Park, Speckle reduction using angular spectrum interleaving for triangular mesh based computer generated hologram, Optics Express 25 (24) (2017) 29788–29797
  8. [9] H.-J. Yeom, S. Cheon, K. Choi, J. Park, Efficient mesh-based realistic computer-generated hologram synthesis with polygon resolution adjustment, ETRI Journal 44 (1) (2022) 85–93
  9. [10] J.-H. Park, M. Askari, Non-hogel-based computer generated hologram from light field using complex field recovery technique from Wigner distribution function, Optics Express 27 (3) (2019) 2562–2574
  10. [11] D.-Y. Park, J.-H. Park, Hologram conversion for speckle free reconstruction using light field extraction and deep learning, Optics Express 28 (4) (2020) 5393–5409
  11. [12] Y. Zhao, L. Cao, H. Zhang, D. Kong, G. Jin, Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method, Optics Express 23 (20) (2015) 25440–25449
  12. [13] Z. Wang, G. Lv, Q. Feng, A. Wang, H. Ming, Simple and fast calculation algorithm for computer-generated hologram based on integral imaging using look-up table, Optics Express 26 (10) (2018) 13322–13330
  13. [14] J. Jia, Y. Wang, J. Liu, X. Li, Y. Pan, Z. Sun, B. Zhang, Q. Zhao, W. Jiang, Reducing the memory usage for effective computer-generated hologram calculation using compressed look-up table in full-color holographic display, Applied Optics 52 (7) (2013) 1404–1412
  14. [15] T. Shimobaba, T. Takahashi, Y. Yamamoto, T. Nishitsuji, A. Shiraki, N. Hoshikawa, T. Kakue, T. Ito, Efficient diffraction calculations using implicit convolution, OSA Continuum 1 (2) (2018) 642–650
  15. [16] T. Shimobaba, T. Ito, Computer Holography: Acceleration Algorithms and Hardware Implementations, CRC Press, 2019
  16. [17] D. Blinder, T. Shimobaba, Efficient algorithms for the accurate propagation of extreme-resolution holograms, Optics Express 27 (21) (2019) 29905–29915
  17. [18] J. Lee, H. Kang, H.-J. Yeom, S. Cheon, J. Park, D. Kim, Out-of-core GPU 2D-shift-FFT algorithm for ultra-high-resolution hologram generation, Optics Express 29 (12) (2021) 19094–19112
  18. [19] J. Lee, D. Kim, Out-of-core diffraction algorithm using multiple SSDs for ultra-high-resolution hologram generation, Optics Express 31 (18) (2023) 28683–28700
  19. [20] J. Lee, D. Kim, Combo: compressed block-wise out-of-core diffraction computation for tera-scale holography, Optics Express 32 (27) (2024) 47993–48008
  20. [21] Y. Peng, S. Choi, N. Padmanaban, G. Wetzstein, Neural holography with camera-in-the-loop training, ACM Transactions on Graphics (TOG) 39 (6) (2020) 1–14
  21. [22] Y. Ishii, F. Wang, H. Shiomi, T. Kakue, T. Ito, T. Shimobaba, Multi-depth hologram generation from two-dimensional images by deep learning, Optics and Lasers in Engineering 170 (2023) 107758
  22. [23] L. Shi, B. Li, C. Kim, P. Kellnhofer, W. Matusik, Towards real-time photorealistic 3D holography with deep neural networks, Nature 591 (7849) (2021) 234–239
  23. [24] T. Yang, Z. Lu, Holo-U2Net for high-fidelity 3D hologram generation, Sensors 24 (17) (2024) 5505
  24. [25] Q. Fang, H. Zheng, X. Xia, J. Peng, T. Zhang, X. Lin, Y. Yu, Diffraction model-driven neural network with semi-supervised training strategy for real-world 3D holographic photography, Optics Express 32 (26) (2024) 45406–45420
  25. [26] Y. Endo, M. Oikawa, T. D. Wilkinson, T. Shimobaba, T. Ito, Quantized neural network for complex hologram generation, Applied Optics 64 (5) (2024) A12–A18
  26. [27] Y. Zhang, D. Cheng, Y. Wang, Y. Wang, Y. Shan, T. Yang, Y. Wang, Real-time multi-depth holographic display using complex-valued neural network, Optics Express 33 (4) (2025) 7380–7395
  27. [28] M. Jee, H. Kim, M. Yoon, C. Kim, Hologram super-resolution using dual-generator GAN, in: 2022 IEEE International Conference on Image Processing (ICIP), IEEE, 2022, pp. 2596–2600
  28. [29] S. Lee, S.-W. Nam, J. Lee, Y. Jeong, B. Lee, HoloSR: deep learning-based super-resolution for real-time high-resolution computer-generated holograms, Optics Express 32 (7) (2024) 11107–11122
  29. [30] Y. No, J. Lee, H. Yeom, S. Kwon, D. Kim, H2HSR: Hologram-to-hologram super-resolution with deep neural network, IEEE Access 12 (2024) 90900–90914
  30. [31] D.-Y. Park, J.-H. Park, Generation of distortion-free scaled holograms using light field data conversion, Optics Express 29 (1) (2020) 487–508
  31. [32] H. Chen, C. Cao, P. He, Y. Xiong, T. Qi, D. Li, Z. Gaopeng, C. Fan, Z. Zhao, Noise-resistant and aberration-free synthetic aperture digital holographic microscopy for chip topography reconstruction, Optics Express 33 (19) (2025) 40392–40406
  32. [33] V. Abbasian, T. Pahl, L. Hüser, S. Lecler, P. Montgomery, P. Lehmann, A. Darafsheh, Microsphere-assisted quantitative phase microscopy: a review, Light: Advanced Manufacturing 5 (1) (2024) 133–152
  33. [34] M. G. Gustafsson, Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy, Journal of Microscopy 198 (2) (2000) 82–87
  34. [35] H. Lee, J. Kim, J. Kim, P. Jeon, S. A. Lee, D. Kim, Noniterative subpixel shifting super-resolution lensless digital holography, Optics Express 29 (19) (2021) 29996–30006
  35. [36] M. J. Rust, M. Bates, X. Zhuang, Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM), Nature Methods 3 (10) (2006) 793–796
  36. [37] E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, H. F. Hess, Imaging intracellular fluorescent proteins at nanometer resolution, Science 313 (5793) (2006) 1642–1645
  37. [38] K. Wang, L. Song, C. Wang, Z. Ren, G. Zhao, J. Dou, J. Di, G. Barbastathis, R. Zhou, J. Zhao, et al., On the use of deep learning for phase recovery, Light: Science & Applications 13 (1) (2024) 4
  38. [39] H. Wang, Y. Rivenson, Y. Jin, Z. Wei, R. Gao, H. Günaydın, L. A. Bentolila, C. Kural, A. Ozcan, Deep learning enables cross-modality super-resolution in fluorescence microscopy, Nature Methods 16 (1) (2019) 103–110
  39. [40] E. Nehme, L. E. Weiss, T. Michaeli, Y. Shechtman, Deep-STORM: super-resolution single-molecule microscopy by deep learning, Optica 5 (4) (2018) 458–464
  40. [41] J. Lee, Y. C. No, Y. Kim, D. Kim, A large-depth-range layer-based hologram dataset for machine learning-based 3D computer-generated holography (2025). arXiv:2512.21040. https://arxiv.org/abs/2512.21040
  41. [42] H. Yu, Y. Kim, D. Yang, W. Seo, Y. Kim, J.-Y. Hong, H. Song, G. Sung, Y. Sung, S.-W. Min, et al., Deep learning-based incoherent holographic camera enabling acquisition of real-world holograms for holographic streaming system, Nature Communications 14 (1) (2023) 3534
  42. [43] C. Zhong, X. Sang, B. Yan, H. Li, X. Xie, X. Qin, S. Chen, Real-time 4K computer-generated hologram based on encoding conventional neural network with learned layered phase, Scientific Reports 13 (1) (2023) 19372
  43. [44] Z. Jin, Q. Ren, T. Chen, Z. Dai, F. Shu, B. Fang, Z. Hong, C. Shen, S. Mei, Vision transformer empowered physics-driven deep learning for omnidirectional three-dimensional holography, Optics Express 32 (8) (2024) 14394–14404
  44. [45] N. Liu, K. Liu, Y. Yang, Y. Peng, L. Cao, Propagation-adaptive 4K computer-generated holography using physics-constrained spatial and Fourier neural operator, Nature Communications 16 (1) (2025) 7761
  45. [46] K. Matsushima, Introduction to Computer Holography: Creating Computer-Generated Holograms as the Ultimate 3D Image, Springer Nature, 2020
  46. [47] Z. He, X. Sui, G. Jin, D. Chu, L. Cao, Optimal quantization for amplitude and phase in computer-generated holography, Optics Express 29 (1) (2020) 119–133
  47. [48] M. Arjovsky, A. Shah, Y. Bengio, Unitary evolution recurrent neural networks, in: International Conference on Machine Learning, PMLR, 2016, pp. 1120–1128
  48. [49] N. Guberman, On complex valued convolutional neural networks, arXiv preprint arXiv:1602.09046 (2016)
  49. [50] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, Y. Fu, Residual dense network for image super-resolution, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2472–2481
  50. [51] B. Lim, S. Son, H. Kim, S. Nah, K. Mu Lee, Enhanced deep residual networks for single image super-resolution, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2017, pp. 136–144
  51. [52] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, Z. Wang, Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1874–1883
  52. [53] J. Seo, J. Lee, J. Lee, H. Ko, Deep compression network for enhancing numerical reconstruction quality of full-complex holograms, Optics Express 31 (15) (2023) 24573–24597
  53. [54] J. Barg, C. Lee, C. Lee, M. Jang, Adaptable deep learning for holographic microscopy: a case study on tissue type and system variability in label-free histopathology, Advanced Photonics Nexus 4 (2) (2025) 026005
  54. [55] J. Shi, X. Zhu, H. Wang, L. Song, Q. Guo, Label enhanced and patch based deep learning for phase retrieval from single frame fringe pattern in fringe projection 3D measurement, Optics Express 27 (20) (2019) 28929–28943
  55. [56] B. Chen, Z. Li, Y. Zhou, Y. Zhang, J. Jia, Y. Wang, Deep-learning multiscale digital holographic intensity and phase reconstruction, Applied Sciences 13 (17) (2023) 9806
  56. [57] Y. Zhang, J. Zhao, Q. Fan, W. Zhang, S. Yang, Improving the reconstruction quality with extension and apodization of the digital hologram, Applied Optics 48 (16) (2009) 3070–3074
  57. [58] S. Chang, D. Wang, Y. Wang, J. Zhao, L. Rong, Improving the phase measurement by the apodization filter in the digital holography, in: Holography, Diffractive Optics, and Applications V, Vol. 8556, SPIE, 2012, pp. 342–348
  58. [59] Y. Nagahama, Reducing ringing artifacts for hologram reconstruction by extracting patterns of ringing artifacts, Optics Continuum 2 (2) (2023) 361–369
  59. [60] P. Chakravarthula, Y. Peng, J. Kollin, H. Fuchs, F. Heide, Wirtinger holography for near-eye displays, ACM Transactions on Graphics (TOG) 38 (6) (2019) 1–13
  60. [61] J. Zhang, N. Pégard, J. Zhong, H. Adesnik, L. Waller, 3D computer-generated holography by non-convex optimization, Optica 4 (10) (2017) 1306–1313
  61. [62] K. Matsushima, T. Shimobaba, Band-limited angular spectrum method for numerical simulation of free-space propagation in far and near fields, Optics Express 17 (22) (2009) 19662–19673
  62. [63] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, O. Wang, The unreasonable effectiveness of deep features as a perceptual metric, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 586–595
  63. [64] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, W. Chen, et al., LoRA: Low-rank adaptation of large language models, in: International Conference on Learning Representations (ICLR), 2022
  64. [65] T. Roosendaal, Big Buck Bunny, in: ACM SIGGRAPH ASIA 2008 Computer Animation Festival, 2008, pp. 62–62
  65. [66] J. Cai, H. Zeng, H. Yong, Z. Cao, L. Zhang, Toward real-world single image super-resolution: A new benchmark and a new model, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 3086–3095
  66. [67] L. Yang, B. Kang, Z. Huang, Z. Zhao, X. Xu, J. Feng, H. Zhao, Depth Anything V2, Advances in Neural Information Processing Systems 37 (2024) 21875–21911
  67. [68] S. Chen, H. Guo, S. Zhu, F. Zhang, Z. Huang, J. Feng, B. Kang, Video Depth Anything: Consistent depth estimation for super-long videos, in: Proceedings of the Computer Vision and Pattern Recognition Conference, 2025, pp. 22831–22840
  68. [69] I. Loshchilov, F. Hutter, Decoupled weight decay regularization, arXiv preprint arXiv:1711.05101 (2017)
  69. [70] I. Loshchilov, F. Hutter, SGDR: Stochastic gradient descent with warm restarts, arXiv preprint arXiv:1608.03983 (2016)
  70. [71] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al., Photo-realistic single image super-resolution using a generative adversarial network, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4681–4690