pith. machine review for the scientific record.

arxiv: 2605.07650 · v1 · submitted 2026-05-08 · 💻 cs.CV · eess.IV

Recognition: 2 theorem links · Lean Theorem

Breaking Spatial Uniformity: Prior-Guided Mamba with Radial Serialization for Lens Flare Removal

Authors on Pith: no claims yet

Pith reviewed 2026-05-11 02:41 UTC · model grok-4.3

classification 💻 cs.CV eess.IV
keywords lens flare removal · Mamba · state space models · image restoration · radial serialization · adaptive restoration · prior estimation · night photography

The pith

Prior-guided Mamba with radial serialization removes lens flares by adapting restoration to different image regions.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Lens flare scenes require different treatment across regions: saturated light sources must stay intact, flare artifacts must be cleared, and background details must be recovered. Most restoration methods apply the same operations everywhere and therefore fall short. The paper introduces a Flare Prior Network to estimate these region-specific needs and a radial serialization step that samples the image along radial lines to break uniform processing and strengthen long-range modeling inside state space models. The backbone then follows a dual-level adaptive scheme that protects light sources and applies graduated, pixel-calibrated restoration to the rest. Experiments indicate this combination delivers stronger results than prior methods while using fewer parameters.
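The radial serialization idea can be made concrete with a small sketch (a hypothetical reconstruction, not the authors' code; the ray count, sample count, and center-finding are assumptions): pixels are read out along rays emanating from an estimated light-source center, so tokens near the source appear early in every sequence and flare attenuation becomes roughly monotone along each ray.

```python
import numpy as np

def radial_serialize(img, center, n_rays=64, n_samples=128):
    """Sample an H x W image along rays from `center`, returning a
    (n_rays * n_samples,) token sequence ordered ray-major (angle, then
    radius). A sketch of radial serialization; the paper's exact scheme
    may differ."""
    h, w = img.shape
    cy, cx = center
    max_r = np.hypot(max(cy, h - cy), max(cx, w - cx))
    angles = np.linspace(0.0, 2 * np.pi, n_rays, endpoint=False)
    radii = np.linspace(0.0, max_r, n_samples)
    # (n_rays, n_samples) grids of sample coordinates, clipped to the image
    ys = np.clip((cy + np.outer(np.sin(angles), radii)).round().astype(int), 0, h - 1)
    xs = np.clip((cx + np.outer(np.cos(angles), radii)).round().astype(int), 0, w - 1)
    return img[ys, xs].reshape(-1)  # flatten ray by ray

img = np.random.rand(64, 64)
seq = radial_serialize(img, center=(32, 32))
print(seq.shape)  # (8192,) -- 64 rays x 128 samples
```

Every ray starts at radius zero, so the light-source pixel leads each per-ray subsequence; a state space model consuming this sequence sees the source-to-periphery attenuation in scan order.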

Core claim

The paper claims that estimating flare priors with a dedicated network and applying radial serialization to enable targeted sampling allows a Mamba backbone to perform region-dependent restoration, preserving light sources while removing artifacts and recovering details, and that this yields state-of-the-art performance with a smaller parameter count.

What carries the argument

The Flare Prior Network that estimates region-dependent priors, combined with radial serialization that performs flare-aware targeted sampling to improve long-range modeling in state space models.
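A minimal sketch of what the dual-level scheme could look like (the saturation threshold and blending rule here are illustrative assumptions, not the paper's definitions): saturated light-source pixels pass through untouched, while elsewhere the restored output is blended in with a per-pixel weight taken from the estimated flare prior.

```python
import numpy as np

def dual_level_restore(inp, restored, flare_prior, sat_thresh=0.98):
    """Preserve saturated light-source pixels; elsewhere blend the
    restored image in proportion to the estimated flare prior.
    Illustrative only -- threshold and blend are assumed, not from the paper."""
    source_mask = inp >= sat_thresh              # level 1: protect light sources
    weight = np.clip(flare_prior, 0.0, 1.0)      # level 2: pixel-calibrated intensity
    out = (1.0 - weight) * inp + weight * restored
    out[source_mask] = inp[source_mask]          # explicit preservation
    return out

rng = np.random.default_rng(0)
inp = rng.random((8, 8)) * 0.9                   # unsaturated background
inp[0, 0] = 1.0                                  # one saturated source pixel
restored = np.zeros_like(inp)                    # stand-in for the network output
prior = np.full_like(inp, 0.5)                   # uniform prior for illustration
out = dual_level_restore(inp, restored, prior)
print(out[0, 0])  # 1.0 -- the source pixel survives untouched
```

The point of the two levels is visible here: the hard mask guarantees preservation regardless of the prior, while the soft weight lets restoration strength vary per pixel.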

If this is right

  • Light-source regions are explicitly preserved instead of over-processed.
  • Contaminated areas receive curriculum-based restoration with pixel-level intensity calibration.
  • The overall approach reaches state-of-the-art accuracy on lens flare removal while using fewer parameters than earlier methods.
  • Spatially uniform processing is shown to be insufficient for scenes with varying degradation needs.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same prior-plus-radial-sampling pattern could be tested on other non-uniform degradations such as rain streaks or localized shadows.
  • If radial serialization proves effective here, it may benefit other state-space-model vision tasks that suffer from spatially homogeneous token ordering.
  • Accurate prior estimation appears necessary when the goal is selective preservation rather than blanket enhancement.

Load-bearing premise

The Flare Prior Network must reliably estimate the region-dependent priors and the radial serialization must improve long-range modeling for flare scenes.

What would settle it

Removing the prior network or the radial serialization step and finding that performance on standard flare-removal benchmarks stays equal to or exceeds the full model would show the central claim is not necessary.

Figures

Figures reproduced from arXiv: 2605.07650 by Yuanfei Huang, Zijia Fu, Lizhi Wang, and Hua Huang (School of Artificial Intelligence, Beijing Normal University, Beijing, China).

Figure 1: Comparison of lens flare removal strategies: (a) Output from our …
Figure 2: The overall architecture of the proposed Mamba model for flare removal in night-time images (DeflareMambav2). The main network adopts a U-shaped …
Figure 3: Visualization of the radial attenuation characteristics of flares. …
Figure 4: Visualizations of evaluation benchmarks. From left to right: input, …
Figure 5: Visual comparisons on challenging real-world scenes. The top three rows are Flare7K-real, and the bottom three rows are FlareX. Our DeflareMambav2 …
Figure 6: Visualizations of the explicit priors generated by our FPN and the corresponding ablation results. From left to right: (a) Input images; the three …
Figure 7: Visual impact of different training strategies on prior extraction.
read the original abstract

Lens flares, caused by complex optical aberrations, severely degrade image quality, especially in nighttime photography. Although recent restoration methods have made remarkable progress, most still rely on spatially uniform processing, failing to handle the region-dependent restoration demands of flare scenes, where saturated light sources should be preserved, flare artifacts removed, and background details recovered. To address this challenge, we propose DeflareMambav2, a prior-guided Mamba framework for lens flare removal. Specifically, we introduce a Flare Prior Network (FPN) to estimate flare priors and guide adaptive restoration. Besides, a novel radial serialization strategy breaks spatially homogeneous processing by performing flare-aware targeted sampling, and better supports long-range modeling in State Space Models (SSMs). Based on these priors, the backbone adopts a dual-level adaptive scheme. It explicitly preserves light-source regions to avoid over-processing, and applies curriculum-based restoration to the remaining contaminated areas while calibrating restoration intensity at the pixel level. Extensive experiments demonstrate that DeflareMambav2 achieves state-of-the-art performance with reduced parameter burden. Code is available at https://github.com/BNU-ERC-ITEA/DeflareMambav2.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 0 minor

Summary. The paper proposes DeflareMambav2, a prior-guided Mamba framework for lens flare removal. It introduces a Flare Prior Network (FPN) to estimate region-dependent flare priors that guide adaptive restoration, along with a novel radial serialization strategy that performs flare-aware targeted sampling to break spatial uniformity and improve long-range modeling in State Space Models. The backbone uses a dual-level adaptive scheme to preserve light-source regions and apply curriculum-based, pixel-level calibrated restoration to contaminated areas. The authors claim that extensive experiments show state-of-the-art performance with a reduced parameter burden, and code is publicly released.

Significance. If the performance claims and the contribution of radial serialization hold, the work would advance efficient, region-adaptive restoration for spatially varying degradations such as lens flares, offering a parameter-light alternative to CNN- or Transformer-based methods. The public code release at https://github.com/BNU-ERC-ITEA/DeflareMambav2 supports reproducibility and is a clear strength.

major comments (2)
  1. [Method (radial serialization strategy) and Experiments (ablation studies)] The central claim that radial serialization delivers a concrete gain in SSM long-range dependency capture for region-dependent flare scenes (thereby justifying the 'breaking spatial uniformity' contribution) lacks direct ablation support. No comparison of serialization orders (radial vs. raster vs. other curves) is provided while holding the Mamba backbone and FPN fixed; if standard serialization yields comparable PSNR/SSIM, the necessity of the new strategy is unsupported.
  2. [Abstract] The abstract asserts state-of-the-art results from extensive experiments, yet provides no quantitative metrics, dataset details, baseline comparisons, or error analysis. This leaves the central performance claim without verifiable support in the available text and makes it impossible to assess whether the dual-level adaptive scheme and FPN priors actually deliver the claimed gains.
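The controlled comparison the first comment asks for hinges on one fact: serialization orders are just different permutations of the same pixel set, so an ablation can swap the permutation while holding backbone and priors fixed. A small sketch (the orderings below are illustrative, not the paper's exact scans):

```python
import numpy as np

def raster_order(h, w):
    """Row-major scan: the spatially uniform baseline ordering."""
    return np.arange(h * w)

def radial_order(h, w, center):
    """Permutation sorting pixels by distance from `center`, ties broken
    by angle -- an illustrative radial scan, not the paper's definition."""
    ys, xs = np.indices((h, w))
    dy, dx = ys - center[0], xs - center[1]
    r = np.hypot(dy, dx).ravel()
    theta = np.arctan2(dy, dx).ravel()
    return np.lexsort((theta, r))  # primary key: radius; secondary: angle

h, w = 16, 16
img = np.random.rand(h, w).ravel()
for name, order in [("raster", raster_order(h, w)),
                    ("radial", radial_order(h, w, (8, 8)))]:
    seq = img[order]  # same pixels, different ordering -> same backbone
    assert np.allclose(np.sort(seq), np.sort(img))  # a permutation, nothing more
    print(name, seq[:3])
```

In a real ablation each ordering would feed the identical Mamba backbone and FPN, and PSNR/SSIM on the flare benchmarks would be compared across orderings; the sketch only demonstrates that the swap is a clean, isolated intervention.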

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments and the opportunity to improve the manuscript. We address each major point below and will incorporate revisions as indicated.

read point-by-point responses
  1. Referee: [Method (radial serialization strategy) and Experiments (ablation studies)] The central claim that radial serialization delivers a concrete gain in SSM long-range dependency capture for region-dependent flare scenes (thereby justifying the 'breaking spatial uniformity' contribution) lacks direct ablation support. No comparison of serialization orders (radial vs. raster vs. other curves) is provided while holding the Mamba backbone and FPN fixed; if standard serialization yields comparable PSNR/SSIM, the necessity of the new strategy is unsupported.

    Authors: We agree that a direct ablation isolating the serialization strategy (radial vs. raster vs. alternative curves) with fixed Mamba backbone and FPN would strengthen the claim. The current experiments focus on overall system performance and component contributions but do not include this specific controlled comparison. In the revised manuscript we will add an ablation table reporting PSNR/SSIM for radial serialization against raster order and at least one other curve-based ordering, using the same backbone and priors. This will provide quantitative evidence for the benefit in long-range modeling on flare scenes. revision: yes

  2. Referee: [Abstract] The abstract asserts state-of-the-art results from extensive experiments, yet provides no quantitative metrics, dataset details, baseline comparisons, or error analysis. This leaves the central performance claim without verifiable support in the available text and makes it impossible to assess whether the dual-level adaptive scheme and FPN priors actually deliver the claimed gains.

    Authors: We acknowledge that the abstract as written is qualitative and does not include numerical results. While abstracts in the field are often kept concise, we agree that adding key metrics would improve verifiability. In the revision we will update the abstract to include the main quantitative gains (e.g., average PSNR/SSIM improvements over baselines), the primary datasets used, and a brief mention of the dual-level scheme and FPN contribution, while preserving length constraints. revision: yes

Circularity Check

0 steps flagged

No circularity; novel components validated externally

full rationale

The paper introduces a Flare Prior Network (FPN) for estimating region-dependent priors and a radial serialization strategy to improve long-range modeling in State Space Models, followed by a dual-level adaptive restoration scheme. These elements are presented as new architectural contributions whose effectiveness is assessed via extensive experiments on standard benchmarks and a public code release, rather than through self-referential definitions, fitted parameters renamed as predictions, or load-bearing self-citations. No equations or uniqueness theorems reduce the claims to their own inputs by construction, leaving the derivation chain open to external validation.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 2 invented entities

The central claim rests on the effectiveness of two newly introduced components whose performance is demonstrated only through the paper's own experiments rather than independent external validation.

axioms (1)
  • domain assumption State Space Models benefit from radial serialization for modeling long-range dependencies in flare-contaminated images
    Invoked to justify the radial serialization strategy as better supporting SSM long-range modeling.
invented entities (2)
  • Flare Prior Network (FPN) no independent evidence
    purpose: Estimates flare priors to guide adaptive restoration
    New network component introduced to provide region-specific guidance.
  • Radial serialization strategy no independent evidence
    purpose: Breaks spatial uniformity via flare-aware targeted sampling
    Novel sampling method proposed to improve Mamba processing of flare scenes.

pith-pipeline@v0.9.0 · 5533 in / 1381 out tokens · 46333 ms · 2026-05-11T02:41:29.750004+00:00 · methodology


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

44 extracted references · 44 canonical work pages

  1. [1]

    How to train neural networks for flare removal,

    Y. Wu, Q. He, T. Xue, R. Garg, J. Chen, A. Veeraraghavan, and J. T. Barron, “How to train neural networks for flare removal,” in IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2021, pp. 2239–2247.

  2. [2]

    Flare7k: A phenomenological nighttime flare removal dataset,

    Y. Dai, C. Li, S. Zhou, R. Feng, and C. C. Loy, “Flare7k: A phenomenological nighttime flare removal dataset,” Adv. Neural Inform. Process. Syst. (NeurIPS), vol. 35, pp. 3926–3937, 2022.

  3. [3]

    Flare7k++: Mixing synthetic and real datasets for nighttime flare removal and beyond,

    Y. Dai, C. Li, S. Zhou, R. Feng, Y. Luo, and C. C. Loy, “Flare7k++: Mixing synthetic and real datasets for nighttime flare removal and beyond,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 46, no. 11, pp. 7041–7055, 2024.

  4. [4]

    Nighttime visibility enhancement by increasing the dynamic range and suppression of light effects,

    A. Sharma and R. T. Tan, “Nighttime visibility enhancement by increasing the dynamic range and suppression of light effects,” in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2021, pp. 11972–11981.

  5. [5]

    Hinet: Half instance normalization network for image restoration,

    L. Chen, X. Lu, J. Zhang, X. Chu, and C. Chen, “Hinet: Half instance normalization network for image restoration,” IEEE/CVF Conf. Comput. Vis. Pattern Recog. Worksh., pp. 182–192, 2021.

  6. [6]

    Multi-stage progressive image restoration,

    S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, M.-H. Yang, and L. Shao, “Multi-stage progressive image restoration,” in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2021, pp. 14816–14826.

  7. [7]

    Toward blind flare removal using knowledge-driven flare-level estimator,

    H. Deng, L. Li, F. Zhang, Z. Li, B. Xu, Q. Lu, C. Gao, and N. Sang, “Toward blind flare removal using knowledge-driven flare-level estimator,” IEEE Trans. Image Process. (TIP), vol. 33, pp. 6114–6128, 2024.

  8. [8]

    Flare-free vision: Empowering uformer with depth insights,

    Y. Kotp and M. Torki, “Flare-free vision: Empowering uformer with depth insights,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), 2024, pp. 2565–2569.

  9. [9]

    Lpfsformer: Location prior guided frequency and spatial interactive learning for nighttime flare removal,

    G.-Y. Chen, W. Dong, G. Fan, J.-N. Su, M. Gan, and C. L. Philip Chen, “Lpfsformer: Location prior guided frequency and spatial interactive learning for nighttime flare removal,” IEEE Trans. Circuit Syst. Video Technol. (TCSVT), vol. 35, no. 4, pp. 3706–3718, 2025.

  10. [10]

    PBFG: A New Physically-Based Dataset and Removal of Lens Flares and Glares,

    J. Zhu and S. Lee, “PBFG: A New Physically-Based Dataset and Removal of Lens Flares and Glares,” in IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2025, pp. 5448–5457.

  11. [11]

    Self-prior guided spatial and fourier transformer for nighttime flare removal,

    T. Ma, Z. Kai, X. Miao, J. Liang, J. Peng, Y. Wang, H. Wang, and X. Liu, “Self-prior guided spatial and fourier transformer for nighttime flare removal,” IEEE Trans. Autom. Sci. Eng. (T-ASE), vol. 22, pp. 11996–12011, 2025.

  12. [12]

    Deflaremamba: Hierarchical vision mamba for contextually consistent lens flare removal,

    Y. Huang, Y. Huang, J. Lin, and H. Huang, “Deflaremamba: Hierarchical vision mamba for contextually consistent lens flare removal,” in ACM Int. Conf. Multimedia (ACMMM), 2025, pp. 8028–8037.

  13. [13]

    Geometry by deflaring,

    F. Koreban and Y. Y. Schechner, “Geometry by deflaring,” in 2009 IEEE Int. Conf. Comput. Photography (ICCP), 2009, pp. 1–8.

  14. [14]

    Stray light calibration of the Dawn Framing Camera,

    G. Kovacs, H. Sierks, A. Nathues, M. Richards, and P. Gutierrez-Marques, “Stray light calibration of the Dawn Framing Camera,” in Sensors, Systems, and Next-Generation Satellites XVII, vol. 8889. International Society for Optics and Photonics, SPIE, 2013, p. 888912.

  15. [15]

    Auto removal of bright spot from images captured against flashing light source,

    C. S. Asha, S. Bhat, D. R. Nayak, and C. Bhat, “Auto removal of bright spot from images captured against flashing light source,” 2019 IEEE Int. Conf. Distrib. Comput. VLSI Electr. Circuits Robot. (DISCOVER), pp. 1–6, 2019.

  16. [16]

    Uformer: A general u-shaped transformer for image restoration,

    Z. Wang, X. Cun, J. Bao, W. Zhou, J. Liu, and H. Li, “Uformer: A general u-shaped transformer for image restoration,” in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2022, pp. 17662–17672.

  17. [17]

    Safaformer: Scale-aware frequency-adaptive guidance for nighttime flare removal,

    W. Dong, G. Fan, F. Zhang, M. Gan, G.-Y. Chen, and C. L. Philip Chen, “Safaformer: Scale-aware frequency-adaptive guidance for nighttime flare removal,” IEEE Trans. Circuit Syst. Video Technol. (TCSVT), vol. 36, no. 1, pp. 93–105, 2026.

  18. [18]

    Beyond image prior: Embedding noise prior into latent space of conditional denoising transformer,

    Y. Huang and H. Huang, “Beyond image prior: Embedding noise prior into latent space of conditional denoising transformer,” Int. J. Comput. Vis. (IJCV), vol. 133, no. 11, pp. 7591–7611, 2025.

  19. [19]

    Illumination-guided grouped attention and masked progressive denoising for low-light image enhancement,

    H. Da, Y. Niu, L. Qiu, F. Li, T. Zhao, and Y. Chen, “Illumination-guided grouped attention and masked progressive denoising for low-light image enhancement,” IEEE Trans. Multimedia (TMM), pp. 1–13, 2026.

  20. [20]

    A prior guided wavelet-spatial dual attention transformer framework for heavy rain image restoration,

    R. Zhang, J. Yu, J. Chen, G. Li, L. Lin, and D. Wang, “A prior guided wavelet-spatial dual attention transformer framework for heavy rain image restoration,” IEEE Trans. Multimedia (TMM), vol. 26, pp. 7043–7057, 2024.

  21. [21]

    Learning scene structure guidance via cross-task knowledge transfer for single depth super-resolution,

    B. Sun, X. Ye, B. Li, H. Li, Z. Wang, and R. Xu, “Learning scene structure guidance via cross-task knowledge transfer for single depth super-resolution,” in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2021, pp. 7788–7797.

  22. [22]

    Learning detail-structure alternative optimization for blind super-resolution,

    F. Li, Y. Wu, H. Bai, W. Lin, R. Cong, and Y. Zhao, “Learning detail-structure alternative optimization for blind super-resolution,” IEEE Trans. Multimedia (TMM), vol. 25, pp. 2825–2838, 2023.

  23. [23]

    Transitional learning: Exploring the transition states of degradation for blind super-resolution,

    Y. Huang, J. Li, Y. Hu, X. Gao, and H. Huang, “Transitional learning: Exploring the transition states of degradation for blind super-resolution,” IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 45, no. 5, pp. 6495–6510, 2022.

  24. [24]

    Dacesr: Degradation-aware conditional embedding for real-world image super-resolution,

    X. Lei, W. Zhang, B. Luo, H. Liang, W. Cao, and Q. Lin, “Dacesr: Degradation-aware conditional embedding for real-world image super-resolution,” IEEE Trans. Image Process. (TIP), 2026.

  25. [25]

    Promptir: Prompting for all-in-one image restoration,

    V. Potlapalli, S. W. Zamir, S. H. Khan, and F. Shahbaz Khan, “Promptir: Prompting for all-in-one image restoration,” in Adv. Neural Inform. Process. Syst. (NeurIPS), 2023.

  26. [26]

    Progressive prompt-driven low-light image enhancement with frequency aware learning,

    X. Sun, D. Cheng, Y. Li, N. Wang, D. Zhang, X. Gao, and J. Sun, “Progressive prompt-driven low-light image enhancement with frequency aware learning,” IEEE Trans. Multimedia (TMM), vol. 27, pp. 6620–6634, 2025.

  27. [27]

    Mamba: Linear-time sequence modeling with selective state spaces,

    A. Gu and T. Dao, “Mamba: Linear-time sequence modeling with selective state spaces,” in Conf. Lang. Model. (COLM), 2024.

  28. [28]

    Vdmamba: Vector decomposition in vision mamba for image deraining and beyond,

    K. Jiang, J. Jiang, S. Wang, W. Ren, C.-W. Lin, and Z. Li, “Vdmamba: Vector decomposition in vision mamba for image deraining and beyond,” IEEE Trans. Multimedia (TMM), pp. 1–13, 2026.

  29. [29]

    Vision mamba: Efficient visual representation learning with bidirectional state space model,

    L. Zhu, B. Liao, Q. Zhang, X. Wang, W. Liu, and X. Wang, “Vision mamba: Efficient visual representation learning with bidirectional state space model,” in Int. Conf. Mach. Learn. (ICML), 2024, pp. 62429–62442.

  30. [30]

    Vmamba: visual state space model,

    Y. Liu, Y. Tian, Y. Zhao, H. Yu, L. Xie, Y. Wang, Q. Ye, J. Jiao, and Y. Liu, “Vmamba: visual state space model,” in Adv. Neural Inform. Process. Syst. (NeurIPS), 2024.

  31. [31]

    Mambair: A simple baseline for image restoration with state-space model,

    H. Guo, J. Li, T. Dai, Z. Ouyang, X. Ren, and S.-T. Xia, “Mambair: A simple baseline for image restoration with state-space model,” in Eur. Conf. Comput. Vis. (ECCV), 2025.

  32. [32]

    Eamamba: Efficient all-around vision state space model for image restoration,

    Y.-C. Lin, Y.-S. Xu, H.-W. Chen, H.-K. Kuo, and C.-Y. Lee, “Eamamba: Efficient all-around vision state space model for image restoration,” IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2025.

  33. [33]

    Mambairv2: Attentive state space restoration,

    H. Guo, Y. Guo, Y. Zha, Y. Zhang, W. Li, T. Dai, S.-T. Xia, and Y. Li, “Mambairv2: Attentive state space restoration,” in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2025.

  34. [34]

    Deformable convolutional networks,

    J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei, “Deformable convolutional networks,” in IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2017, pp. 764–773.

  35. [35]

    Swin transformer: Hierarchical vision transformer using shifted windows,

    Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, “Swin transformer: Hierarchical vision transformer using shifted windows,” IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pp. 9992–10002, 2021.

  36. [36]

    Improving lens flare removal with general-purpose pipeline and multiple light sources recovery,

    Y. Zhou, Y. Li, H. Lin, H. Qiao et al., “Improving lens flare removal with general-purpose pipeline and multiple light sources recovery,” in IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2023, pp. 12345–12354.

  37. [37]

    Objects as points,

    X. Zhou, D. Wang, and P. Krähenbühl, “Objects as points,” in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2019, pp. 4843–4851.

  38. [38]

    Cornernet: Detecting objects as paired keypoints,

    H. Law and J. Deng, “Cornernet: Detecting objects as paired keypoints,” Int. J. Comput. Vis. (IJCV), vol. 128, pp. 642–656, 2020.

  39. [39]

    Flarex: A physics-informed dataset for lens flare removal via 2d synthesis and 3d rendering,

    L. Qu, Z. Liu, J. Pan, S. Zhou, J. Shi, D. Chen, and J. Yang, “Flarex: A physics-informed dataset for lens flare removal via 2d synthesis and 3d rendering,” Adv. Neural Inform. Process. Syst. (NeurIPS), 2025.

  40. [40]

    Image quality assessment: from error visibility to structural similarity,

    Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. (TIP), vol. 13, no. 4, pp. 600–612, 2004.

  41. [41]

    The unreasonable effectiveness of deep features as a perceptual metric,

    R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2018, pp. 586–595.

  42. [42]

    Adam: A method for stochastic optimization,

    D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Int. Conf. Learn. Represent. (ICLR), 2015.

  43. [43]

    Deep laplacian pyramid networks for fast and accurate super-resolution,

    W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang, “Deep laplacian pyramid networks for fast and accurate super-resolution,” in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2017, pp. 624–632.

  44. [44]

    Perceptual losses for real-time style transfer and super-resolution,

    J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in Eur. Conf. Comput. Vis. (ECCV), 2016, pp. 694–711.