Breaking Spatial Uniformity: Prior-Guided Mamba with Radial Serialization for Lens Flare Removal
Recognition: 2 theorem links · Lean Theorem
Pith reviewed 2026-05-11 02:41 UTC · model grok-4.3
The pith
Prior-guided Mamba with radial serialization removes lens flares by adapting restoration to different image regions.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper claims that estimating flare priors with a dedicated network and applying radial serialization for targeted sampling allow a Mamba backbone to perform region-dependent restoration, preserving light sources while removing artifacts and recovering details, and that this yields state-of-the-art performance with a smaller parameter count.
What carries the argument
The Flare Prior Network that estimates region-dependent priors, combined with radial serialization that performs flare-aware targeted sampling to improve long-range modeling in state space models.
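A minimal sketch of the radial serialization step as described here, assuming the prior network outputs a per-token flare map and that tokens are simply reordered by descending prior value before a state-space scan; the function names and the plain argsort are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def radial_serialize(tokens, prior):
    """Reorder flattened image tokens by descending flare prior.

    tokens: (L, C) array of flattened features.
    prior:  (L,) per-token flare prior from the prior network.
    Returns the reordered sequence and the permutation that undoes it.
    """
    order = np.argsort(-prior)    # highest-prior (most flare-affected) tokens first
    inverse = np.argsort(order)   # inverse permutation back to the spatial layout
    return tokens[order], inverse

def radial_deserialize(seq, inverse):
    """Map the processed sequence back to the original spatial order."""
    return seq[inverse]

# Toy usage: an 8x8 feature map with 4 channels and a synthetic prior peaked at the center.
H, W, C = 8, 8, 4
feats = np.random.rand(H * W, C)
yy, xx = np.mgrid[0:H, 0:W]
prior = np.exp(-((yy - H / 2) ** 2 + (xx - W / 2) ** 2) / 8.0).ravel()

seq, inv = radial_serialize(feats, prior)
# ... a state-space (Mamba-style) scan would process `seq` here ...
assert np.allclose(radial_deserialize(seq, inv), feats)
```

The descending sort mirrors the D(π(1)) ≥ D(π(2)) ≥ ⋯ ≥ D(π(L)) ordering quoted in the Lean-theorem section below.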
If this is right
- Light-source regions are explicitly preserved instead of over-processed.
- Contaminated areas receive curriculum-based restoration with pixel-level intensity calibration (a minimal calibration sketch follows this list).
- The overall approach reaches state-of-the-art accuracy on lens flare removal while using fewer parameters than earlier methods.
- Spatially uniform processing is shown to be insufficient for scenes with varying degradation needs.
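A minimal sketch of the dual-level idea listed above, assuming the flare prior acts as a per-pixel blending weight and a separate mask marks saturated light sources; the blending form and the inputs are illustrative assumptions, not the paper's exact calibration scheme.

```python
import numpy as np

def calibrate_restoration(flared, restored, flare_prior, light_mask):
    """Blend the input and the restored estimate with a per-pixel weight.

    flared:      (H, W, 3) input image in [0, 1].
    restored:    (H, W, 3) network output in [0, 1].
    flare_prior: (H, W) values in [0, 1]; high values mark contaminated pixels.
    light_mask:  (H, W) boolean; True where saturated light sources must be preserved.
    """
    w = np.clip(flare_prior, 0.0, 1.0)[..., None]             # restoration intensity per pixel
    blended = (1.0 - w) * flared + w * restored               # restore strongly only where the prior is high
    return np.where(light_mask[..., None], flared, blended)   # copy light sources straight from the input
```

Pixels under the light-source mask are passed through untouched, while everything else is restored with an intensity proportional to the estimated prior.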
Where Pith is reading between the lines
- The same prior-plus-radial-sampling pattern could be tested on other non-uniform degradations such as rain streaks or localized shadows.
- If radial serialization proves effective here, it may benefit other state-space-model vision tasks that suffer from spatially homogeneous token ordering.
- Accurate prior estimation appears necessary when the goal is selective preservation rather than blanket enhancement.
Load-bearing premise
The Flare Prior Network must reliably estimate the region-dependent priors and the radial serialization must improve long-range modeling for flare scenes.
What would settle it
Removing the prior network or the radial serialization step and finding that performance on standard flare-removal benchmarks matches or exceeds that of the full model would show these central components are not necessary.
original abstract
Lens flares, caused by complex optical aberrations, severely degrade image quality especially in nighttime photography. Although recent restoration methods have made remarkable progress, most still rely on spatially uniform processing. They are failing to handle the region-dependent restoration demands of flare scenes, where saturated light sources should be preserved, flare artifacts removed, and background details recovered. To address this challenge, we propose DeflareMambav2, a prior-guided Mamba framework for lens flare removal. Specifically, we introduce a Flare Prior Network (FPN) to estimate flare priors and guide adaptive restoration. Besides, a novel radial serialization strategy breaks spatially homogeneous processing by performing flare-aware targeted sampling, and better supports long-range modeling in State Space Models (SSMs). Based on these priors, the backbone adopts a dual-level adaptive scheme. It explicitly preserves light-source regions to avoid over-processing, and applies curriculum-based restoration to the remaining contaminated areas while calibrating restoration intensity at the pixel level. Extensive experiments demonstrate that DeflareMambav2 achieves state-of-the-art performance with reduced parameter burden. Code is available at https://github.com/BNU-ERC-ITEA/DeflareMambav2.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes DeflareMambav2, a prior-guided Mamba framework for lens flare removal. It introduces a Flare Prior Network (FPN) to estimate region-dependent flare priors that guide adaptive restoration, along with a novel radial serialization strategy that performs flare-aware targeted sampling to break spatial uniformity and improve long-range modeling in State Space Models. The backbone uses a dual-level adaptive scheme to preserve light-source regions and apply curriculum-based, pixel-level calibrated restoration to contaminated areas. The authors claim that extensive experiments show state-of-the-art performance with a reduced parameter burden, and code is publicly released.
Significance. If the performance claims and the contribution of radial serialization hold, the work would advance efficient, region-adaptive restoration for spatially varying degradations such as lens flares, offering a parameter-light alternative to CNN- or Transformer-based methods. The public code release at https://github.com/BNU-ERC-ITEA/DeflareMambav2 supports reproducibility and is a clear strength.
major comments (2)
- [Method (radial serialization strategy) and Experiments (ablation studies)] The central claim that radial serialization delivers a concrete gain in SSM long-range dependency capture for region-dependent flare scenes (thereby justifying the 'breaking spatial uniformity' contribution) lacks direct ablation support. No comparison of serialization orders (radial vs. raster vs. other curves) is provided while holding the Mamba backbone and FPN fixed; if standard serialization yields comparable PSNR/SSIM, the necessity of the new strategy is unsupported.
- [Abstract] The abstract asserts state-of-the-art results from extensive experiments, yet provides no quantitative metrics, dataset details, baseline comparisons, or error analysis. This leaves the central performance claim without verifiable support in the available text and makes it impossible to assess whether the dual-level adaptive scheme and FPN priors actually deliver the claimed gains.
Simulated Author's Rebuttal
We thank the referee for the constructive comments and the opportunity to improve the manuscript. We address each major point below and will incorporate revisions as indicated.
point-by-point responses
- Referee: [Method (radial serialization strategy) and Experiments (ablation studies)] The central claim that radial serialization delivers a concrete gain in SSM long-range dependency capture for region-dependent flare scenes (thereby justifying the 'breaking spatial uniformity' contribution) lacks direct ablation support. No comparison of serialization orders (radial vs. raster vs. other curves) is provided while holding the Mamba backbone and FPN fixed; if standard serialization yields comparable PSNR/SSIM, the necessity of the new strategy is unsupported.
Authors: We agree that a direct ablation isolating the serialization strategy (radial vs. raster vs. alternative curves) with fixed Mamba backbone and FPN would strengthen the claim. The current experiments focus on overall system performance and component contributions but do not include this specific controlled comparison. In the revised manuscript we will add an ablation table reporting PSNR/SSIM for radial serialization against raster order and at least one other curve-based ordering, using the same backbone and priors. This will provide quantitative evidence for the benefit in long-range modeling on flare scenes (a minimal scoring sketch for such a comparison follows these responses). revision: yes
- Referee: [Abstract] The abstract asserts state-of-the-art results from extensive experiments, yet provides no quantitative metrics, dataset details, baseline comparisons, or error analysis. This leaves the central performance claim without verifiable support in the available text and makes it impossible to assess whether the dual-level adaptive scheme and FPN priors actually deliver the claimed gains.
Authors: We acknowledge that the abstract as written is qualitative and does not include numerical results. While abstracts in the field are often kept concise, we agree that adding key metrics would improve verifiability. In the revision we will update the abstract to include the main quantitative gains (e.g., average PSNR/SSIM improvements over baselines), the primary datasets used, and a brief mention of the dual-level scheme and FPN contribution, while preserving length constraints. revision: yes
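A minimal scoring sketch for the serialization ablation promised in the first response, assuming paired restored and ground-truth images saved as PNGs and using recent scikit-image PSNR/SSIM implementations; the folder layout and the serialization-mode names are illustrative assumptions rather than the paper's evaluation code.

```python
import numpy as np
from pathlib import Path
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score_variant(pred_dir, gt_dir):
    """Average PSNR/SSIM over image pairs named identically in both folders."""
    psnrs, ssims = [], []
    for gt_path in sorted(Path(gt_dir).glob("*.png")):
        gt = imread(gt_path).astype(np.float32) / 255.0
        pred = imread(str(Path(pred_dir) / gt_path.name)).astype(np.float32) / 255.0
        psnrs.append(peak_signal_noise_ratio(gt, pred, data_range=1.0))
        ssims.append(structural_similarity(gt, pred, data_range=1.0, channel_axis=-1))
    return float(np.mean(psnrs)), float(np.mean(ssims))

# Hypothetical output folders, one per serialization order, same backbone, FPN, and training schedule.
for mode in ["radial", "raster", "hilbert"]:
    psnr, ssim = score_variant(f"results/{mode}", "results/ground_truth")
    print(f"{mode:>8s}  PSNR {psnr:.2f} dB  SSIM {ssim:.4f}")
```

Such a comparison isolates the serialization strategy only if the backbone, FPN, and training schedule are held fixed across the runs.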
Circularity Check
No circularity; novel components validated externally
full rationale
The paper introduces a Flare Prior Network (FPN) for estimating region-dependent priors and a radial serialization strategy to improve long-range modeling in State Space Models, followed by a dual-level adaptive restoration scheme. These elements are presented as new architectural contributions whose effectiveness is assessed via extensive experiments on standard benchmarks and a public code release, rather than through self-referential definitions, fitted parameters renamed as predictions, or load-bearing self-citations. No equations or uniqueness theorems reduce the claims to their own inputs by construction, so the argument remains open to external validation rather than closing on itself.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: State Space Models benefit from radial serialization for modeling long-range dependencies in flare-contaminated images
invented entities (2)
- Flare Prior Network (FPN): no independent evidence
- Radial serialization strategy: no independent evidence
Lean theorems connected to this paper
- IndisputableMonolith/Foundation/AlexanderDuality.lean · alexander_duality_circle_linking · unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Paper passage: "radial unfold strategy... sorting them descendingly according to D: D(π(1)) ≥ D(π(2)) ≥ ⋯ ≥ D(π(L))"
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Paper passage: "Radial State-space Equation (RSE) utilizes P_flare to dynamically modulate restoration intensity"
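A compact restatement of the ordering quoted in the first link above, assuming D assigns each of the L tokens a prior or radial-distance value and π is the serialization permutation; this paraphrases the quoted passage and is not a statement of the cited Lean theorem.

```latex
\[
  \pi \in S_L \quad \text{chosen so that} \quad
  D(\pi(1)) \;\ge\; D(\pi(2)) \;\ge\; \cdots \;\ge\; D(\pi(L)),
  \qquad \text{serialized sequence: } x_{\pi(1)}, x_{\pi(2)}, \ldots, x_{\pi(L)} .
\]
```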
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] Y. Wu, Q. He, T. Xue, R. Garg, J. Chen, A. Veeraraghavan, and J. T. Barron, "How to train neural networks for flare removal," in IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2021, pp. 2239–2247.
- [2] Y. Dai, C. Li, S. Zhou, R. Feng, and C. C. Loy, "Flare7K: A phenomenological nighttime flare removal dataset," Adv. Neural Inform. Process. Syst. (NeurIPS), vol. 35, pp. 3926–3937, 2022.
- [3] Y. Dai, C. Li, S. Zhou, R. Feng, Y. Luo, and C. C. Loy, "Flare7K++: Mixing synthetic and real datasets for nighttime flare removal and beyond," IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 46, no. 11, pp. 7041–7055, 2024.
- [4] A. Sharma and R. T. Tan, "Nighttime visibility enhancement by increasing the dynamic range and suppression of light effects," in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2021, pp. 11972–11981.
- [5] L. Chen, X. Lu, J. Zhang, X. Chu, and C. Chen, "HINet: Half instance normalization network for image restoration," IEEE/CVF Conf. Comput. Vis. Pattern Recog. Worksh., pp. 182–192, 2021.
- [6] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, M.-H. Yang, and L. Shao, "Multi-stage progressive image restoration," in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2021, pp. 14816–14826.
- [7] H. Deng, L. Li, F. Zhang, Z. Li, B. Xu, Q. Lu, C. Gao, and N. Sang, "Toward blind flare removal using knowledge-driven flare-level estimator," IEEE Trans. Image Process. (TIP), vol. 33, pp. 6114–6128, 2024.
- [8] Y. Kotp and M. Torki, "Flare-free vision: Empowering Uformer with depth insights," in Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), 2024, pp. 2565–2569.
- [9] G.-Y. Chen, W. Dong, G. Fan, J.-N. Su, M. Gan, and C. L. Philip Chen, "Lpfsformer: Location prior guided frequency and spatial interactive learning for nighttime flare removal," IEEE Trans. Circuit Syst. Video Technol. (TCSVT), vol. 35, no. 4, pp. 3706–3718, 2025.
- [10] J. Zhu and S. Lee, "PBFG: A new physically-based dataset and removal of lens flares and glares," in IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2025, pp. 5448–5457.
- [11] T. Ma, Z. Kai, X. Miao, J. Liang, J. Peng, Y. Wang, H. Wang, and X. Liu, "Self-prior guided spatial and Fourier transformer for nighttime flare removal," IEEE Trans. Autom. Sci. Eng. (T-ASE), vol. 22, pp. 11996–12011, 2025.
- [12] Y. Huang, Y. Huang, J. Lin, and H. Huang, "DeflareMamba: Hierarchical vision Mamba for contextually consistent lens flare removal," in ACM Int. Conf. Multimedia (ACMMM), 2025, pp. 8028–8037.
- [13] F. Koreban and Y. Y. Schechner, "Geometry by deflaring," in IEEE Int. Conf. Comput. Photography (ICCP), 2009, pp. 1–8.
- [14] G. Kovacs, H. Sierks, A. Nathues, M. Richards, and P. Gutierrez-Marques, "Stray light calibration of the Dawn Framing Camera," in Sensors, Systems, and Next-Generation Satellites XVII, vol. 8889, International Society for Optics and Photonics, SPIE, 2013, p. 888912.
- [15] C. S. Asha, S. Bhat, D. R. Nayak, and C. Bhat, "Auto removal of bright spot from images captured against flashing light source," IEEE Int. Conf. Distrib. Comput. VLSI Electr. Circuits Robot. (DISCOVER), pp. 1–6, 2019.
- [16] Z. Wang, X. Cun, J. Bao, W. Zhou, J. Liu, and H. Li, "Uformer: A general U-shaped transformer for image restoration," in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2022, pp. 17662–17672.
- [17] W. Dong, G. Fan, F. Zhang, M. Gan, G.-Y. Chen, and C. L. Philip Chen, "Safaformer: Scale-aware frequency-adaptive guidance for nighttime flare removal," IEEE Trans. Circuit Syst. Video Technol. (TCSVT), vol. 36, no. 1, pp. 93–105, 2026.
- [18] Y. Huang and H. Huang, "Beyond image prior: Embedding noise prior into latent space of conditional denoising transformer," Int. J. Comput. Vis. (IJCV), vol. 133, no. 11, pp. 7591–7611, 2025.
- [19] H. Da, Y. Niu, L. Qiu, F. Li, T. Zhao, and Y. Chen, "Illumination-guided grouped attention and masked progressive denoising for low-light image enhancement," IEEE Transactions on Multimedia, pp. 1–13, 2026.
- [20] R. Zhang, J. Yu, J. Chen, G. Li, L. Lin, and D. Wang, "A prior guided wavelet-spatial dual attention transformer framework for heavy rain image restoration," IEEE Trans. Multimedia (TMM), vol. 26, pp. 7043–7057, 2024.
- [21] B. Sun, X. Ye, B. Li, H. Li, Z. Wang, and R. Xu, "Learning scene structure guidance via cross-task knowledge transfer for single depth super-resolution," in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2021, pp. 7788–7797.
- [22] F. Li, Y. Wu, H. Bai, W. Lin, R. Cong, and Y. Zhao, "Learning detail-structure alternative optimization for blind super-resolution," IEEE Transactions on Multimedia, vol. 25, pp. 2825–2838, 2023.
- [23] Y. Huang, J. Li, Y. Hu, X. Gao, and H. Huang, "Transitional learning: Exploring the transition states of degradation for blind super-resolution," IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI), vol. 45, no. 5, pp. 6495–6510, 2022.
- [24] X. Lei, W. Zhang, B. Luo, H. Liang, W. Cao, and Q. Lin, "Dacesr: Degradation-aware conditional embedding for real-world image super-resolution," IEEE Trans. Image Process. (TIP), 2026.
- [25] V. Potlapalli, S. W. Zamir, S. H. Khan, and F. Shahbaz Khan, "PromptIR: Prompting for all-in-one image restoration," in Adv. Neural Inform. Process. Syst. (NeurIPS), 2023.
- [26] X. Sun, D. Cheng, Y. Li, N. Wang, D. Zhang, X. Gao, and J. Sun, "Progressive prompt-driven low-light image enhancement with frequency aware learning," IEEE Transactions on Multimedia, vol. 27, pp. 6620–6634, 2025.
- [27] A. Gu and T. Dao, "Mamba: Linear-time sequence modeling with selective state spaces," in Conf. Lang. Model. (COLM), 2024.
- [28] K. Jiang, J. Jiang, S. Wang, W. Ren, C.-W. Lin, and Z. Li, "Vdmamba: Vector decomposition in vision Mamba for image deraining and beyond," IEEE Transactions on Multimedia, pp. 1–13, 2026.
- [29] L. Zhu, B. Liao, Q. Zhang, X. Wang, W. Liu, and X. Wang, "Vision Mamba: Efficient visual representation learning with bidirectional state space model," in Int. Conf. Mach. Learn. (ICML), 2024, pp. 62429–62442.
- [30] Y. Liu, Y. Tian, Y. Zhao, H. Yu, L. Xie, Y. Wang, Q. Ye, J. Jiao, and Y. Liu, "VMamba: Visual state space model," in Adv. Neural Inform. Process. Syst. (NeurIPS), 2024.
- [31] H. Guo, J. Li, T. Dai, Z. Ouyang, X. Ren, and S.-T. Xia, "MambaIR: A simple baseline for image restoration with state-space model," in Eur. Conf. Comput. Vis. (ECCV), 2025.
- [32] Y.-C. Lin, Y.-S. Xu, H.-W. Chen, H.-K. Kuo, and C.-Y. Lee, "EAMamba: Efficient all-around vision state space model for image restoration," IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2025.
- [33] H. Guo, Y. Guo, Y. Zha, Y. Zhang, W. Li, T. Dai, S.-T. Xia, and Y. Li, "MambaIRv2: Attentive state space restoration," in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2025.
- [34] J. Dai, H. Qi, Y. Xiong, Y. Li, G. Zhang, H. Hu, and Y. Wei, "Deformable convolutional networks," in IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2017, pp. 764–773.
- [35] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, "Swin Transformer: Hierarchical vision transformer using shifted windows," IEEE/CVF Int. Conf. Comput. Vis. (ICCV), pp. 9992–10002, 2021.
- [36] Y. Zhou, Y. Li, H. Lin, H. Qiao et al., "Improving lens flare removal with general-purpose pipeline and multiple light sources recovery," in IEEE/CVF Int. Conf. Comput. Vis. (ICCV), 2023, pp. 12345–12354.
- [37] X. Zhou, D. Wang, and P. Krähenbühl, "Objects as points," in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2019, pp. 4843–4851.
- [38] H. Law and J. Deng, "CornerNet: Detecting objects as paired keypoints," Int. J. Comput. Vis. (IJCV), vol. 128, pp. 642–656, 2020.
- [39] L. Qu, Z. Liu, J. Pan, S. Zhou, J. Shi, D. Chen, and J. Yang, "FlareX: A physics-informed dataset for lens flare removal via 2D synthesis and 3D rendering," Adv. Neural Inform. Process. Syst. (NeurIPS), 2025.
- [40] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process. (TIP), vol. 13, no. 4, pp. 600–612, 2004.
- [41] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, "The unreasonable effectiveness of deep features as a perceptual metric," in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2018, pp. 586–595.
- [42] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in Int. Conf. Learn. Represent. (ICLR), 2015.
- [43] W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang, "Deep Laplacian pyramid networks for fast and accurate super-resolution," in IEEE/CVF Conf. Comput. Vis. Pattern Recog. (CVPR), 2017, pp. 624–632.
- [44] J. Johnson, A. Alahi, and L. Fei-Fei, "Perceptual losses for real-time style transfer and super-resolution," in Eur. Conf. Comput. Vis. (ECCV), 2016, pp. 694–711.