pith. machine review for the scientific record.

arxiv: 2604.09145 · v1 · submitted 2026-04-10 · 💻 cs.CV

Recognition: no theorem link

Deep Light Pollution Removal in Night Cityscape Photographs

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 17:42 UTC · model grok-4.3

classification 💻 cs.CV
keywords light pollution removal · nighttime image restoration · physically-based degradation model · deep learning · cityscape photography · skyglow · anisotropic light spread · image enhancement

The pith

A physically-based degradation model with anisotropic light spread and skyglow from invisible sources lets deep networks remove light pollution from night cityscape photos.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper sets out to restore the pristine appearance of night scenes in urban photographs by neutralizing the radiative effects of pervasive artificial lighting. It builds a degradation model that augments standard nighttime dehazing with two new physical elements: directional anisotropic spreading from visible light sources and skyglow generated by lights hidden behind skylines. To overcome scarce paired real data, the authors couple synthetic images with outputs from large generative models during training. Experiments indicate that the combined formulation and training strategy suppress halos and glow and recover washed-out star fields more effectively than earlier nighttime restoration techniques.

Core claim

The central claim is that a physically-based degradation model, extended with anisotropic spread of directional light sources and with skyglow from invisible surface lights behind skylines, together with a training strategy that couples large generative models with synthetic-real pairs, substantially reduces light pollution artifacts and recovers more authentic night luminance than prior nighttime restoration methods.

What carries the argument

The physically-based degradation model that adds anisotropic directional spread and hidden-source skyglow to nighttime dehazing, paired with synthetic-real coupling training.
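The abstract does not reproduce the formulation itself, so the following is only a hedged sketch of the shape such a model could take, grafting the two new terms onto the standard nighttime dehazing equation; the symbols (transmission t, airlight A, per-source kernels k, skyglow field S) are our placeholders, not the paper's notation.

```latex
% Hypothetical sketch, not the paper's own equations.
% First two terms: standard nighttime dehazing (attenuated scene + airlight).
% Last two terms: the paper's stated additions.
I(\mathbf{x}) =
  \underbrace{J(\mathbf{x})\,t(\mathbf{x}) + A(\mathbf{x})\bigl(1 - t(\mathbf{x})\bigr)}_{\text{nighttime dehazing}}
  + \underbrace{\sum_{i} \bigl(k_{\theta_i} * L_i\bigr)(\mathbf{x})}_{\text{anisotropic spread of visible sources}}
  + \underbrace{S(\mathbf{x})}_{\text{skyglow from hidden sources}}
```

Here each visible source L_i is blurred by a direction-dependent kernel k_{θ_i} (the anisotropy added over isotropic APSF-style glow), while S is a smooth field contributed by lights below or behind the skyline that never appear in the frame.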

If this is right

  • Night cityscape images can be processed to show natural dark-sky luminance without glow artifacts around streetlights.
  • Celestial objects and stars become visible again in urban photographs after processing.
  • The same framework can be applied to other long-range scattering problems in nighttime imaging.
  • Training data scarcity for light-pollution removal is mitigated by generative-model augmentation.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The approach may integrate into consumer camera night modes for automatic pollution correction.
  • Similar physically-based additions could improve daytime dehazing or fog removal pipelines.
  • Controlled experiments with calibrated light sources would directly test the accuracy of the anisotropic and skyglow terms.

Load-bearing premise

The added anisotropic spread and skyglow terms in the degradation model match real-world light pollution physics, and the synthetic-real training generalizes to unseen real night photographs.

What would settle it

Side-by-side comparison of the method's outputs against ground-truth pristine night photographs taken under controlled low-light conditions with known light sources would show whether halos, skyglow, and star visibility are restored as claimed.
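A minimal harness for that controlled test, assuming paired captures exist on disk; the file names, the [0, 1] data range, and the crude top-third sky mask are all hypothetical, and skimage's stock PSNR/SSIM stand in for whatever full-reference metrics the paper actually reports.

```python
# Sketch of the controlled evaluation described above: compare a
# restored night image against a pristine ground-truth capture.
# File names and the sky-region mask are hypothetical.
import numpy as np
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

restored = imread("restored.png").astype(np.float64) / 255.0
truth = imread("ground_truth.png").astype(np.float64) / 255.0

psnr = peak_signal_noise_ratio(truth, restored, data_range=1.0)
ssim = structural_similarity(truth, restored, channel_axis=-1, data_range=1.0)

# Skyglow shows up as a luminance offset over the sky; a crude probe is
# the mean brightness gap over a sky mask (here: top third of the frame).
sky = slice(0, truth.shape[0] // 3)
skyglow_residual = float(np.mean(restored[sky]) - np.mean(truth[sky]))

print(f"PSNR {psnr:.2f} dB  SSIM {ssim:.4f}  sky residual {skyglow_residual:+.4f}")
```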

Figures

Figures reproduced from arXiv: 2604.09145 by Baoqing Sun, Hao Wang, Xiaolin Wu, Xi Zhang.

Figure 1. Examples of light-polluted images and our restora…
Figure 2. Overview of the synthetic light-pollution generation pipeline used for dataset construction. From a clean image…
Figure 3. Examples of our Dataset. (a) Variations of one clean image with simulated light pollution under randomized APSF and…
Figure 4. Illustration of skyline-induced sky glow, where…
Figure 5. Overview of our pipeline. The polluted image…
Figure 6. Qualitative comparison of different methods. Specific tasks are marked in…
Figure 7. Backbone ablation showing better overexposed de…
Figure 8. Comparison with instruction-based image editing models, showing that task-specific fine-tuning is needed for severe…
Figure 9. Top-1 ranking rates for different methods.
Figure 10. Visualization of the ALSF construction process.
Figure 11. 3D bar chart illustrating the distribution of ranks…
Figure 12. Qualitative comparison of different methods. Specific tasks are marked in…
Figure 13. Qualitative comparison of different methods. Specific tasks are marked in…
Figure 14. Qualitative comparison of different methods. Specific tasks are marked in…
Figure 15. Qualitative comparison of different methods. Specific tasks are marked in…
Original abstract

Nighttime photography is severely degraded by light pollution induced by pervasive artificial lighting in urban environments. After long-range scattering and spatial diffusion, unwanted artificial light overwhelms natural night luminance, generates skyglow that washes out the view of stars and celestial objects, and produces halos and glow artifacts around light sources. Unlike nighttime dehazing, which aims to improve detail legibility through thick air, the objective of light pollution removal is to restore the pristine night appearance by neutralizing the radiative footprint of ground lighting. In this paper we introduce a physically-based degradation model that adds to the previous ones for nighttime dehazing two critical aspects: (i) anisotropic spread of directional light sources, and (ii) skyglow caused by invisible surface lights behind skylines. In addition, we construct a training strategy that leverages large generative models and synthetic-real coupling to compensate for the scarcity of paired real data and enhance generalization. Extensive experiments demonstrate that the proposed formulation and learning framework substantially reduce light pollution artifacts and better recover authentic night imagery than prior nighttime restoration methods.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it: the pith above is the substance; this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper introduces a physically-based degradation model for light pollution removal in night cityscape photographs. It extends prior nighttime dehazing models by adding anisotropic spread of directional light sources and skyglow caused by invisible surface lights behind skylines. To address the scarcity of paired real data, the authors propose a training strategy that leverages large generative models combined with synthetic-real coupling. The central claim is that this formulation and learning framework substantially reduces light pollution artifacts and recovers more authentic night imagery than existing nighttime restoration methods, as demonstrated by extensive experiments.

Significance. If the physical modeling choices and generalization strategy hold, the work could meaningfully advance computational photography for nighttime scenes by providing a more complete radiative model of urban light pollution. The synthetic-real coupling approach is a pragmatic response to data limitations and could be reusable in related low-light restoration tasks. However, the absence of any quantitative metrics, baselines, or error analysis in the abstract (and the lack of explicit validation for the added physical terms) makes it difficult to determine whether the claimed improvements represent a genuine advance or are limited to the synthetic distribution.

major comments (3)
  1. [Abstract] The abstract asserts that 'extensive experiments demonstrate that the proposed formulation and learning framework substantially reduce light pollution artifacts and better recover authentic night imagery than prior nighttime restoration methods,' yet supplies no quantitative metrics, baseline comparisons, error analysis, or data-split details. This directly undermines verification of the headline performance claim.
  2. [Degradation Model] The two added physical components, anisotropic spread of directional sources and skyglow from invisible surface lights, are load-bearing for the claim that the degradation model captures dominant real-world effects. No validation against measured radiance profiles, controlled real captures, or quantitative comparison to simpler isotropic models is referenced, leaving open the possibility that reported gains are artifacts of the synthetic data distribution rather than improved physics.
  3. [Training Strategy] The synthetic-real coupling training strategy is presented as the mechanism that enables generalization despite scarce paired real data. Without ablation studies isolating the contribution of this coupling (or explicit tests on held-out real photographs with ground-truth references), it is impossible to confirm that the strategy transfers beyond the generative-model distribution.
minor comments (2)
  1. [Abstract] The abstract would benefit from a single sentence summarizing the scale of the synthetic and real datasets used and the primary evaluation metrics (e.g., PSNR, SSIM, or perceptual scores).
  2. [Model Formulation] Notation for the new degradation terms (anisotropic spread function, skyglow intensity) should be introduced explicitly with equations in the main text to facilitate reproducibility.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address each major comment below and indicate the revisions we will make to strengthen the paper.

Point-by-point responses
  1. Referee: [Abstract] The abstract asserts that 'extensive experiments demonstrate that the proposed formulation and learning framework substantially reduce light pollution artifacts and better recover authentic night imagery than prior nighttime restoration methods,' yet supplies no quantitative metrics, baseline comparisons, error analysis, or data-split details. This directly undermines verification of the headline performance claim.

    Authors: We agree that the abstract should provide immediate quantitative support for the performance claims. In the revised version, we will update the abstract to include specific metrics (e.g., PSNR, SSIM, and LPIPS improvements averaged over the test sets) along with the number of images and baselines used, while keeping the abstract concise. revision: yes

  2. Referee: [Degradation Model] The two added physical components, anisotropic spread of directional sources and skyglow from invisible surface lights, are load-bearing for the claim that the degradation model captures dominant real-world effects. No validation against measured radiance profiles, controlled real captures, or quantitative comparison to simpler isotropic models is referenced, leaving open the possibility that reported gains are artifacts of the synthetic data distribution rather than improved physics.

    Authors: The manuscript demonstrates the impact of these components through visual results and comparisons in the experiments. However, we acknowledge the absence of direct quantitative validation against real radiance measurements. We will add ablation studies that compare the full model against isotropic and no-skyglow variants using both synthetic error metrics and real-image perceptual scores to better isolate the contribution of each physical term. revision: yes

  3. Referee: [Training Strategy] The synthetic-real coupling training strategy is presented as the mechanism that enables generalization despite scarce paired real data. Without ablation studies isolating the contribution of this coupling (or explicit tests on held-out real photographs with ground-truth references), it is impossible to confirm that the strategy transfers beyond the generative-model distribution.

    Authors: We will include new ablation experiments that train identical networks with and without the synthetic-real coupling to quantify its isolated effect on both synthetic and real test images. Since pixel-perfect ground truth is unavailable for real light-polluted scenes, we will additionally report no-reference metrics and results from a small-scale user study on held-out real photographs to support generalization claims. revision: yes
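For concreteness, a minimal PyTorch sketch of the with/without-coupling ablation promised in response 3 above; the tiny network, the random stand-in batches, and the coupling weight lambda_real are all hypothetical, and a real run would swap in the paper's backbone, its synthetic dataset, and the generative model's pseudo-clean outputs.

```python
# Hypothetical shape of a synthetic-real coupling ablation.
import torch
import torch.nn as nn

def make_model():
    # Toy restoration network standing in for the paper's backbone.
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))

def train(use_coupling: bool, steps: int = 100, lambda_real: float = 0.5):
    model = make_model()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    l1 = nn.L1Loss()
    for _ in range(steps):
        # Synthetic pair: clean image plus simulated pollution (stand-ins).
        syn_clean = torch.rand(4, 3, 64, 64)
        syn_polluted = syn_clean + 0.3 * torch.rand_like(syn_clean)
        loss = l1(model(syn_polluted), syn_clean)
        if use_coupling:
            # Real polluted photo paired with a generative-model
            # pseudo-clean target (random stand-ins for both here).
            real_polluted = torch.rand(4, 3, 64, 64)
            pseudo_clean = torch.rand(4, 3, 64, 64)
            loss = loss + lambda_real * l1(model(real_polluted), pseudo_clean)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

baseline = train(use_coupling=False)
coupled = train(use_coupling=True)  # evaluate both on held-out real photos
```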

Circularity Check

0 steps flagged

No circularity detected; derivation is self-contained

full rationale

The paper proposes a physically-based degradation model extending prior nighttime dehazing work by adding anisotropic spread of directional lights and skyglow from invisible surface lights, plus a training strategy using large generative models and synthetic-real coupling to address paired-data scarcity. The central claim of improved restoration is supported by extensive experiments on real night cityscape photographs compared against prior methods. No equations, definitions, or claims reduce the output to fitted parameters or self-citations by construction; the added physical terms are motivated externally rather than defined in terms of the target result, and the training approach does not derive its supervision targets from its own predictions. The approach remains independent of its own fitted values and is validated against external benchmarks.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

Based solely on the abstract, the central claim rests on the accuracy of the proposed physical degradation model and the effectiveness of the data synthesis strategy; specific free parameters in the model or network are not detailed.

free parameters (1)
  • scaling factors for light spread and skyglow intensity
    Likely present in the physically-based degradation model to fit directional and hidden-light effects; exact values and the fitting process are not specified in the abstract (a toy parameterization is sketched after this ledger).
axioms (1)
  • domain assumption: The physically-based degradation model with added anisotropic spread and skyglow accurately represents real nighttime light pollution in urban environments.
    Invoked as the foundation for the restoration objective and training.
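Since the abstract leaves these scaling factors unspecified, here is a purely hypothetical toy showing where such knobs could sit in a synthetic pollution generator like the pipeline of Figure 2; alpha, beta, the elongated-Gaussian stand-in for the APSF, and the vertical skyglow gradient are all our assumptions, not the paper's.

```python
# Hypothetical toy: the ledger's two scaling factors as explicit knobs.
# alpha scales glow around visible sources, beta scales skyglow.
import numpy as np
from scipy.ndimage import gaussian_filter

def pollute(clean, source_mask, alpha=0.8, beta=0.15, sigma=(3.0, 12.0)):
    """clean: HxW float image in [0, 1]; source_mask: HxW mask of lights."""
    # Anisotropic spread approximated by an elongated Gaussian kernel
    # (different sigma per axis); the paper's APSF/ALSF would replace this.
    glow = gaussian_filter(source_mask.astype(float), sigma=sigma)
    # Skyglow from hidden sources: a smooth vertical gradient in this toy,
    # brightening toward the skyline at the bottom of the frame.
    h = clean.shape[0]
    skyglow = np.linspace(0.0, 1.0, h)[:, None] * np.ones_like(clean)
    return np.clip(clean + alpha * glow + beta * skyglow, 0.0, 1.0)

rng = np.random.default_rng(0)
clean = rng.random((128, 128)) * 0.1              # dark night scene stand-in
mask = np.zeros((128, 128))
mask[100, 40] = 1.0                               # one streetlight
polluted = pollute(clean, mask)
```

Sweeping alpha and beta, much as Figure 3 indicates the authors randomize the APSF across variants of one clean image, is what would expose how sensitive the trained network is to these two free parameters.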

pith-pipeline@v0.9.0 · 5476 in / 1202 out tokens · 109705 ms · 2026-05-10T17:42:11.316283+00:00 · methodology

