Deep Light Pollution Removal in Night Cityscape Photographs
Pith reviewed 2026-05-10 17:42 UTC · model grok-4.3
The pith
A physically-based degradation model with anisotropic light spread and skyglow from invisible sources lets deep networks remove light pollution from night cityscape photos.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that a physically-based degradation model adding anisotropic spread of directional light sources and skyglow from invisible surface lights behind skylines, together with a training strategy that couples large generative models and synthetic-real pairs, substantially reduces light pollution artifacts and recovers more authentic night luminance than prior nighttime restoration methods.
What carries the argument
The physically-based degradation model that adds anisotropic directional spread and hidden-source skyglow to nighttime dehazing, paired with synthetic-real coupling training.
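A minimal numpy sketch of a forward model in this spirit may help fix ideas. The elongated-Gaussian glow kernel, the additive skyglow field, and all parameter names below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def anisotropic_kernel(size, sigma_major, sigma_minor, theta):
    """Elongated Gaussian approximating anisotropic spread of a directional
    source; the Gaussian form is an assumption, not the paper's exact kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    k = np.exp(-(xr ** 2 / (2 * sigma_major ** 2) + yr ** 2 / (2 * sigma_minor ** 2)))
    return k / k.sum()

def conv2_same(img, kernel):
    """Same-size FFT convolution (circular boundary -- fine for a sketch)."""
    kh, kw = kernel.shape
    padded = np.zeros_like(img)
    padded[:kh, :kw] = kernel
    padded = np.roll(padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded)))

def degrade(J, t, A, light_map, kernel, skyglow):
    """Hypothetical forward model: classic dehazing terms J*t + A*(1 - t),
    plus anisotropic glow from visible sources and an additive skyglow field."""
    glow = conv2_same(light_map, kernel)
    return J * t + A * (1.0 - t) + glow + skyglow
```

With full transmission and no sources the observation reduces to the clean scene `J`, so the two added terms isolate exactly the pollution components the model is meant to explain.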
If this is right
- Night cityscape images can be processed to show natural dark-sky luminance without glow artifacts around streetlights.
- Celestial objects and stars become visible again in urban photographs after processing.
- The same framework can be applied to other long-range scattering problems in nighttime imaging.
- Training data scarcity for light-pollution removal is mitigated by generative-model augmentation.
Where Pith is reading between the lines
- The approach may integrate into consumer camera night modes for automatic pollution correction.
- Similar physically-based additions could improve daytime dehazing or fog removal pipelines.
- Controlled experiments with calibrated light sources would directly test the accuracy of the anisotropic and skyglow terms.
Load-bearing premise
The added anisotropic spread and skyglow terms in the degradation model match real-world light pollution physics and the synthetic-real training generalizes to unseen real night photographs.
What would settle it
Side-by-side comparison of the method's outputs against ground-truth pristine night photographs taken under controlled low-light conditions with known light sources would show whether halos, skyglow, and star visibility are restored as claimed.
Original abstract
Nighttime photography is severely degraded by light pollution induced by pervasive artificial lighting in urban environments. After long-range scattering and spatial diffusion, unwanted artificial light overwhelms natural night luminance, generates skyglow that washes out the view of stars and celestial objects, and produces halos and glow artifacts around light sources. Unlike nighttime dehazing, which aims to improve detail legibility through thick air, the objective of light pollution removal is to restore the pristine night appearance by neutralizing the radiative footprint of ground lighting. In this paper we introduce a physically-based degradation model that adds two critical aspects to previous nighttime-dehazing models: (i) anisotropic spread of directional light sources, and (ii) skyglow caused by invisible surface lights behind skylines. In addition, we construct a training strategy that leverages large generative models and synthetic-real coupling to compensate for the scarcity of paired real data and enhance generalization. Extensive experiments demonstrate that the proposed formulation and learning framework substantially reduce light pollution artifacts and better recover authentic night imagery than prior nighttime restoration methods.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces a physically-based degradation model for light pollution removal in night cityscape photographs. It extends prior nighttime dehazing models by adding anisotropic spread of directional light sources and skyglow caused by invisible surface lights behind skylines. To address the scarcity of paired real data, the authors propose a training strategy that leverages large generative models combined with synthetic-real coupling. The central claim is that this formulation and learning framework substantially reduces light pollution artifacts and recovers more authentic night imagery than existing nighttime restoration methods, as demonstrated by extensive experiments.
Significance. If the physical modeling choices and generalization strategy hold, the work could meaningfully advance computational photography for nighttime scenes by providing a more complete radiative model of urban light pollution. The synthetic-real coupling approach is a pragmatic response to data limitations and could be reusable in related low-light restoration tasks. However, the absence of any quantitative metrics, baselines, or error analysis in the abstract (and the lack of explicit validation for the added physical terms) makes it difficult to determine whether the claimed improvements represent a genuine advance or are limited to the synthetic distribution.
major comments (3)
- [Abstract] The abstract asserts that 'extensive experiments demonstrate that the proposed formulation and learning framework substantially reduce light pollution artifacts and better recover authentic night imagery than prior nighttime restoration methods,' yet supplies no quantitative metrics, baseline comparisons, error analysis, or data-split details. This directly undermines verification of the headline performance claim.
- [Degradation Model] The two added physical components—anisotropic spread of directional sources and skyglow from invisible surface lights—are load-bearing for the claim that the degradation model captures dominant real-world effects. No validation against measured radiance profiles, controlled real captures, or quantitative comparison to simpler isotropic models is referenced, leaving open the possibility that reported gains are artifacts of the synthetic data distribution rather than improved physics.
- [Training Strategy] The synthetic-real coupling training strategy is presented as the mechanism that enables generalization despite scarce paired real data. Without ablation studies isolating the contribution of this coupling (or explicit tests on held-out real photographs with ground-truth references), it is impossible to confirm that the strategy transfers beyond the generative-model distribution.
minor comments (2)
- [Abstract] The abstract would benefit from a single sentence summarizing the scale of the synthetic and real datasets used and the primary evaluation metrics (e.g., PSNR, SSIM, or perceptual scores).
- [Model Formulation] Notation for the new degradation terms (anisotropic spread function, skyglow intensity) should be introduced explicitly with equations in the main text to facilitate reproducibility.
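The full-reference metrics suggested above can be made concrete. Below is a minimal numpy sketch: PSNR as standardly defined, and a single-window SSIM that stands in for the usual sliding-window version (a simplification for illustration, not a claim about the paper's evaluation code):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """SSIM computed over the whole image in a single window -- a coarse
    approximation of the standard 11x11 sliding-window SSIM."""
    C1, C2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))
```

For identical images PSNR is infinite and SSIM is 1, which makes both easy to sanity-check before running a full evaluation.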
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. We address each major comment below and indicate the revisions we will make to strengthen the paper.
Point-by-point responses
Referee: [Abstract] The abstract asserts that 'extensive experiments demonstrate that the proposed formulation and learning framework substantially reduce light pollution artifacts and better recover authentic night imagery than prior nighttime restoration methods,' yet supplies no quantitative metrics, baseline comparisons, error analysis, or data-split details. This directly undermines verification of the headline performance claim.
Authors: We agree that the abstract should provide immediate quantitative support for the performance claims. In the revised version, we will update the abstract to include specific metrics (e.g., PSNR, SSIM, and LPIPS improvements averaged over the test sets) along with the number of images and baselines used, while keeping the abstract concise. revision: yes
Referee: [Degradation Model] The two added physical components—anisotropic spread of directional sources and skyglow from invisible surface lights—are load-bearing for the claim that the degradation model captures dominant real-world effects. No validation against measured radiance profiles, controlled real captures, or quantitative comparison to simpler isotropic models is referenced, leaving open the possibility that reported gains are artifacts of the synthetic data distribution rather than improved physics.
Authors: The manuscript demonstrates the impact of these components through visual results and comparisons in the experiments. However, we acknowledge the absence of direct quantitative validation against real radiance measurements. We will add ablation studies that compare the full model against isotropic and no-skyglow variants using both synthetic error metrics and real-image perceptual scores to better isolate the contribution of each physical term. revision: yes
Referee: [Training Strategy] The synthetic-real coupling training strategy is presented as the mechanism that enables generalization despite scarce paired real data. Without ablation studies isolating the contribution of this coupling (or explicit tests on held-out real photographs with ground-truth references), it is impossible to confirm that the strategy transfers beyond the generative-model distribution.
Authors: We will include new ablation experiments that train identical networks with and without the synthetic-real coupling to quantify its isolated effect on both synthetic and real test images. Since pixel-perfect ground truth is unavailable for real light-polluted scenes, we will additionally report no-reference metrics and results from a small-scale user study on held-out real photographs to support generalization claims. revision: yes
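The promised ablations amount to running one shared evaluation routine over a small grid of model variants. A toy harness in that shape (variant names and the stand-in scoring function are hypothetical):

```python
# Hypothetical ablation harness: same evaluation routine, one row per variant.
variants = {
    "full":       {"anisotropic": True,  "skyglow": True},
    "isotropic":  {"anisotropic": False, "skyglow": True},
    "no_skyglow": {"anisotropic": True,  "skyglow": False},
}

def run_ablation(variants, evaluate):
    """Return {variant_name: score}; `evaluate` trains and tests one config."""
    return {name: evaluate(cfg) for name, cfg in variants.items()}

# Toy stand-in for a real train-and-evaluate routine.
scores = run_ablation(variants, lambda cfg: sum(cfg.values()))
```

Keeping the evaluation routine fixed across variants is what makes the resulting score differences attributable to the toggled physical terms.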
Circularity Check
No circularity detected; derivation is self-contained
Full rationale
The paper proposes a physically-based degradation model extending prior nighttime dehazing work by adding anisotropic spread of directional lights and skyglow from invisible surface lights, plus a training strategy using large generative models and synthetic-real coupling to address paired data scarcity. The central claim of improved restoration is supported by extensive experiments on real night cityscape photographs compared against prior methods. No equations, definitions, or claims reduce the output to fitted parameters or self-citations by construction; the added physical terms are motivated externally rather than defined in terms of the target result, and the training approach does not rename or force predictions from inputs. The approach remains independent of its own fitted values and is validated against external benchmarks.
Axiom & Free-Parameter Ledger
free parameters (1)
- scaling factors for light spread and skyglow intensity
axioms (1)
- domain assumption: The physically-based degradation model with added anisotropic spread and skyglow accurately represents real nighttime light pollution in urban environments.