Recognition: 2 theorem links
ZeroIDIR: Zero-Reference Illumination Degradation Image Restoration with Perturbed Consistency Diffusion Models
Pith reviewed 2026-05-13 01:41 UTC · model grok-4.3
The pith
A zero-reference diffusion framework restores illumination-degraded images by training only on low-quality examples through decoupled correction and reconstruction.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The authors establish that illumination degradation image restoration can be achieved without reference images. An adaptive gamma correction module, regularized by a histogram-guided loss, first generates an illumination-corrected representation; a perturbed consistency diffusion model, whose forward trajectory is constrained by a consistency loss, then recovers details and suppresses noise. The resulting outputs surpass other unsupervised methods and approach supervised performance on public benchmarks.
What carries the argument
The perturbed consistency diffusion model, which treats the gamma-corrected image as an intermediate noisy state and enforces consistency of the restored image's forward diffusion path with the perturbed input to guide unsupervised reconstruction.
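The "intermediate noisy state" idea can be read through the standard DDPM forward process, x_t = sqrt(abar_t)·x_0 + sqrt(1 - abar_t)·eps. The sketch below is a minimal NumPy illustration of that reading, not the paper's implementation; the linear beta schedule, the `match_timestep` rule, and the `signal_ratio` value are assumptions made for illustration.

```python
import numpy as np

# Standard DDPM forward process: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)          # assumed linear schedule
abar = np.cumprod(1.0 - betas)              # cumulative signal-retention product

def forward_diffuse(x0, t, rng):
    """Sample x_t from q(x_t | x_0)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps, eps

def match_timestep(signal_ratio):
    """Pick the timestep whose signal level sqrt(abar_t) best matches an
    estimated fraction of clean signal remaining (illustrative heuristic)."""
    return int(np.argmin(np.abs(np.sqrt(abar) - signal_ratio)))

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))
t_star = match_timestep(0.9)                # corrected image treated as "mildly noisy"
x_t, _ = forward_diffuse(x0, t_star, rng)
# Reverse diffusion would then start from the corrected image at t_star
# rather than from pure noise at t = T - 1.
```

Starting the reverse process partway along the trajectory is what makes the gamma-corrected image usable as an anchor rather than a conditioning signal.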
If this is right
- The method outperforms prior unsupervised competitors on standard benchmarks for illumination-degraded image restoration.
- Performance reaches levels comparable to supervised techniques that require paired clean and degraded training images.
- Generalization improves across varied scenes and degradation conditions compared with methods that rely on specific paired data.
- Training becomes possible using only collections of low-quality degraded images, eliminating the need for expensive paired datasets.
Where Pith is reading between the lines
- The same decoupling of a simple correction step from a consistency-constrained diffusion stage could be tested on other degradations such as blur or sensor noise.
- Real-world deployment in photography apps or surveillance systems might become cheaper because no paired clean data needs to be collected.
- Extreme low-light cases could expose whether the gamma correction step alone suffices or whether the diffusion stage must handle larger exposure gaps.
Load-bearing premise
The adaptive gamma correction plus histogram loss produces an intermediate image whose exposure is close enough to natural scenes that the subsequent diffusion process can add accurate details and remove noise without supervision.
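A minimal NumPy sketch of that premise, assuming a per-pixel gamma map and a KL objective between luminance histograms (the paper's exact L_hic, bin count, and natural-scene prior are not specified here; the Beta-distributed stand-in images are illustrative):

```python
import numpy as np

def gamma_correct(img, gamma_map):
    """Spatially varying gamma: each pixel gets its own exponent.
    img and gamma_map are same-shaped arrays with img in [0, 1]."""
    return np.clip(img, 1e-6, 1.0) ** gamma_map

def luminance_histogram(img, bins=64):
    """Normalized luminance histogram, used as a discrete distribution."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist.astype(np.float64) + 1e-8      # smoothing avoids log(0)
    return p / p.sum()

def kl_divergence(p, q):
    """D_KL(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

# Toy check: an underexposed stand-in image, brightened with gamma < 1,
# moves closer (in KL) to a mid-exposure "natural scene" prior.
rng = np.random.default_rng(0)
dark = np.clip(rng.beta(2.0, 8.0, size=(64, 64)), 0.0, 1.0)    # low-light stand-in
prior = luminance_histogram(np.clip(rng.beta(4.0, 4.0, size=(256, 256)), 0.0, 1.0))
corrected = gamma_correct(dark, np.full_like(dark, 0.5))       # brighten
loss_before = kl_divergence(luminance_histogram(dark), prior)
loss_after = kl_divergence(luminance_histogram(corrected), prior)
assert loss_after < loss_before
```

If the premise fails, the corrected histogram would sit far from the prior even after optimization, leaving the diffusion stage an exposure gap it was never trained to close.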
What would settle it
If applying the full method to a new collection of real low-light photos produces no measurable improvement in detail recovery or noise levels over the gamma-corrected image alone, or if artifacts appear that are absent in supervised baselines, the central claim would be undermined.
Original abstract
In this paper, we propose a zero-reference diffusion-based framework, named ZeroIDIR, for illumination degradation image restoration, which decouples the restoration process into adaptive illumination correction and diffusion-based reconstruction while being trained solely on low-quality degraded images. Specifically, we design an adaptive gamma correction module that performs spatially varying exposure correction to generate illumination-corrected only representations to mitigate exposure bias and serve as reliable inputs for subsequent diffusion processes, where a histogram-guided illumination correction loss is introduced to regularize the corrected illumination distribution toward that of natural scenes. Subsequently, the illumination-corrected image is treated as an intermediate noisy state for the proposed perturbed consistency diffusion model to reconstruct details and suppress noise. Moreover, a perturbed diffusion consistency loss is proposed to constrain the forward diffusion trajectory of the final restored image to remain consistent with the perturbed state, thus improving restoration fidelity and stability in the absence of supervision. Extensive experiments on publicly available benchmarks show that the proposed method outperforms state-of-the-art unsupervised competitors and is comparable to supervised methods while being more generalizable to various scenes. Code is available at https://github.com/JianghaiSCU/ZeroIDIR.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes ZeroIDIR, a zero-reference diffusion-based framework for illumination degradation image restoration. It decouples restoration into an adaptive gamma correction module (with a histogram-guided loss regularizing corrected illumination toward natural-scene distributions) followed by a perturbed consistency diffusion model that treats the corrected image as an intermediate noisy state for detail reconstruction and noise suppression. All components are trained solely on low-quality degraded images, and the experiments claim to outperform unsupervised state-of-the-art methods, match supervised methods on public benchmarks, and generalize better across scenes.
Significance. If the zero-reference premise holds without implicit external supervision, the combination of spatially adaptive correction and perturbed consistency losses could advance practical unsupervised restoration pipelines for illumination-degraded images, where clean references are unavailable. Code release aids reproducibility and allows verification of the diffusion trajectory constraints.
Major comments (1)
- [Abstract / loss formulation] Abstract and method description: The histogram-guided illumination correction loss is introduced to regularize the corrected illumination distribution 'toward that of natural scenes,' yet the framework is repeatedly described as 'trained solely on low-quality degraded images.' Clarify the exact source and computation of the target histogram (e.g., precomputed from external clean images, statistics from the degraded training set, or otherwise); if external data is used, this introduces implicit supervision that directly conflicts with the zero-reference claim and the generalizability argument.
Minor comments (2)
- The description of how the illumination-corrected image is injected as an 'intermediate noisy state' into the perturbed consistency diffusion process would benefit from an explicit equation or diagram showing the forward diffusion trajectory and the form of the perturbed consistency loss.
- Minor notation: ensure consistent use of symbols for the adaptive gamma correction parameters and the perturbation schedule across the method and loss sections.
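For the first minor comment, one concrete instantiation would be to diffuse the restored image forward to the perturbed state's timestep with a shared noise sample and penalize the L2 gap. The sketch below shows what such a perturbed consistency loss could look like; the L2 form, the schedule, and the perturbation construction are assumptions, not the paper's actual formulation.

```python
import numpy as np

T = 1000
abar = np.cumprod(1.0 - np.linspace(1e-4, 0.02, T))

def perturbed_consistency_loss(x_restored, x_perturbed, t, eps):
    """Assumed form of the constraint: push the restored image forward to
    timestep t with a shared noise sample and penalize the L2 distance to
    the perturbed state."""
    x_t = np.sqrt(abar[t]) * x_restored + np.sqrt(1.0 - abar[t]) * eps
    return float(np.mean((x_t - x_perturbed) ** 2))

rng = np.random.default_rng(0)
x_clean = rng.standard_normal((8, 8))
eps = rng.standard_normal((8, 8))
t = 200
# A perturbed state built from x_clean itself is reproduced exactly,
# so the loss is zero by construction...
x_p = np.sqrt(abar[t]) * x_clean + np.sqrt(1.0 - abar[t]) * eps
loss_self = perturbed_consistency_loss(x_clean, x_p, t, eps)
# ...while an unrelated restoration is penalized.
loss_other = perturbed_consistency_loss(rng.standard_normal((8, 8)), x_p, t, eps)
assert loss_self < 1e-12 < loss_other
```

Sharing the noise sample between the perturbation and the forward pass is what keeps the constraint on the trajectory rather than on marginal noise statistics.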
Simulated Author's Rebuttal
We thank the referee for the careful reading and for highlighting the potential ambiguity between the zero-reference training claim and the histogram-guided loss. We address the comment directly below and will revise the manuscript for clarity.
Point-by-point responses
Referee: [Abstract / loss formulation] Abstract and method description: The histogram-guided illumination correction loss is introduced to regularize the corrected illumination distribution 'toward that of natural scenes,' yet the framework is repeatedly described as 'trained solely on low-quality degraded images.' Clarify the exact source and computation of the target histogram (e.g., precomputed from external clean images, statistics from the degraded training set, or otherwise); if external data is used, this introduces implicit supervision that directly conflicts with the zero-reference claim and the generalizability argument.
Authors: The target histogram is a fixed prior computed as the mean histogram over a large collection of natural-scene images drawn from public datasets (e.g., ImageNet) that are completely disjoint from the degraded training and test sets used in our experiments. This prior is precomputed once, never updated during training, and applied uniformly to all inputs. No paired clean reference images corresponding to any degraded input are ever used; the loss simply encourages the histogram of the adaptively gamma-corrected image to align with this general natural-scene distribution. We therefore maintain that the model is trained solely on low-quality degraded images. That said, we agree that the current wording is imprecise and could be read as implying external supervision. We will revise the abstract, Section 3.2, and the discussion of zero-reference training to explicitly state the source and computation of the target histogram, to note that it constitutes a fixed, dataset-independent prior, and to qualify the generalizability claim accordingly. These changes will be textual only and will not alter any results or the method itself.
Revision: yes
Circularity Check
No significant circularity; independent design choices and external priors.
Full rationale
The paper's framework decouples adaptive gamma correction, histogram-guided loss (regularizing toward natural scene distributions), and perturbed consistency diffusion loss as separately motivated components for zero-reference training on degraded images. No equations or steps reduce by construction to their inputs (e.g., no fitted parameter renamed as prediction, no self-definitional loop where output defines the loss target). No load-bearing self-citations, uniqueness theorems from authors, or ansatz smuggling are present in the provided description. The natural-scene histogram is an external prior, not a circular self-reference, and experimental claims rest on benchmarks rather than tautological derivation. This is a standard low-circularity case with self-contained assumptions.
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean, theorem washburn_uniqueness_aczel (tag: unclear)
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Linked passage: "histogram-guided illumination correction loss L_hic = D_KL(H(L'_d) || H_prior) ... empirical prior distribution estimated from real-world well-illuminated images"
- IndisputableMonolith/Foundation/ArithmeticFromLogic.lean, theorem embed_strictMono_of_one_lt (tag: unclear)
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Linked passage: "perturbed diffusion consistency loss L_pdc ... constrain the forward diffusion trajectory"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Discussion (0)