Pith · machine review for the scientific record

arxiv: 2605.11435 · v1 · submitted 2026-05-12 · 💻 cs.CV

Recognition: 2 Lean theorem links

ZeroIDIR: Zero-Reference Illumination Degradation Image Restoration with Perturbed Consistency Diffusion Models

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 01:41 UTC · model grok-4.3

classification 💻 cs.CV
keywords zero-reference · image restoration · diffusion models · low-light enhancement · unsupervised learning · illumination correction · consistency loss

The pith

A zero-reference diffusion framework restores illumination-degraded images by training only on low-quality examples through decoupled correction and reconstruction.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces ZeroIDIR, a method that restores images suffering from poor lighting or exposure problems without any clean reference images or paired training data. It first applies an adaptive gamma correction module to adjust exposure in a spatially varying way, using a histogram-guided loss to align the brightness distribution with typical natural scenes. The corrected image then serves as the starting point for a perturbed consistency diffusion model that reconstructs fine details and suppresses noise while a dedicated consistency loss keeps the output trajectory aligned with the input state. This separation lets the entire system learn from collections of degraded photos alone. If the approach holds, restoration becomes feasible in settings where ground-truth clean images are unavailable or expensive to obtain.
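The correction stage described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: in ZeroIDIR the per-pixel gamma map would be predicted by a learned module, and the bin count, distance metric, and target source of the histogram loss are assumptions here.

```python
import numpy as np

def adaptive_gamma_correction(img, gamma_map):
    # Spatially varying exposure correction: each pixel is raised to its own
    # exponent. img and gamma_map are float arrays of shape (H, W), img in [0, 1].
    return np.clip(img, 1e-6, 1.0) ** gamma_map

def histogram_guided_loss(corrected, target_hist, bins=32):
    # L1 distance between the corrected image's normalized intensity histogram
    # and a fixed target histogram (e.g. averaged over natural scenes).
    hist, _ = np.histogram(corrected, bins=bins, range=(0.0, 1.0))
    hist = hist / hist.sum()
    return float(np.abs(hist - target_hist).sum())
```

A gamma below 1 brightens dark regions (for example, 0.25 ** 0.5 = 0.5), so a learned gamma map can lift underexposed areas while leaving well-exposed ones nearly unchanged.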

Core claim

The authors claim that illumination degradation image restoration can be achieved without reference images: an adaptive gamma correction module, regularized by a histogram-guided loss, first generates an illumination-corrected representation, and a perturbed consistency diffusion model, whose forward trajectory is constrained by a consistency loss, then recovers details and suppresses noise. The resulting outputs are reported to surpass other unsupervised methods and approach supervised performance on public benchmarks.

What carries the argument

The perturbed consistency diffusion model, which treats the gamma-corrected image as an intermediate noisy state and enforces consistency of the restored image's forward diffusion path with the perturbed input to guide unsupervised reconstruction.
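The trajectory constraint can be sketched as follows; the symbols and the L2 form are assumptions, since the paper's exact perturbed consistency loss is not reproduced here. The idea is that forward-diffusing the restored estimate to timestep t should land near the perturbed (gamma-corrected) state at that timestep.

```python
import numpy as np

def forward_diffuse(x0, alpha_bar_t, noise):
    # Standard DDPM forward-process sample:
    # x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * noise

def perturbed_consistency_loss(restored, perturbed_state, alpha_bar_t, noise):
    # Penalize (in L2) the gap between the restored image's forward-diffused
    # state and the perturbed input state at the same timestep.
    x_t = forward_diffuse(restored, alpha_bar_t, noise)
    return float(np.mean((x_t - perturbed_state) ** 2))
```

When the restored image equals the clean signal underlying the perturbed state (same noise draw), the loss is zero, which is what anchors the output trajectory to the input.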

If this is right

  • The method outperforms prior unsupervised competitors on standard benchmarks for illumination-degraded image restoration.
  • Performance reaches levels comparable to supervised techniques that require paired clean and degraded training data.
  • Generalization improves across varied scenes and degradation conditions compared with methods that rely on specific paired data.
  • Training becomes possible using only collections of low-quality degraded images, eliminating the need for expensive paired datasets.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same decoupling of a simple correction step from a consistency-constrained diffusion stage could be tested on other degradations such as blur or sensor noise.
  • Real-world deployment in photography apps or surveillance systems might become cheaper because no paired clean data needs to be collected.
  • Extreme low-light cases could expose whether the gamma correction step alone suffices or whether the diffusion stage must handle larger exposure gaps.

Load-bearing premise

The adaptive gamma correction plus histogram loss produces an intermediate image whose exposure is close enough to natural scenes that the subsequent diffusion process can add accurate details and remove noise without supervision.

What would settle it

If applying the full method to a new collection of real low-light photos produces no measurable improvement in detail recovery or noise levels over the gamma-corrected image alone, or if artifacts appear that are absent in supervised baselines, the central claim would be undermined.
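Such a check would use a no-reference quality metric (NIQE is the standard choice) on both the gamma-corrected intermediate and the full pipeline's output. As a purely illustrative stand-in, an assumption and not the paper's protocol, a patchwise-variance noise proxy shows the shape of the comparison:

```python
import numpy as np

def local_variance_noise_proxy(img, patch=8):
    # Crude no-reference noise proxy: median variance over non-overlapping
    # patches. Flat regions of a denoised image should score lower than the
    # same regions of a noisy one.
    h, w = img.shape
    variances = [img[i:i + patch, j:j + patch].var()
                 for i in range(0, h - patch + 1, patch)
                 for j in range(0, w - patch + 1, patch)]
    return float(np.median(variances))
```

If the diffusion stage genuinely suppresses noise, this proxy (or NIQE) should improve from the gamma-corrected image to the final output; no measurable improvement would support the failure mode described above.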

Figures

Figures reproduced from arXiv: 2605.11435 by Bing Zeng, Hai Jiang, Shuaicheng Liu, Songchen Han, Yinjie Lei, Zhen Liu.

Figure 1. Visual comparisons of our method with recent state-of-the-art supervised IDIR methods UHDFour […]

Figure 2. The overall pipeline of our proposed ZeroIDIR framework. We first perform Retinex decomposition on the illumination-degraded […]

Figure 3. The illumination histogram distributions derived from […]

Figure 4. Qualitative comparison on the LSRW [18] (row 1), BAID [49] (row 2), and SICE [4] (row 3) test sets for low-light image enhancement (LLIE), backlit image enhancement (BIE), and multiple exposure correction (MEC), respectively. For comparisons, we adopt the released weights pre-trained on the MSEC dataset of supervised methods for evaluation.

Figure 6. Visual results of the ablation study about our PCDM.
Original abstract

In this paper, we propose a zero-reference diffusion-based framework, named ZeroIDIR, for illumination degradation image restoration, which decouples the restoration process into adaptive illumination correction and diffusion-based reconstruction while being trained solely on low-quality degraded images. Specifically, we design an adaptive gamma correction module that performs spatially varying exposure correction to generate illumination-corrected representations that mitigate exposure bias and serve as reliable inputs for subsequent diffusion processes, where a histogram-guided illumination correction loss is introduced to regularize the corrected illumination distribution toward that of natural scenes. Subsequently, the illumination-corrected image is treated as an intermediate noisy state for the proposed perturbed consistency diffusion model to reconstruct details and suppress noise. Moreover, a perturbed diffusion consistency loss is proposed to constrain the forward diffusion trajectory of the final restored image to remain consistent with the perturbed state, thus improving restoration fidelity and stability in the absence of supervision. Extensive experiments on publicly available benchmarks show that the proposed method outperforms state-of-the-art unsupervised competitors and is comparable to supervised methods while being more generalizable to various scenes. Code is available at https://github.com/JianghaiSCU/ZeroIDIR.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper proposes ZeroIDIR, a zero-reference diffusion-based framework for illumination degradation image restoration. It decouples restoration into an adaptive gamma correction module (with histogram-guided loss regularizing corrected illumination toward natural scene distributions) followed by a perturbed consistency diffusion model that treats the corrected image as an intermediate noisy state for detail reconstruction and noise suppression. All components are trained solely on low-quality degraded images, with experiments claiming outperformance over unsupervised SOTA and comparability to supervised methods on public benchmarks plus better generalizability.

Significance. If the zero-reference premise holds without implicit external supervision, the combination of spatially adaptive correction and perturbed consistency losses could advance practical unsupervised restoration pipelines for illumination-degraded images, where clean references are unavailable. Code release aids reproducibility and allows verification of the diffusion trajectory constraints.

major comments (1)
  1. [Abstract / loss formulation] Abstract and method description: The histogram-guided illumination correction loss is introduced to regularize the corrected illumination distribution 'toward that of natural scenes,' yet the framework is repeatedly described as 'trained solely on low-quality degraded images.' Clarify the exact source and computation of the target histogram (e.g., precomputed from external clean images, statistics from the degraded training set, or otherwise); if external data is used, this introduces implicit supervision that directly conflicts with the zero-reference claim and the generalizability argument.
minor comments (2)
  1. The description of how the illumination-corrected image is injected as an 'intermediate noisy state' into the perturbed consistency diffusion process would benefit from an explicit equation or diagram showing the forward diffusion trajectory and the form of the perturbed consistency loss.
  2. Minor notation: ensure consistent use of symbols for the adaptive gamma correction parameters and the perturbation schedule across the method and loss sections.
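For reference, the DDPM forward trajectory that minor comment 1 asks to see written out is standard; one plausible form of the perturbed consistency loss built on it would be the following (a hedged reconstruction, not taken from the paper):

```latex
% Standard DDPM forward process (Ho et al., 2020):
q(x_t \mid x_0) = \mathcal{N}\!\big(x_t;\ \sqrt{\bar{\alpha}_t}\,x_0,\ (1-\bar{\alpha}_t)\mathbf{I}\big),
\qquad \bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s .

% A plausible perturbed-consistency constraint (assumed form, not the paper's):
% \hat{x}_0 is the restored image; \tilde{x}_t is the perturbed
% (gamma-corrected) intermediate state at timestep t.
\mathcal{L}_{\mathrm{pdc}} = \mathbb{E}_{t,\epsilon}
\Big\| \sqrt{\bar{\alpha}_t}\,\hat{x}_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon - \tilde{x}_t \Big\|_2^2 .
```

Whatever the exact form, making it explicit in the manuscript would resolve the comment.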

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for the careful reading and for highlighting the potential ambiguity between the zero-reference training claim and the histogram-guided loss. We address the comment directly below and will revise the manuscript for clarity.

Point-by-point responses
  1. Referee: [Abstract / loss formulation] Abstract and method description: The histogram-guided illumination correction loss is introduced to regularize the corrected illumination distribution 'toward that of natural scenes,' yet the framework is repeatedly described as 'trained solely on low-quality degraded images.' Clarify the exact source and computation of the target histogram (e.g., precomputed from external clean images, statistics from the degraded training set, or otherwise); if external data is used, this introduces implicit supervision that directly conflicts with the zero-reference claim and the generalizability argument.

    Authors: The target histogram is a fixed prior computed as the mean histogram over a large collection of natural-scene images drawn from public datasets (e.g., ImageNet) that are completely disjoint from the degraded training and test sets used in our experiments. This prior is precomputed once, never updated during training, and applied uniformly to all inputs. No paired clean reference images corresponding to any degraded input are ever used; the loss simply encourages the histogram of the adaptively gamma-corrected image to align with this general natural-scene distribution. We therefore maintain that the model is trained solely on low-quality degraded images. That said, we agree that the current wording is imprecise and could be read as implying external supervision. We will revise the abstract, Section 3.2, and the discussion of zero-reference training to explicitly state the source and computation of the target histogram, to note that it constitutes a fixed, dataset-independent prior, and to qualify the generalizability claim accordingly. These changes will be textual only and will not alter any results or the method itself.

    Revision planned: yes
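The prior described in this response is straightforward to make concrete. A minimal sketch, assuming the prior is a mean normalized intensity histogram over an external natural-image collection (the dataset choice and bin count are assumptions):

```python
import numpy as np

def natural_scene_histogram_prior(images, bins=32):
    # Fixed target histogram: mean normalized intensity histogram over a
    # collection of natural-scene images. Computed once before training and
    # never updated, so it acts as a dataset-independent prior rather than
    # per-image supervision.
    acc = np.zeros(bins)
    for img in images:  # each img: float array with values in [0, 1]
        h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
        acc += h / h.sum()
    return acc / len(images)
```

Because no histogram is tied to any particular degraded input, this keeps the training loop free of paired references while still importing a weak statistical prior.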

Circularity Check

0 steps flagged

No significant circularity; independent design choices and external priors

Full rationale

The paper's framework decouples adaptive gamma correction, histogram-guided loss (regularizing toward natural scene distributions), and perturbed consistency diffusion loss as separately motivated components for zero-reference training on degraded images. No equations or steps reduce by construction to their inputs (e.g., no fitted parameter renamed as prediction, no self-definitional loop where output defines the loss target). No load-bearing self-citations, uniqueness theorems from authors, or ansatz smuggling are present in the provided description. The natural-scene histogram is an external prior, not a circular self-reference, and experimental claims rest on benchmarks rather than tautological derivation. This is a standard low-circularity case with self-contained assumptions.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The central claim rests on the empirical effectiveness of the introduced adaptive gamma module, histogram-guided loss, and perturbed consistency loss; these are design choices rather than derived quantities. Without the full manuscript, specific free parameters (such as loss weights, diffusion schedule parameters, or gamma adaptation factors) and any domain assumptions about natural image histograms cannot be enumerated.

pith-pipeline@v0.9.0 · 5512 in / 1306 out tokens · 43384 ms · 2026-05-13T01:41:27.115754+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?

  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

85 extracted references · 85 canonical work pages · 2 internal anchors

  1. [1]

    Learning multi-scale photo exposure correction

    Mahmoud Afifi, Konstantinos G Derpanis, Bjorn Ommer, and Michael S Brown. Learning multi-scale photo exposure correction. InCVPR, pages 9157–9167, 2021. 3, 4, 5, 6, 7

  2. [2]

    Academic press, 2010

    Alan C Bovik.Handbook of image and video processing. Academic press, 2010. 2

  3. [3]

    Learning photographic global tonal adjustment with a database of input/output image pairs

    Vladimir Bychkovsky, Sylvain Paris, Eric Chan, and Fr ´edo Durand. Learning photographic global tonal adjustment with a database of input/output image pairs. InCVPR, pages 97– 104, 2011. 4, 5, 6

  4. [4]

    Learning a deep single image contrast enhancer from multi-exposure images

    Jianrui Cai, Shuhang Gu, and Lei Zhang. Learning a deep single image contrast enhancer from multi-exposure images. IEEE TIP, 27(4):2049–2062, 2018. 4, 5, 7, 8

  5. [5]

    Retinexformer: One-stage retinex- based transformer for low-light image enhancement

    Yuanhao Cai, Hao Bian, Jing Lin, Haoqian Wang, Radu Tim- ofte, and Yulun Zhang. Retinexformer: One-stage retinex- based transformer for low-light image enhancement. In ICCV, pages 12504–12513, 2023. 2

  6. [6]

    Over- exposure correction via exposure and scene information dis- entanglement

    Yuhui Cao, Yurui Ren, Thomas H Li, and Ge Li. Over- exposure correction via exposure and scene information dis- entanglement. InACCV, 2020. 3

  7. [7]

    Anlightendiff: Anchoring diffusion probabilis- tic model on low light image enhancement.IEEE TIP, 2024

    Cheuk-Yiu Chan, Wan-Chi Siu, Yuk-Hee Chan, and H An- thony Chan. Anlightendiff: Anchoring diffusion probabilis- tic model on low light image enhancement.IEEE TIP, 2024. 1, 6, 8

  8. [8]

    Blind-spot guided diffusion for self-supervised real-world denoising.arXiv preprint arXiv:2509.16091, 2025

    Shen Cheng, Haipeng Li, Haibin Huang, Xiaohong Liu, and Shuaicheng Liu. Blind-spot guided diffusion for self-supervised real-world denoising.arXiv preprint arXiv:2509.16091, 2025. 3

  9. [9]

    Come-closer-diffuse-faster: Accelerating conditional diffu- sion models for inverse problems through stochastic contrac- tion

    Hyungjin Chung, Byeongsu Sim, and Jong Chul Ye. Come-closer-diffuse-faster: Accelerating conditional diffu- sion models for inverse problems through stochastic contrac- tion. InCVPR, pages 12413–12422, 2022. 2, 3

  10. [10]

    Unsupervised expo- sure correction

    Ruodai Cui, Li Niu, and Guosheng Hu. Unsupervised expo- sure correction. InECCV, pages 252–268, 2024. 1, 2, 3, 6, 7

  11. [11]

    arXiv preprint arXiv:2205.14871 (2022) IG-Diff: Complex Night Scene Restoration with Illumination-Guided Diffusion Model 11

    Ziteng Cui, Kunchang Li, Lin Gu, Shenghan Su, Peng Gao, Zhengkai Jiang, Yu Qiao, and Tatsuya Harada. You only need 90k parameters to adapt light: a light weight trans- former for image enhancement and exposure correction. arXiv preprint arXiv:2205.14871, 2022. 6, 7

  12. [12]

    Generative dif- fusion prior for unified image restoration and enhancement

    Ben Fei, Zhaoyang Lyu, Liang Pan, Junzhe Zhang, Weidong Yang, Tianyue Luo, Bo Zhang, and Bo Dai. Generative dif- fusion prior for unified image restoration and enhancement. InCVPR, pages 9935–9946, 2023. 2, 3

  13. [13]

    A weighted variational model for simultane- ous reflectance and illumination estimation

    Xueyang Fu, Delu Zeng, Yue Huang, Xiao-Ping Zhang, and Xinghao Ding. A weighted variational model for simultane- ous reflectance and illumination estimation. InCVPR, pages 2782–2790, 2016. 1

  14. [14]

    Learning a simple low-light image enhancer from paired low-light instances

    Zhenqi Fu, Yan Yang, Xiaotong Tu, Yue Huang, Xinghao Ding, and Kai-Kuang Ma. Learning a simple low-light image enhancer from paired low-light instances. InCVPR, pages 22252–22261, 2023. 2, 6

  15. [15]

    Rave: Residual vector embedding for clip-guided backlit im- age enhancement

    Tatiana Gaintseva, Martin Benning, and Gregory Slabaugh. Rave: Residual vector embedding for clip-guided backlit im- age enhancement. InECCV, pages 412–428, 2024. 2, 3, 6

  16. [16]

    Zero-reference deep curve estimation for low-light image enhancement

    Chunle Guo, Chongyi Li, Jichang Guo, Chen Change Loy, Junhui Hou, Sam Kwong, and Runmin Cong. Zero-reference deep curve estimation for low-light image enhancement. In CVPR, pages 1780–1789, 2020. 2, 5, 6

  17. [17]

    Lime: Low-light im- age enhancement via illumination map estimation.IEEE TIP, 26(2):982–993, 2016

    Xiaojie Guo, Yu Li, and Haibin Ling. Lime: Low-light im- age enhancement via illumination map estimation.IEEE TIP, 26(2):982–993, 2016. 1

  18. [18]

    R2rnet: Low-light image enhancement via real-low to real-normal network.Jour- nal of Visual Communication and Image Representation, 90: 103712, 2023

    Jiang Hai, Zhu Xuan, Ren Yang, Yutong Hao, Fengzhu Zou, Fang Lin, and Songchen Han. R2rnet: Low-light image enhancement via real-low to real-normal network.Jour- nal of Visual Communication and Image Representation, 90: 103712, 2023. 4, 5, 6, 7

  19. [19]

    Hqg- net: Unpaired medical image enhancement with high-quality guidance.IEEE TNNLS, 35(12):18404–18418, 2023

    Chunming He, Kai Li, Guoxia Xu, Jiangpeng Yan, Longx- iang Tang, Yulun Zhang, Yaowei Wang, and Xiu Li. Hqg- net: Unpaired medical image enhancement with high-quality guidance.IEEE TNNLS, 35(12):18404–18418, 2023. 2

  20. [20]

    Reti-diff: Illumination degradation image restoration with retinex-based latent diffusion model

    Chunming He, Chengyu Fang, Yulun Zhang, Kai Li, Longx- iang Tang, Chenyu You, Fengyang Xiao, Zhenhua Guo, and Xiu Li. Reti-diff: Illumination degradation image restoration with retinex-based latent diffusion model. InICLR, 2025. 1, 2, 3, 6, 8

  21. [21]

    Diffusion models in low-level vision: A survey.IEEE TPAMI, 2025

    Chunming He, Yuqi Shen, Chengyu Fang, Fengyang Xiao, Longxiang Tang, Yulun Zhang, Wangmeng Zuo, Zhenhua Guo, and Xiu Li. Diffusion models in low-level vision: A survey.IEEE TPAMI, 2025. 3

  22. [22]

    UnfoldLDM: Degradation-Aware Unfolding with Iterative Latent Diffusion Priors for Blind Image Restoration

    Chunming He, Rihan Zhang, Zheng Chen, Bowen Yang, CHengyu Fang, Yunlong Lin, Fengyang Xiao, and Sina Farsiu. Unfoldldm: Deep unfolding-based blind image restoration with latent diffusion priors.arXiv preprint arXiv:2511.18152, 2025. 3

  23. [23]

    Unfoldir: Rethinking deep unfolding network in illu- mination degradation image restoration.arXiv preprint arXiv:2505.06683, 2025

    Chunming He, Rihan Zhang, Fengyang Xiao, Chengyu Fang, Longxiang Tang, Yulun Zhang, and Sina Farsiu. Unfoldir: Rethinking deep unfolding network in illu- mination degradation image restoration.arXiv preprint arXiv:2505.06683, 2025. 3

  24. [24]

    Denoising diffu- sion probabilistic models.NeurIPS, 33:6840–6851, 2020

    Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffu- sion probabilistic models.NeurIPS, 33:6840–6851, 2020. 2, 4, 5

  25. [25]

    Global structure-aware diffusion pro- cess for low-light image enhancement.NeurIPS, 36, 2023

    Jinhui Hou, Zhiyu Zhu, Junhui Hou, Hui Liu, Huanqiang Zeng, and Hui Yuan. Global structure-aware diffusion pro- cess for low-light image enhancement.NeurIPS, 36, 2023. 2

  26. [26]

    Exposure: A white-box photo post-processing framework.ACM TOG, 37(2):1–17, 2018

    Yuanming Hu, Hao He, Chenxi Xu, Baoyuan Wang, and Stephen Lin. Exposure: A white-box photo post-processing framework.ACM TOG, 37(2):1–17, 2018. 3, 6, 7

  27. [27]

    Deep fourier-based exposure correction network with spatial- frequency interaction

    Jie Huang, Yajing Liu, Feng Zhao, Keyu Yan, Jinghao Zhang, Yukun Huang, Man Zhou, and Zhiwei Xiong. Deep fourier-based exposure correction network with spatial- frequency interaction. InECCV, pages 163–180, 2022. 3, 6, 7

  28. [28]

    Dark channel prior-based spatially adaptive contrast 9 enhancement for back lighting compensation

    Jaehyun Im, Inhye Yoon, Monson H Hayes, and Joonki Paik. Dark channel prior-based spatially adaptive contrast 9 enhancement for back lighting compensation. InICASSP, pages 2464–2468, 2013. 3

  29. [29]

    Low-light image enhancement with wavelet-based diffusion models.ACM TOG, 42(6):1–14,

    Hai Jiang, Ao Luo, Songchen Han, Haoqiang Fan, and Shuaicheng Liu. Low-light image enhancement with wavelet-based diffusion models.ACM TOG, 42(6):1–14,

  30. [30]

    Lightendiffusion: Unsupervised low-light image enhancement with latent-retinex diffusion models

    Hai Jiang, Ao Luo, Xiaohong Liu, Songchen Han, and Shuaicheng Liu. Lightendiffusion: Unsupervised low-light image enhancement with latent-retinex diffusion models. In ECCV, pages 161–179, 2024. 1, 2, 3, 6, 8

  31. [31]

    Revisiting coarse- to-fine strategy for low-light image enhancement with deep decomposition guided training.Computer Vision and Image Understanding, 142:103952, 2024

    Hai Jiang, Yang Ren, and Songchen Han. Revisiting coarse- to-fine strategy for low-light image enhancement with deep decomposition guided training.Computer Vision and Image Understanding, 142:103952, 2024. 2

  32. [32]

    Learning to see in the extremely dark

    Hai Jiang, Binhao Guan, Zhen Liu, Xiaohong Liu, Jian Yu, Zheng Liu, Songchen Han, and Shuaicheng Liu. Learning to see in the extremely dark. InICCV, pages 7676–7685, 2025. 3

  33. [33]

    Supervised small-baseline and large- baseline homography learning with diffusion-based data generation.IEEE TPAMI, 2026

    Hai Jiang, Haipeng Li, Songchen Han, Bing Zeng, and Shuaicheng Liu. Supervised small-baseline and large- baseline homography learning with diffusion-based data generation.IEEE TPAMI, 2026. 3

  34. [34]

    Enlightengan: Deep light enhancement without paired supervision.IEEE TIP, 30:2340–2349, 2021

    Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, and Zhangyang Wang. Enlightengan: Deep light enhancement without paired supervision.IEEE TIP, 30:2340–2349, 2021. 2, 6

  35. [35]

    Exposure-slot: Exposure-centric represen- tations learning with slot-in-slot attention for region-aware exposure correction

    Donggo Jung, Daehyun Kim, Guanghui Wang, and Tae Hyun Kim. Exposure-slot: Exposure-centric represen- tations learning with slot-in-slot attention for region-aware exposure correction. InCVPR, 2025. 1, 2, 3, 6, 7

  36. [36]

    Adam: A method for stochastic optimization

    Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. InICLR, 2015. 5

  37. [37]

    The retinex theory of color vision.Scientific American, 237(6):108–129, 1977

    Edwin H Land. The retinex theory of color vision.Scientific American, 237(6):108–129, 1977. 2

  38. [38]

    Embedding fourier for ultra-high-definition low-light image enhancement

    Chongyi Li, Chun-Le Guo, Man Zhou, Zhexin Liang, Shangchen Zhou, Ruicheng Feng, and Chen Change Loy. Embedding fourier for ultra-high-definition low-light image enhancement. InICLR, 2023. 1, 2, 6

  39. [39]

    Dmhomo: Learning ho- mography with diffusion models.ACM TOG, 43(3):1–16,

    Haipeng Li, Hai Jiang, Ao Luo, Ping Tan, Haoqiang Fan, Bing Zeng, and Shuaicheng Liu. Dmhomo: Learning ho- mography with diffusion models.ACM TOG, 43(3):1–16,

  40. [40]

    Foundir: Unleashing million-scale training data to advance foundation models for image restoration

    Hao Li, Xiang Chen, Jiangxin Dong, Jinhui Tang, and Jin- shan Pan. Foundir: Unleashing million-scale training data to advance foundation models for image restoration. InICCV, pages 12626–12636, 2025. 3

  41. [41]

    Real-time expo- sure correction via collaborative transformations and adap- tive sampling

    Ziwen Li, Feng Zhang, Meng Cao, Jinpu Zhang, Yuanjie Shao, Yuehuan Wang, and Nong Sang. Real-time expo- sure correction via collaborative transformations and adap- tive sampling. InCVPR, pages 2984–2994, 2024. 1, 2, 3, 6, 7

  42. [42]

    Iterative prompt learning for unsupervised backlit image enhancement

    Zhexin Liang, Chongyi Li, Shangchen Zhou, Ruicheng Feng, and Chen Change Loy. Iterative prompt learning for unsupervised backlit image enhancement. InICCV, pages 8094–8103, 2023. 1, 2, 3, 5, 6

  43. [43]

    Aglldiff: Guiding diffusion models to- wards unsupervised training-free real-world low-light image enhancement

    Yunlong Lin, Tian Ye, Sixiang Chen, Zhenqi Fu, Yingying Wang, Wenhao Chai, Zhaohu Xing, Wenxue Li, Lei Zhu, and Xinghao Ding. Aglldiff: Guiding diffusion models to- wards unsupervised training-free real-world low-light image enhancement. InAAAI, pages 5307–5315, 2025. 1, 2, 3, 6

  44. [44]

    Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement

    Risheng Liu, Long Ma, Jiaao Zhang, Xin Fan, and Zhongx- uan Luo. Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. In CVPR, pages 10561–10570, 2021. 2, 6

  45. [45]

    Coding-prior guided diffusion network for video deblurring

    Yike Liu, Jianhui Zhang, Haipeng Li, Shuaicheng Liu, and Bing Zeng. Coding-prior guided diffusion network for video deblurring. InACM MM, pages 10268–10277, 2025. 3

  46. [46]

    Solving ill-posed regions in high dynamic range re- construction with uncertainty-aware diffusion models.IEEE TCSVT, 2025

    Zhen Liu, Hai Jiang, Haipeng Li, Shuaicheng Liu, and Bing Zeng. Solving ill-posed regions in high dynamic range re- construction with uncertainty-aware diffusion models.IEEE TCSVT, 2025

  47. [47]

    Raw-flow: Advancing rgb-to-raw image reconstruction with deterministic latent flow matching

    Zhen Liu, Diedong Feng, Hai Jiang, Liaoyuan Zeng, Hao Wang, Chaoyu Feng, Lei Lei, Bing Zeng, and Shuaicheng Liu. Raw-flow: Advancing rgb-to-raw image reconstruction with deterministic latent flow matching. 40(9):7431–7439,

  48. [48]

    Getting to know low- light images with the exclusively dark dataset.Computer Vision and Image Understanding, 178:30–42, 2019

    Yuen Peng Loh and Chee Seng Chan. Getting to know low- light images with the exclusively dark dataset.Computer Vision and Image Understanding, 178:30–42, 2019. 1

  49. [49]

    Backlitnet: A dataset and network for backlit image enhancement.Computer Vision and Image Understanding, 218:103403, 2022

    Xiaoqian Lv, Shengping Zhang, Qinglin Liu, Haozhe Xie, Bineng Zhong, and Huiyu Zhou. Backlitnet: A dataset and network for backlit image enhancement.Computer Vision and Image Understanding, 218:103403, 2022. 3, 4, 5, 6, 7

  50. [50]

    Fourier priors-guided diffusion for zero-shot joint low-light enhance- ment and deblurring

    Xiaoqian Lv, Shengping Zhang, Chenyang Wang, Yichen Zheng, Bineng Zhong, Chongyi Li, and Liqiang Nie. Fourier priors-guided diffusion for zero-shot joint low-light enhance- ment and deblurring. InCVPR, pages 25378–25388, 2024. 2, 3, 6

  51. [51]

    In- corporating fourier transformation with diffusion models for low-light image enhancement.IEEE Sign

    Ailin Ma, Hai Jiang, Binbin Liang, and Songchen Han. In- corporating fourier transformation with diffusion models for low-light image enhancement.IEEE Sign. Process. Letters,

  52. [52]

    Toward fast, flexible, and robust low-light image enhancement

    Long Ma, Tengyu Ma, Risheng Liu, Xin Fan, and Zhongx- uan Luo. Toward fast, flexible, and robust low-light image enhancement. InCVPR, pages 5637–5646, 2022. 2, 6

  53. [53]

    Practical exposure correction via compensation

    Long Ma, Tianjiao Ma, Xinwei Xue, Xin Fan, Zhongxuan Luo, and Risheng Liu. Practical exposure correction: Great truths are always simple.arXiv preprint arXiv:2212.14245,

  54. [54]

    completely blind

    Anish Mittal, Rajiv Soundararajan, and Alan C Bovik. Mak- ing a “completely blind” image quality analyzer.IEEE Sign. Process. Letters, 20(3):209–212, 2012. 6

  55. [55]

    Psenet: Progressive self-enhancement network for unsuper- vised extreme-light image enhancement

    Hue Nguyen, Diep Tran, Khoi Nguyen, and Rang Nguyen. Psenet: Progressive self-enhancement network for unsuper- vised extreme-light image enhancement. InWACV, pages 1756–1765, 2023. 1, 2, 3, 6, 7

  56. [56]

    Elucidating the exposure bias in diffusion models

    Mang Ning, Mingxiao Li, Jianlin Su, Albert Ali Salah, and Itir Onal Ertugrul. Elucidating the exposure bias in diffusion models. InICLR, 2024. 2, 4

  57. [57]

    Learn- ing exposure correction via consistency modeling

    Ntumba Elie Nsampi, Zhongyun Hu, and Qing Wang. Learn- ing exposure correction via consistency modeling. InBMVC, page 12, 2021. 3, 6, 7

  58. [58]

    Learn- ing transferable visual models from natural language super- vision

    Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, 10 Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learn- ing transferable visual models from natural language super- vision. InICML, pages 8748–8763, 2021. 3

  59. [59]

    Ispdiffuser: Learning raw-to-srgb mappings with texture-aware diffusion models and histogram-guided color consistency

    Yang Ren, Hai Jiang, Menglong Yang, Wei Li, and Shuaicheng Liu. Ispdiffuser: Learning raw-to-srgb mappings with texture-aware diffusion models and histogram-guided color consistency. InAAAI, pages 6722–6730, 2025. 3

  60. [60]

    U-net: Convolutional networks for biomedical image segmentation

    Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. InMICCAI, pages 234–241, 2015. 5

  61. [61]

    Denois- ing diffusion implicit models

    Jiaming Song, Chenlin Meng, and Stefano Ermon. Denois- ing diffusion implicit models. InICLR, 2021. 2, 4

  62. Chun-Ming Tsai and Zong-Mu Yeh. Contrast compensation by fuzzy classification and image illumination analysis for back-lit and front-lit color face images. IEEE TCE, 56(3):1570–1578, 2010.

  63. Jianyi Wang, Kelvin CK Chan, and Chen Change Loy. Exploring CLIP for assessing the look and feel of images. In AAAI, pages 2555–2563, 2023.

  64. Wenjing Wang, Huan Yang, Jianlong Fu, and Jiaying Liu. Zero-reference low-light enhancement via physical quadruple priors. In CVPR, pages 26057–26066, 2024.

  65. Yinglong Wang, Zhen Liu, Jianzhuang Liu, Songcen Xu, and Shuaicheng Liu. Low-light image enhancement with illumination-aware gamma correction and complete image modelling network. In ICCV, pages 13128–13137, 2023.

  66. Zhou Wang, A.C. Bovik, H.R. Sheikh, and E.P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE TIP, 13(4):600–612, 2004.

  67. Ziyi Wang, Haipeng Li, Lin Sui, Tianhao Zhou, Hai Jiang, Lang Nie, and Shuaicheng Liu. StableMotion: Repurposing diffusion-based image priors for motion estimation. arXiv preprint arXiv:2505.06668, 2025.

  68. Chen Wei, Wenjing Wang, Wenhan Yang, and Jiaying Liu. Deep Retinex decomposition for low-light enhancement. In BMVC, 2018.

  69. Xin Xu, Shiqin Wang, Zheng Wang, Xiaolong Zhang, and Ruimin Hu. Exploring image enhancement for salient object detection in low light images. ACM TOMM, 17(1s):1–19.

  70. Xiaogang Xu, Ruixing Wang, Chi-Wing Fu, and Jiaya Jia. SNR-aware low-light image enhancement. In CVPR, pages 17714–17724, 2022.

  71. Xiaogang Xu, Ruixing Wang, and Jiangbo Lu. Low-light image enhancement via structure modeling and guidance. In CVPR, pages 9893–9903, 2023.

  72. Shuzhou Yang, Moxuan Ding, Yanmin Wu, Zihan Li, and Jian Zhang. Implicit neural representation for cooperative low-light image enhancement. In ICCV, pages 12918–12927, 2023.

  73. Shuzhou Yang, Xuanyu Zhang, Yinhuai Wang, Jiwen Yu, Yuhan Wang, and Jian Zhang. DiffLLE: Diffusion-based domain calibration for weakly supervised low-light image enhancement. IJCV, 133(5):2527–2546, 2025.

  74. Zhanglei Yang, Haipeng Li, Shen Cheng, Mingbo Hong, Bing Zeng, and Shuaicheng Liu. Multi-frame rolling shutter correction with diffusion models. IEEE TCSVT, 2025.

  75. Zhanglei Yang, Haipeng Li, Mingbo Hong, Chen-Lin Zhang, Jiajun Li, and Shuaicheng Liu. Single image rolling shutter removal with diffusion models. In AAAI, pages 9373–9381.

  76. Xunpeng Yi, Han Xu, Hao Zhang, Linfeng Tang, and Jiayi Ma. Diff-Retinex++: Retinex-driven reinforced diffusion model for low-light image enhancement. IEEE TPAMI.

  77. Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang, and Ling Shao. Learning enriched features for real image restoration and enhancement. In ECCV, pages 492–511, 2020.

  78. Lin Zhang, Lijun Zhang, Xiao Liu, Ying Shen, Shaoming Zhang, and Shengjie Zhao. Zero-shot restoration of back-lit images using deep internal learning. In ACM MM, pages 1623–1631, 2019.

  79. Qing Zhang, Ganzhao Yuan, Chunxia Xiao, Lei Zhu, and Wei-Shi Zheng. High-quality exposure correction of underexposed photos. In ACM MM, pages 582–590, 2018.

  80. Qing Zhang, Yongwei Nie, and Wei-Shi Zheng. Dual illumination estimation for robust exposure correction. In Comput. Graph. Forum, pages 243–252, 2019.

Showing first 80 references.