pith. machine review for the scientific record.

arxiv: 2604.17669 · v1 · submitted 2026-04-19 · 💻 cs.CV

Recognition: unknown

Low Light Image Enhancement Challenge at NTIRE 2026

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 05:20 UTC · model grok-4.3

classification 💻 cs.CV
keywords low-light image enhancement · image restoration · NTIRE challenge · deep learning · denoising · computer vision · benchmark dataset

The pith

The NTIRE 2026 Low Light Image Enhancement Challenge shows 22 teams advancing the restoration of low-contrast, noisy images, benchmarked on a new dataset.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper reports the outcomes of the NTIRE 2026 Low Light Image Enhancement Challenge. Registration totalled 195 participants in the enhancement track and 153 in the joint denoising and enhancement track, yet only 22 teams delivered valid submissions that were assessed. The review examines the networks proposed by participants and measures how effectively they recover detail lost to low contrast and noise, using the organizers' novel dataset. A reader would see value in this as a current map of what works for turning poor-light captures into clearer results.

Core claim

The paper establishes that the 22 submitted networks demonstrate significant progress in low-light image enhancement by learning representative visual cues to restore information lost due to low contrast and noise, as validated through evaluation on the novel dataset across the challenge tracks.

What carries the argument

The novel low-light dataset that benchmarks the submitted networks for enhancement and joint denoising across the two challenge tracks.

If this is right

  • Effective networks identified in the challenge can produce clearer and visually compelling images under diverse challenging low-light conditions.
  • Joint denoising and enhancement approaches address combined degradations in low-contrast noisy inputs.
  • The evaluated solutions establish updated performance references for future low-light restoration work.
  • The results point to practical gains in applications requiring reliable image recovery from poor lighting.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The top methods could be extended to video sequences if temporal consistency is added to frame-wise processing (see the sketch after this list).
  • A broader test on datasets with lighting extremes outside the challenge collection would check how well the gains hold.
  • Adopting the leading approaches in consumer devices might improve automatic correction for night photography.
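As an editorial illustration of the video-extension point above: a minimal sketch of temporal smoothing layered on frame-wise enhancement, where enhance_frame is a hypothetical stand-in for any challenge network and the EMA weight is an arbitrary choice, not anything the paper specifies.

```python
import numpy as np

def enhance_video(frames, enhance_frame, alpha=0.8):
    """Frame-wise low-light enhancement with a simple temporal EMA.

    frames: iterable of HxWx3 float arrays in [0, 1].
    enhance_frame: any single-image enhancer (hypothetical stand-in for a
        challenge network); maps one frame to its enhanced version.
    alpha: weight on the current frame; lower values smooth more across
        time and damp flicker, at the cost of ghosting on fast motion.
    """
    smoothed = None
    for frame in frames:
        enhanced = enhance_frame(frame)      # per-frame restoration
        if smoothed is None:
            smoothed = enhanced              # first frame: no history yet
        else:
            # blend with the previous output to suppress frame-to-frame flicker
            smoothed = alpha * enhanced + (1.0 - alpha) * smoothed
        yield np.clip(smoothed, 0.0, 1.0)
```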

Load-bearing premise

The 22 submitted entries and the novel dataset sufficiently represent real-world low-light conditions and the best available solutions.

What would settle it

An independent test set of real low-light images from new environments where the top challenge entries fail to produce clearer results than prior methods.
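What such a settling experiment could look like in code: a minimal sketch that scores any enhancer with full-reference PSNR/SSIM on an independent paired test set. The directory layout and function names are hypothetical placeholders, not the challenge's own evaluation harness.

```python
from pathlib import Path

import numpy as np
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score(method, pairs_dir):
    """Average PSNR/SSIM of `method` over a paired low-light test set.

    Assumes pairs_dir holds low/ and gt/ subfolders with matching PNG
    filenames (an illustrative layout, not the challenge's).
    """
    psnrs, ssims = [], []
    for low_path in sorted(Path(pairs_dir, "low").glob("*.png")):
        low = imread(low_path).astype(np.float32) / 255.0
        gt = imread(Path(pairs_dir, "gt", low_path.name)).astype(np.float32) / 255.0
        pred = np.clip(method(low), 0.0, 1.0)
        psnrs.append(peak_signal_noise_ratio(gt, pred, data_range=1.0))
        ssims.append(structural_similarity(gt, pred, channel_axis=-1, data_range=1.0))
    return float(np.mean(psnrs)), float(np.mean(ssims))

# The question is settled by comparing score(top_entry, new_env_dir)
# against score(prior_method, new_env_dir) on captures from unseen scenes.
```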

Figures

Figures reproduced from arXiv: 2604.17669 by Aashish Negi, Abdur Rehman, Akshay Dudhane, Alexandru Brateanu, Amit Shukla, Ananya N, Anas M. Ali, Ariel Lapid, Bilel Benjdira, Bofei Chen, Bohyung Han, Chang Ye, Cheng Li, Chun-Chuen Hui, Ciprian Orhei, Codruta O. Ancuti, Cosmin Ancuti, Donghun Ryou, Duo Zhang, Fayaz Ali Dharejo, Furkan Kınlı, George Ciubotariu, Guangsheng Tang, Guoyi Xu, Hao Yang, Hardik Sharma, Harini A, Heng Sun, Hongjun Wu, Hon Man Hammond Lee, Idit Diamant, Inju Ha, Jayant Kumar, Jiachen Tu, Jiajia Liu, Jiangning Zhang, Jinao Song, Jing Xu, Jingyi Xu, Jun Chen, Junoh Kang, Kaifan Qiao, Kai Hu, Lai Jiang, Lakshanya K, Leilei Cao, Lin Wang, Liyuan Pan, Long Bao, Mai Xu, Marcos V. Conde, Mohab Kishawy, MoHao Wu, Nikhil Akalwadi, Padmashree Desai, Praful Hambarde, Prateek Shaily, Qinglong Yan, Radu Timofte, Ramesh Ashok Tabib, Raul Balmez, Reuven Peretz, Rizwan Ali Naqvi, Ruikun Zhang, Sachin Chaudhary, Sharif S M A, Shengxi Li, Shijun Shi, Shuo Zhang, Uma Mudenagudi, Varda I Pattanshetty, Varsha I Pattanshetty, Wadii Boulila, Wan-Chi Siu, Wei Zhou, Wenjian Zhang, Xianfang Zeng, Xin Deng, Xinyi Zhu, Xunpeng Yi, Yan Chen, Yaokun Shi, Yaoxin Jiang, Yibing Zhang, Yihao Cheng, Ying Xu, Yong Liu, Yuqiang Yang, Yuval Haitman, Zhi Jin, Ziyi Wang.

Figure 1: Pipeline of RLLIE proposed by SYSU-FVL.
Figure 2: Overview of the proposed UHDM architecture.
Figure 3: BITssvgg's architecture overview.
Figure 4: BAU-Vision's Wave-P architecture.
Figure 5: SNUCV's MB-LPFR pipeline.
Figure 7: AAIR ARM's LFM-LLIE overview. (a) Training: a flow-matching network uθ (HDiT-based) is trained in latent space using interpolated samples zt created from the noise-perturbed low-light latent image z̃1. (b) Inference: starting from z1 = E(x1), the latent is iteratively transformed by the learned velocity field uθ into ẑ0 and decoded by D to produce the enhanced image x̂0.
Figure 8: TranssionAI overview.
Figure 9: Overview of the proposed weighted late-fusion architecture.
Figure 10: DH-XHDL-Team's TEI-LLIE architecture.
Figure 11: DUSKAN architecture by PSU. Top: symmetric 4-level U-Net with DUSKANBlock stages, strided downsampling, PixelShuffle upsampling, and global residual learning. Bottom: DUSKANBlock detail. Path A (blue) extracts global features via FFT magnitude modulation and fuses them with local multi-scale depthwise convolution features. Path B (red) uses Kolmogorov-Arnold polynomial-basis activations with a parallel …
Figure 12: Qualitative comparison of DNDiff's color performance.
Figure 13: DNATT pipeline of Team Lucky one.
Figure 14: APRIL-AIGC's comparison of tiling strategies for high-resolution inference. A fixed 2×3 partition with moderate overlap yields fewer illumination discontinuities than denser window layouts. (A minimal tiling sketch follows this figure list.)
Figure 15: MiVideoDLLIE's pipeline of MiDLLIE.
Figure 16: RetinexDualV2 overview.
Figure 17: Overall structure of WIRNet.
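Figure 14's tiling comparison implies a simple recipe for high-resolution inference: a fixed 2×3 grid of overlapping tiles, each enhanced independently and blended back with feathered weights. A minimal sketch under those assumptions; the overlap width and the enhance callable are illustrative, not any team's exact settings.

```python
import numpy as np

def tiled_enhance(img, enhance, rows=2, cols=3, overlap=128):
    """Enhance a large image as a rows x cols grid of overlapping tiles.

    img: HxWx3 float array in [0, 1]; enhance: any single-image enhancer.
    Overlaps are blended with linear ramps so tile seams do not appear as
    illumination discontinuities in the stitched result.
    """
    h, w, _ = img.shape
    out = np.zeros_like(img)
    weight = np.zeros((h, w, 1), dtype=img.dtype)
    ys = np.linspace(0, h, rows + 1, dtype=int)
    xs = np.linspace(0, w, cols + 1, dtype=int)
    for r in range(rows):
        for c in range(cols):
            y0, y1 = max(ys[r] - overlap, 0), min(ys[r + 1] + overlap, h)
            x0, x1 = max(xs[c] - overlap, 0), min(xs[c + 1] + overlap, w)
            tile = enhance(img[y0:y1, x0:x1])
            # linear ramps toward the tile borders act as the blending mask
            wy = np.minimum(np.arange(y1 - y0) + 1, np.arange(y1 - y0)[::-1] + 1)
            wx = np.minimum(np.arange(x1 - x0) + 1, np.arange(x1 - x0)[::-1] + 1)
            mask = np.minimum.outer(wy, wx)[..., None].astype(img.dtype)
            out[y0:y1, x0:x1] += tile * mask
            weight[y0:y1, x0:x1] += mask
    return out / np.maximum(weight, 1e-8)
```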
Original abstract

This paper presents a comprehensive review of the NTIRE 2026 Low Light Image Enhancement Challenge, highlighting the proposed solutions and final results. The objective of this challenge is to identify effective networks capable of producing clearer and visually compelling images in diverse and challenging conditions by learning representative visual cues with the purpose of restoring information loss due to low-contrast and noisy images. A total of 195 participants registered for the first track and 153 for the second track of the competition, and 22 teams ultimately submitted valid entries. This paper thoroughly evaluates the state-of-the-art advances in (joint denoising and) low-light image enhancement, showcasing the significant progress in the field, while leveraging samples of our novel dataset.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 0 minor

Summary. The manuscript reports on the NTIRE 2026 Low Light Image Enhancement Challenge, stating registration figures (195 for track 1, 153 for track 2), 22 valid submissions, and claiming to thoroughly evaluate state-of-the-art advances in low-light image enhancement (including joint denoising) while showcasing significant progress via samples from a novel dataset.

Significance. A well-documented challenge report with a new dataset could serve as a useful benchmark reference for the low-light enhancement community. However, the significance is limited by the small fraction of valid entries relative to registrants and the absence of evidence that the evaluation captures the broader field beyond challenge participants.

major comments (2)
  1. [Abstract] The assertion that the paper 'thoroughly evaluates the state-of-the-art advances' and 'showcas[es] the significant progress in the field' is not supported by the reported numbers alone. With only 22 valid entries from 195/153 registrants, the manuscript must either (a) add side-by-side quantitative comparisons against recent published SOTA methods that did not participate or (b) qualify the claim to refer only to participating solutions.
  2. [Abstract] The representativeness of the novel dataset for real-world low-light conditions is asserted but not demonstrated. The manuscript should supply concrete statistics (e.g., distribution of illumination levels, noise characteristics, scene diversity metrics) or explicit checks against established low-light benchmarks to justify the claim that the evaluation reflects field-wide progress.
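One way to produce the statistics comment 2 asks for: per-image mean luminance as a proxy for illumination level and a high-pass residual as a crude noise estimate. A hedged sketch only; the folder layout is a placeholder, and a real characterization would also cover scene diversity and sensor metadata.

```python
from pathlib import Path

import numpy as np
from scipy.ndimage import median_filter
from skimage.color import rgb2gray
from skimage.io import imread

def dataset_stats(image_dir):
    """Illumination and noise summaries for a folder of low-light PNGs."""
    lums, noise = [], []
    for path in sorted(Path(image_dir).glob("*.png")):
        gray = rgb2gray(imread(path))                 # luminance in [0, 1]
        lums.append(gray.mean())
        # residual after a small median filter as a rough noise proxy
        noise.append(float(np.std(gray - median_filter(gray, size=3))))
    lums = np.asarray(lums)
    return {
        "illumination_percentiles": np.percentile(lums, [5, 25, 50, 75, 95]),
        "mean_noise_sigma": float(np.mean(noise)),
    }
```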

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript reporting the NTIRE 2026 Low Light Image Enhancement Challenge. We address each major comment point by point below.

Point-by-point responses
  1. Referee: [Abstract] The assertion that the paper 'thoroughly evaluates the state-of-the-art advances' and 'showcas[es] the significant progress in the field' is not supported by the reported numbers alone. With only 22 valid entries from 195/153 registrants, the manuscript must either (a) add side-by-side quantitative comparisons against recent published SOTA methods that did not participate or (b) qualify the claim to refer only to participating solutions.

    Authors: We agree that the abstract phrasing overstates the scope. This is a challenge report whose evaluation is limited to the 22 valid participating submissions. We will revise the abstract to qualify the claims, stating that we evaluate the advances demonstrated by the submitted solutions rather than claiming a thorough field-wide assessment of all state-of-the-art methods. This approach follows the standard format for challenge reports and avoids the need for additional external comparisons. revision: yes

  2. Referee: [Abstract] The representativeness of the novel dataset for real-world low-light conditions is asserted but not demonstrated. The manuscript should supply concrete statistics (e.g., distribution of illumination levels, noise characteristics, scene diversity metrics) or explicit checks against established low-light benchmarks to justify the claim that the evaluation reflects field-wide progress.

    Authors: We acknowledge that the current manuscript asserts the dataset's suitability without providing the requested quantitative details. In the revised version we will add concrete statistics on illumination level distributions, noise characteristics, and scene diversity metrics, together with explicit comparisons to established low-light benchmarks, to better support the claim of representativeness. revision: yes

Circularity Check

0 steps flagged

No circularity: competition report with no derivations or fitted predictions

full rationale

This is a factual report on NTIRE 2026 challenge outcomes, participant submissions (22 valid entries), and results on a novel dataset. No equations, derivations, parameter fitting, or predictions appear in the text. Claims of evaluating SOTA advances and showcasing progress are presented as direct summaries of competition results rather than any derived quantity that reduces to the paper's own inputs by construction. Self-citations, if present, are not load-bearing for any central result. The paper is self-contained as an empirical summary without circular structure.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

No mathematical derivations, free parameters, axioms, or invented entities are introduced; the paper is an empirical summary of external submissions.

pith-pipeline@v0.9.0 · 5823 in / 872 out tokens · 47392 ms · 2026-05-10T05:20:28.847577+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

111 extracted references · 11 canonical work pages · 4 internal anchors

  1. [1]

    Workshop and challenges website, 2026

    Ntire 2026: New trends in image restoration and enhance- ment. Workshop and challenges website, 2026. 5

  2. [2]

    NT-HAZE: A Benchmark Dataset for Re- alistic Night-time Image Dehazing

    Radu Ancuti, Codruta Ancuti, Radu Timofte, and Cos- min Ancuti. NT-HAZE: A Benchmark Dataset for Re- alistic Night-time Image Dehazing . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  3. [3]

    NTIRE 2026 Nighttime Image De- hazing Challenge Report

    Radu Ancuti, Alexandru Brateanu, Florin Vasluianu, Raul Balmez, Ciprian Orhei, Codruta Ancuti, Radu Timofte, Cosmin Ancuti, et al. NTIRE 2026 Nighttime Image De- hazing Challenge Report . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  4. [4]

    Depthlux: Employ- ing depthwise separable convolutions for low-light image enhancement.Sensors, 25(5):1530, 2025

    Raul Balmez, Alexandru Brateanu, Ciprian Orhei, Co- druta O Ancuti, and Cosmin Ancuti. Depthlux: Employ- ing depthwise separable convolutions for low-light image enhancement.Sensors, 25(5):1530, 2025. 6

  5. [5]

    Isalux: Illumination and semantics-aware transformer employing mixture of ex- perts for low light image enhancement

    Raul Balmez, Alexandru Brateanu, Ciprian Orhei, Co- druta O Ancuti, and Cosmin Ancuti. Isalux: Illumination and semantics-aware transformer employing mixture of ex- perts for low light image enhancement. InProceedings of the IEEE/CVF Winter Conference on Applications of Com- puter Vision, pages 7862–7872, 2026. 6

  6. [6]

    Modalformer: Multimodal trans- former for low-light image enhancement,

    Alexandru Brateanu, Raul Balmez, Ciprian Orhei, Co- druta Ancuti, and Cosmin Ancuti. Modalformer: Multi- modal transformer for low-light image enhancement.arXiv preprint arXiv:2507.20388, 2025. 6

  7. [7]

    NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild: Datasets, Results, and Methods

    Jie Cai, Kangning Yang, Zhiyuan Li, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild: Datasets, Results, and Methods . InProceedings of the IEEE/CVF Confer- ence on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  8. [8]

    Retinexformer: One-stage retinex-based transformer for low-light image enhance- ment

    Yuanhao Cai, Hao Bian, Jing Lin, Haoqian Wang, Radu Timofte, and Yulun Zhang. Retinexformer: One-stage retinex-based transformer for low-light image enhance- ment. InProceedings of the IEEE/CVF International Con- ference on Computer Vision (ICCV), pages 12504–12513,

  9. [9]

    Deterministic edge-preserving regu- larization in computed imaging.IEEE Transactions on im- age processing, 6(2):298–311, 1997

    Pierre Charbonnier, Laure Blanc-F ´eraud, Gilles Aubert, and Michel Barlaud. Deterministic edge-preserving regu- larization in computed imaging.IEEE Transactions on im- age processing, 6(2):298–311, 1997. 17

  10. [10]

    Learning to see in the dark

    Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun. Learning to see in the dark. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3291–3300, 2018. 19

  11. [11]

    Simple baselines for image restoration

    Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. InECCV, 2022. 4

  12. [12]

    Simple baselines for image restoration

    Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. InEuropean con- ference on computer vision, pages 17–33. Springer, 2022. 8

  13. [13]

    Simple baselines for image restoration

    Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. InEuropean Con- ference on Computer Vision, 2022. 7

  14. [14]

    Learning A sparse transformer network for effective image deraining

    Xiang Chen, Hao Li, Mingqiang Li, and Jinshan Pan. Learning A sparse transformer network for effective image deraining. InCVPR, pages 5896–5905. IEEE, 2023. 4

  15. [15]

    The Fourth Chal- lenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview

    Zheng Chen, Kai Liu, Jingkai Wang, Xianglong Yan, Jianze Li, Ziqing Zhang, Jue Gong, Jiatong Li, Lei Sun, Xiaoyang Liu, Radu Timofte, Yulun Zhang, et al. The Fourth Chal- lenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview . InProceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) W...

  16. [16]

    High FPS Video Frame Inter- polation Challenge at NTIRE 2026

    George Ciubotariu, Zhuyun Zhou, Yeying Jin, Zongwei Wu, Radu Timofte, et al. High FPS Video Frame Inter- polation Challenge at NTIRE 2026 . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  17. [17]

    Efficient image restoration via latent consistency flow matching

    Elad Cohen, Idan Achituve, Idit Diamant, Arnon Netzer, and Hai Victor Habi. Efficient image restoration via latent consistency flow matching. InBMVC, 2025. 5

  18. [18]

    Scalable high-resolution pixel-space image synthesis with hourglass diffusion transformers

    Katherine Crowson, Stefan Andreas Baumann, Alex Birch, Tanishq Mathew Abraham, Daniel Z Kaplan, and En- rico Shippole. Scalable high-resolution pixel-space image synthesis with hourglass diffusion transformers. InPro- ceedings of the 41st International Conference on Machine Learning, pages 9550–9575. PMLR, 2024. 5, 18

  19. [19]

    Towards scale-aware low-light enhancement via structure-guided transformer design

    W. Dong, Y. Min, H. Zhou, and J. Chen. Towards scale-aware low-light enhancement via structure-guided transformer design. In Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), pages 1459–1468, Nashville, TN, USA, 2025. 8

  20. [20]

    NTIRE 2026 Rip Current Detection and Segmentation (RipDet- Seg) Challenge Report

    Andrei Dumitriu, Aakash Ralhan, Florin Miron, Florin Ta- tui, Radu Tudor Ionescu, Radu Timofte, et al. NTIRE 2026 Rip Current Detection and Segmentation (RipDet- Seg) Challenge Report . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  21. [21]

    Photography Retouching Transfer, NTIRE 2026 Challenge: Report

    Omar Elezabi, Marcos V . Conde, Zongwei Wu, Yeying Jin, Radu Timofte, et al. Photography Retouching Trans- fer, NTIRE 2026 Challenge: Report . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  22. [22]

    Sigmoid- weighted linear units for neural network function approx- imation in reinforcement learning.Neural networks, 107: 3–11, 2018

    Stefan Elfwing, Eiji Uchibe, and Kenji Doya. Sigmoid- weighted linear units for neural network function approx- imation in reinforcement learning.Neural networks, 107: 3–11, 2018. 4

  23. [23]

    Darkir: Robust low-light image restoration

    Daniel Feijoo, Juan C Benito, Alvaro Garcia, and Marcos V Conde. Darkir: Robust low-light image restoration. InPro- ceedings of the Computer Vision and Pattern Recognition Conference, pages 10879–10889, 2025. 2

  24. [24]

    Darkir: Robust low-light image restoration

    Daniel Feijoo, Juan C Benito, Alvaro Garcia, and Marcos V Conde. Darkir: Robust low-light image restoration. InPro- ceedings of the Computer Vision and Pattern Recognition Conference, pages 10879–10889, 2025. 7

  25. [25]

    NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results

    Bochen Guan, Jinlong Li, Kangning Yang, Chuang Ke, Jie Cai, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results . InProceedings of the IEEE/CVF Confer- 9 ence on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  26. [26]

    NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3)

    Ya-nan Guan, Shaonan Zhang, Hang Guo, Yawen Wang, Xinying Fan, Jie Liang, Hui Zeng, Guanyi Qin, Lishen Qu, Tao Dai, Shu-Tao Xia, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3) . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  27. [27]

    NTIRE 2026 Challenge on Robust AI- Generated Image Detection in the Wild

    Aleksandr Gushchin, Khaled Abud, Ekaterina Shu- mitskaya, Artem Filippov, Georgii Bychkov, Sergey Lavrushkin, Mikhail Erofeev, Anastasia Antsiferova, Changsheng Chen, Shunquan Tan, Radu Timofte, Dmitriy Vatolin, et al. NTIRE 2026 Challenge on Robust AI- Generated Image Detection in the Wild . InProceedings of the IEEE/CVF Conference on Computer Vision and...

  28. [28]

    Zur theorie der orthogonalen funktionensys- teme.Mathematische Annalen, 71(1):38–53, 1911

    Alfred Haar. Zur theorie der orthogonalen funktionensys- teme.Mathematische Annalen, 71(1):38–53, 1911. 8

  29. [29]

    R2rnet: Low-light image enhancement via real-low to real-normal network

    Jiang Hai, Zhu Xuan, Ren Yang, Yutong Hao, Fengzhu Zou, Fang Lin, and Songchen Han. R2rnet: Low-light image enhancement via real-low to real-normal network. Journal of Visual Communication and Image Representa- tion, 90:103712, 2023. 1

  30. [30]

    Robust Deepfake De- tection, NTIRE 2026 Challenge: Report

    Benedikt Hopf, Radu Timofte, et al. Robust Deepfake De- tection, NTIRE 2026 Challenge: Report . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  31. [31]

    Squeeze-and-excitation networks

    Jie Hu, Li Shen, Samuel Albanie, Gang Sun, and Enhua Wu. Squeeze-and-excitation networks. In2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7132–7141, 2017. 7

  32. [32]

    Squeeze-and-excitation networks

    Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 7132–7141, 2018. 5

  33. [33]

    Focal frequency loss for image reconstruction and synthe- sis

    Liming Jiang, Bo Dai, Wayne Wu, and Chen Change Loy. Focal frequency loss for image reconstruction and synthe- sis. In2021 IEEE/CVF International Conference on Com- puter Vision (ICCV), pages 13899–13909, 2021. 7

  34. [34]

    Perceptual losses for real-time style transfer and super-resolution

    Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European conference on computer vision, pages 694–711. Springer, 2016. 17

  35. [35]

    Perceptual Losses for Real-Time Style Transfer and Super-Resolution

    Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Percep- tual losses for real-time style transfer and super-resolution. ArXiv, abs/1603.08155, 2016. 7

  36. [36]

    Icm- sr: Image-conditioned manifold regularization for image super-resolution.arXiv preprint arXiv:2511.22048, 2025

    Junoh Kang, Donghun Ryou, and Bohyung Han. Icm- sr: Image-conditioned manifold regularization for image super-resolution.arXiv preprint arXiv:2511.22048, 2025. 4, 18

  37. [37]

    NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge

    Aleksei Khalin, Egor Ershov, Artem Panshin, Sergey Kor- chagin, Georgiy Lobarev, Arseniy Terekhin, Sofiia Doro- gova, Amir Shamsutdinov, Yasin Mamedov, Bakhtiyar Khalfin, Bogdan Sheludko, Emil Zilyaev, Nikola Bani ´c, Georgy Perevozchikov, Radu Timofte, et al. NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge . In Proceedings of the IEEE/CVF Con...

  38. [38]

    Adam: A Method for Stochastic Optimization

    Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization.arXiv preprint arXiv:1412.6980,

  39. [39]

    Retinexdualv2: Physically- grounded dual retinex for generalized uhd image restora- tion, 2026

    Mohab Kishawy and Jun Chen. Retinexdualv2: Physically- grounded dual retinex for generalized uhd image restora- tion, 2026. 8

  40. [40]

    Retinexdual: Retinex-based dual nature approach for gen- eralized ultra-high-definition image restoration, 2025

    Mohab Kishawy, Ali Abdellatif Hussein, and Jun Chen. Retinexdual: Retinex-based dual nature approach for gen- eralized ultra-high-definition image restoration, 2025. 8

  41. [41]

    Imagenet classification with deep convolutional neural net- works

    Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural net- works. InAdvances in Neural Information Processing Sys- tems. Curran Associates, Inc., 2012. 23

  42. [42]

    FLUX.2: Frontier Visual Intelligence

    Black Forest Labs. FLUX.2: Frontier Visual Intelligence. https://bfl.ai/blog/flux-2, 2025. 8

  43. [43]

    Fast and Accurate Image Super-Resolution with Deep Laplacian Pyramid Networks

    Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, and Ming- Hsuan Yang. Fast and accurate image super-resolution with deep laplacian pyramid networks.CoRR, abs/1710.01992,

  44. [44]

    Fast and accurate image super-resolution with deep laplacian pyramid networks.IEEE transactions on pattern analysis and machine intelligence, 41(11):2599– 2613, 2018

    Wei-Sheng Lai, Jia-Bin Huang, Narendra Ahuja, and Ming- Hsuan Yang. Fast and accurate image super-resolution with deep laplacian pyramid networks.IEEE transactions on pattern analysis and machine intelligence, 41(11):2599– 2613, 2018. 3

  45. [45]

    The First Challenge on Mobile Real- World Image Super-Resolution at NTIRE 2026: Bench- mark Results and Method Overview

    Jiatong Li, Zheng Chen, Kai Liu, Jingkai Wang, Zihan Zhou, Xiaoyang Liu, Libo Zhu, Radu Timofte, Yulun Zhang, et al. The First Challenge on Mobile Real- World Image Super-Resolution at NTIRE 2026: Bench- mark Results and Method Overview . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  46. [46]

    NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Mod- els: Datasets, Methods and Results

    Xin Li, Jiachao Gong, Xijun Wang, Shiyao Xiong, Bingchen Li, Suhang Yao, Chao Zhou, Zhibo Chen, Radu Timofte, et al. NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Mod- els: Datasets, Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  47. [47]

    NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual- Focused Images: Methods and Results

    Xin Li, Yeying Jin, Suhang Yao, Beibei Lin, Zhaoxin Fan, Wending Yan, Xin Jin, Zongwei Wu, Bingchen Li, Peishu Shi, Yufei Yang, Yu Li, Zhibo Chen, Bihan Wen, Robby Tan, Radu Timofte, et al. NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual- Focused Images: Methods and Results . InProceedings of the IEEE/CVF Conference on Computer ...

  48. [48]

    Flow matching for generative modeling

    Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, and Matt Le. Flow matching for generative modeling. In The Thirteenth International Conference on Learning Representations, 2023. 5, 19

  49. [49]

    The First Challenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview

    Kai Liu, Haoyang Yue, Zeli Lin, Zheng Chen, Jingkai Wang, Jue Gong, Radu Timofte, Yulun Zhang, et al. The First Challenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview . InProceedings of the IEEE/CVF 10 Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  50. [50]

    Multi-level wavelet-cnn for image restora- tion

    Pengju Liu, Hongzhi Zhang, Kai Zhang, Liang Lin, and Wangmeng Zuo. Multi-level wavelet-cnn for image restora- tion. InProceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 773–782,

  51. [51]

    3D Restoration and Reconstruction in Adverse Conditions: RealX3D Challenge Results

    Shuhong Liu, Ziteng Cui, Chenyu Bao, Xuangeng Chu, Lin Gu, Bin Ren, Radu Timofte, Marcos V . Conde, et al. 3D Restoration and Reconstruction in Adverse Conditions: Re- alX3D Challenge Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  52. [52]

    Flow straight and fast: Learning to generate and transfer data with rectified flow, 2022

    Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow, 2022. 8, 22

  53. [53]

    NTIRE 2026 X- AIGC Quality Assessment Challenge: Methods and Results

    Xiaohong Liu, Xiongkuo Min, Guangtao Zhai, Qiang Hu, Jiezhang Cao, Yu Zhou, Wei Sun, Farong Wen, Zitong Xu, Yingjie Zhou, Huiyu Duan, Lu Liu, Jiarui Wang, Siqi Luo, Chunyi Li, Li Xu, Zicheng Zhang, Yue Shi, Yubo Wang, Minghong Zhang, Chunchao Guo, Zhichao Hu, Mingtao Chen, Xiele Wu, Xin Ma, Zhaohe Lv, Yuanhao Xue, Jiaqi Wang, Xinxing Sha, Radu Timofte, et...

  54. [54]

    KAN: Kolmogorov-Arnold Networks

    Ziming Liu, Yixuan Wang, Sachin Vaidya, Fabian Ruehle, James Halverson, Marin Soljacic, Thomas Y . Hou, and Max Tegmark. Kan: Kolmogorov-arnold networks.ArXiv, abs/2404.19756, 2024. 7

  55. [55]

    SGDR: Stochastic Gradient Descent with Warm Restarts

    Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts.arXiv preprint arXiv:1608.03983, 2016. 16

  56. [56]

    Decoupled Weight Decay Regularization

    Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization.arXiv preprint arXiv:1711.05101, 2017. 17, 23

  57. [57]

    Decoupled weight de- cay regularization

    Ilya Loshchilov and Frank Hutter. Decoupled weight de- cay regularization. InInternational Conference on Learn- ing Representations (ICLR), 2019. Poster. 19

  58. [58]

    MBLLEN: low-light image/video enhancement using cnns

    Feifan Lv, Feng Lu, Jianhua Wu, and Chongsoon Lim. MBLLEN: low-light image/video enhancement using cnns. InBMVC, page 220. BMV A Press, 2018. 7

  59. [59]

    A theory for multiresolution signal de- composition: the wavelet representation.IEEE transactions on pattern analysis and machine intelligence, 11(7):674– 693, 2002

    Stephane G Mallat. A theory for multiresolution signal de- composition: the wavelet representation.IEEE transactions on pattern analysis and machine intelligence, 11(7):674– 693, 2002. 8

  60. [60]

    NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results

    Andrey Moskalenko, Alexey Bryncev, Ivan Kosmynin, Kira Shilovskaya, Mikhail Erofeev, Dmitry Vatolin, Radu Timofte, et al. NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  61. [61]

    Posterior-mean rectified flow: Towards minimum MSE photo-realistic image restoration

    Guy Ohayon, Tomer Michaeli, and Michael Elad. Posterior-mean rectified flow: Towards minimum MSE photo-realistic image restoration. InThe Thirteenth Inter- national Conference on Learning Representations, 2025. 5

  62. [62]

    NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Results

    Hyunhee Park, Eunpil Park, Sangmin Lee, Radu Timofte, et al. NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Results . InProceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  63. [63]

    NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results

    Georgy Perevozchikov, Daniil Vladimirov, Radu Timofte, et al. NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results . InProceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  64. [64]

    FiLM: Visual reasoning with a general conditioning layer

    Ethan Perez, Florian Strub, Harm de Vries, Vincent Du- moulin, and Aaron Courville. FiLM: Visual reasoning with a general conditioning layer. InProceedings of the AAAI Conference on Artificial Intelligence, 2018. 6

  65. [65]

    NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Chal- lenge: Professional Image Quality Assessment (Track 1)

    Guanyi Qin, Jie Liang, Bingbing Zhang, Lishen Qu, Ya-nan Guan, Hui Zeng, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Chal- lenge: Professional Image Quality Assessment (Track 1) . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  66. [66]

    The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results

    Xingyu Qiu, Yuqian Fu, Jiawei Geng, Bin Ren, Jiancheng Pan, Zongwei Wu, Hao Tang, Yanwei Fu, Radu Timo- fte, Nicu Sebe, Mohamed Elhoseiny, et al. The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  67. [67]

    NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track2)

    Lishen Qu, Yao Liu, Jie Liang, Hui Zeng, Wen Dai, Ya-nan Guan, Guanyi Qin, Shihao Zhou, Jufeng Yang, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track2) . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  68. [68]

    The Eleventh NTIRE 2026 Efficient Super- Resolution Challenge Report

    Bin Ren, Hang Guo, Yan Shu, Jiaqi Ma, Ziteng Cui, Shuhong Liu, Guofeng Mei, Lei Sun, Zongwei Wu, Fa- had Shahbaz Khan, Salman Khan, Radu Timofte, Yawei Li, et al. The Eleventh NTIRE 2026 Efficient Super- Resolution Challenge Report . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  69. [69]

    High-resolution image synthesis with latent diffusion models

    Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bj ¨orn Ommer. High-resolution image synthesis with latent diffusion models. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10684–10695, 2022. 5, 18

  70. [70]

    Beyond the ground truth: Enhanced supervision for image restoration.arXiv preprint arXiv:2512.03932, 2025

    Donghun Ryou, Inju Ha, Sanghyeok Chu, and Bohyung Han. Beyond the ground truth: Enhanced supervision for image restoration.arXiv preprint arXiv:2512.03932, 2025. 4, 18

  71. [71]

    Conde, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al

    Tim Seizinger, Florin-Alexandru Vasluianu, Marcos V . Conde, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. The First Controllable Bokeh Render- ing Challenge at NTIRE 2026 . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2 11

  72. [72]

    Illumi- nating darkness: Learning to enhance low-light images in- the-wild

    SMA Sharif, Abdur Rehman, Zain Ul Abidin, Fayaz Ali Dharejo, Radu Timofte, and Rizwan Ali Naqvi. Illumi- nating darkness: Learning to enhance low-light images in- the-wild. InProceedings of the IEEE/CVF Winter Con- ference on Applications of Computer Vision, pages 2263– 2272, 2026. 1, 2, 5, 8, 18, 19, 23

  73. [73]

    Outrageously large neural networks: The sparsely-gated mixture-of-experts layer

    Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V . Le, Geoffrey E. Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely- gated mixture-of-experts layer. InProceedings of the 5th International Conference on Learning Representations (ICLR), 2017. 5

  74. [74]

    Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network

    Wenzhe Shi, Jose Caballero, Ferenc Husz ´ar, Johannes Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1874–1883, 2016. 7

  75. [75]

    Very Deep Convolutional Networks for Large-Scale Image Recognition

    Karen Simonyan and Andrew Zisserman. Very deep convo- lutional networks for large-scale image recognition.arXiv preprint arXiv:1409.1556, 2014. 3

  76. [76]

    Very deep convolutional networks for large-scale image recognition

    Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. InInternational Conference on Learning Representations (ICLR), 2015. 19

  77. [77]

    The Third Challenge on Image Denoising at NTIRE 2026: Methods and Results

    Lei Sun, Hang Guo, Bin Ren, Shaolin Su, Xian Wang, Danda Pani Paudel, Luc Van Gool, Radu Timofte, Yawei Li, et al. The Third Challenge on Image Denoising at NTIRE 2026: Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  78. [78]

    The Second Challenge on Event-Based Image Deblurring at NTIRE 2026: Methods and Results

    Lei Sun, Weilun Li, Xian Wang, Zhendong Li, Letian Shi, Dannong Xu, Deheng Zhang, Mengshun Hu, Shuang Guo, Shaolin Su, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. The Second Challenge on Event-Based Image Deblurring at NTIRE 2026: Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Wor...

  79. [79]

    NTIRE 2026 The First Challenge on Blind Computational Aberration Correction: Methods and Results

    Lei Sun, Xiaolong Qian, Qi Jiang, Xian Wang, Yao Gao, Kailun Yang, Kaiwei Wang, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. NTIRE 2026 The First Challenge on Blind Computational Aberration Correction: Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  80. [80]

    Restoring images in adverse weather con- ditions via histogram transformer

    Shangquan Sun, Wenqi Ren, Xinwei Gao, Rui Wang, and Xiaochun Cao. Restoring images in adverse weather con- ditions via histogram transformer. InECCV (22), pages 111–129. Springer, 2024. 4

Showing first 80 references.