pith. machine review for the scientific record.

arxiv: 2605.02212 · v1 · submitted 2026-05-04 · 💻 cs.CV

Recognition: 2 theorem links

NTIRE 2026 Challenge on Efficient Low Light Image Enhancement: Methods and Results

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 19:30 UTC · model grok-4.3

classification 💻 cs.CV
keywords low-light image enhancement · efficient neural networks · mobile deployment · NTIRE challenge · image processing · lightweight models · computer vision · benchmark evaluation

The pith

The NTIRE 2026 challenge shows lightweight networks can improve low-light image quality while meeting strict mobile resource limits.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper reviews the NTIRE 2026 Efficient Low Light Image Enhancement challenge and the solutions submitted by participating teams. The challenge required networks to enhance images taken in dim conditions without exceeding the computation and memory budgets typical of mobile phones. The review evaluates the 17 submissions that provided detailed factsheets, out of 27 valid entries, and reports that the leading methods reached better quality scores at lower computational cost than earlier approaches. A reader concerned with real-device deployment learns which design choices produced the strongest practical results under those constraints.

Core claim

The paper states that the 27 valid team submissions, of which 17 supplied detailed factsheets, demonstrate measurable advances in the trade-off between enhancement quality and computational efficiency for low-light images on mobile hardware, as measured by the challenge benchmarks.

What carries the argument

The central mechanism is the challenge evaluation protocol that scores each submitted network on both perceptual image quality and explicit efficiency measures such as FLOPs, parameter count, and measured runtime on target mobile platforms.
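As a concrete illustration of the efficiency side of that protocol, the sketch below measures the two cheapest of those quantities, parameter count and wall-clock runtime, for an arbitrary PyTorch model. The toy network and the 256×256 input are assumptions for illustration only; the challenge's actual scoring script, FLOPs counter, and target hardware are not reproduced here.

    import time
    import torch
    import torch.nn as nn

    # Stand-in for a submitted enhancement network (hypothetical).
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1),
    ).eval()

    # Parameter count: one of the explicit efficiency measures.
    params = sum(p.numel() for p in model.parameters())

    x = torch.randn(1, 3, 256, 256)  # one synthetic low-light frame
    with torch.no_grad():
        for _ in range(5):            # warm-up runs before timing
            model(x)
        t0 = time.perf_counter()
        for _ in range(20):
            model(x)
        latency_ms = (time.perf_counter() - t0) / 20 * 1e3

    print(f"{params / 1e3:.1f}K params, {latency_ms:.2f} ms per frame on CPU")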

If this is right

  • Top methods combine architectural pruning and lightweight convolutions (see the sketch after this list) to cut computation while preserving detail recovery in dark regions.
  • The challenge supplies a public benchmark that future efficient enhancement work can use for direct comparison.
  • Practical mobile pipelines can now incorporate these networks without exceeding typical power and latency budgets.
  • Subsequent challenges can tighten the efficiency thresholds to drive further reductions in model size.
  • The results indicate that quality gains remain possible even after aggressive efficiency constraints are applied.
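The "lightweight convolutions" in the first bullet can be made concrete with a minimal PyTorch sketch of a depthwise-separable block, a standard trick for cutting convolution cost. The channel and kernel sizes here are illustrative assumptions, not values from any team's factsheet.

    import torch.nn as nn

    def count_params(m):
        return sum(p.numel() for p in m.parameters())

    # Standard 3x3 convolution: every output channel mixes all input channels.
    standard = nn.Conv2d(64, 64, kernel_size=3, padding=1)

    # Depthwise-separable variant: a per-channel 3x3 filter followed by a
    # 1x1 pointwise mix. Same receptive field, far fewer parameters.
    separable = nn.Sequential(
        nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64),  # depthwise
        nn.Conv2d(64, 64, kernel_size=1),                        # pointwise
    )

    print(count_params(standard), "vs", count_params(separable))
    # 36928 vs 4800 — roughly 7.7x fewer weights for this one layer.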

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Future work could test whether the same networks maintain their advantage when the input comes from actual phone sensors rather than the challenge dataset.
  • The efficiency numbers may guide hardware designers in deciding how much dedicated image-processing silicon is still required on mobile chips.
  • If the top methods generalize to video, they could support real-time low-light video on phones without frame drops.

Load-bearing premise

The test images and scoring rules used in the challenge accurately reflect the conditions and constraints that matter for real mobile cameras.

What would settle it

An independent test on actual mobile phones that finds the top-ranked challenge methods no longer lead in either quality or speed would show that the reported progress does not hold outside the challenge setting.

Figures

Figures reproduced from arXiv: 2605.02212 by Chenyu Tu, Jiebin Yan, Peibei Cao, Qinghua Lin, Radu Timofte, Weixia Zhang, Xiaoning Liu, Yuming Fang, Zhihua Wang, Zhuyun Zhou, Zongwei Wu.

Figure 1: The architecture of RetinexFormerRefine. view at source ↗

Figure 2: Architecture of Team CVPR TCD. Description: We propose a lightweight grayscale-guided cross-attention transformer network for efficient low-light image enhancement. view at source ↗

Figure 3: Architecture of Team S3. Description: We follow a two-stage paradigm widely adopted in low-light image enhancement, similar to RetinexFormer [7] and HVI-CIDNet [87], consisting of a preprocessing module and a lightweight restoration backbone. view at source ↗

Figure 4: Architecture of Team sun. Description: We adopt a compact Retinex-inspired encoder–decoder architecture augmented with Restormer-style local transformer blocks [6, 67, 68, 90]. view at source ↗

Figure 6: Architecture of Team NUDT DeepIter. The training objective is a weighted combination of multiple loss functions, including MS-SSIM, L1, perceptual, dark region, color, frequency, and SNR losses. The model is designed to be lightweight and efficient, containing approximately 450K parameters, enabling fast inference while maintaining high enhancement quality. view at source ↗
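The Figure 6 caption lists a weighted combination of loss terms. As a hedged illustration of how such a composite objective is typically assembled, the sketch below combines three of the named terms in PyTorch; the weights and the exact color/frequency formulations are assumptions, since the factsheet summary does not specify them.

    import torch
    import torch.nn.functional as F

    # Hypothetical weights; the team's actual values are not reported here.
    W = {"l1": 1.0, "color": 0.5, "freq": 0.1}

    def composite_loss(pred, target):
        """Weighted sum of three loss terms, for NCHW image batches."""
        l1 = F.l1_loss(pred, target)
        # One common "color" term: match per-channel means over each image.
        color = F.l1_loss(pred.mean(dim=(2, 3)), target.mean(dim=(2, 3)))
        # One common "frequency" term: L1 distance between FFT magnitudes.
        freq = F.l1_loss(torch.fft.rfft2(pred).abs(),
                         torch.fft.rfft2(target).abs())
        return W["l1"] * l1 + W["color"] * color + W["freq"] * freq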
Figure 8: Architecture of Team Xie Liu. view at source ↗

Figure 9: Architecture of Team JialuXu(IVC). view at source ↗

Figure 10: Architecture of Team Bustaaa. view at source ↗

Figure 11: Architecture of Team sysu 701. view at source ↗

Figure 12: Architecture of Team KLETech-CEVI. view at source ↗

Figure 13: Architecture of Team ShinNam!. HGM-Net overview: input RGB image → encoder (3 scales, 30/60/90 channels, skip connections) → bottleneck (depthwise residual blocks with SE + GC modules) → decoder (upsampling + skip fusion to full-resolution features) → fusion (final feature combination) → post-processing (deterministic presets for tone/color adjustment) → enhanced RGB output; a global router performs scene-level and dark-level routing with adaptive contro… view at source ↗

Figure 15: Architecture of Team Cidaut AI. view at source ↗
Original abstract

This paper presents a comprehensive review of the NTIRE 2026 Efficient Low Light Image Enhancement (E-LLIE) Challenge, highlighting the proposed solutions and final outcomes. This challenge focuses on mobile image enhancement under low-light conditions, aiming to design lightweight networks that improve enhancement quality while ensuring practical deployability under limited computational resources. A total of 207 participants registered, 27 teams submitted valid entries, and 17 teams ultimately provided valid factsheets. Based on these submissions, this paper provides a systematic evaluation of recent methods for E-LLIE, offering a comprehensive overview of state-of-the-art progress and demonstrating significant improvements in both performance and efficiency.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. This paper presents the NTIRE 2026 Challenge on Efficient Low Light Image Enhancement (E-LLIE), reporting participation from 207 registrants with 27 valid submissions and 17 factsheets. It offers a systematic evaluation of the submitted methods for lightweight low-light image enhancement suitable for mobile devices and claims significant improvements in both performance and efficiency.

Significance. If the claims hold, the paper provides a useful benchmark and overview of recent advances in efficient low-light enhancement, which is relevant for practical mobile applications. It aggregates insights from multiple teams, potentially accelerating progress in the field by highlighting effective lightweight architectures.

major comments (2)
  1. [Abstract] The abstract claims 'significant improvements in both performance and efficiency' without detailing the metrics used, the baselines compared against, or the test dataset characteristics. This omission is load-bearing for the paper's purpose as it leaves the central claim of state-of-the-art progress unsupported at the summary level.
  2. [Challenge Results] The efficiency claims are based on self-reported factsheet data (parameter count, FLOPs, simulated latency) without evidence of independent verification on target mobile hardware. This is a critical weakness for the assertion of 'practical deployability under limited computational resources', as real-world factors like memory bandwidth and quantization may not be accounted for.
minor comments (1)
  1. The manuscript would benefit from including a summary table of the top methods with their reported metrics for easier comparison.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback on our manuscript. We address each major comment below with clarifications and indicate the revisions we will make to improve the paper.

Point-by-point responses
  1. Referee: [Abstract] The abstract claims 'significant improvements in both performance and efficiency' without detailing the metrics used, the baselines compared against, or the test dataset characteristics. This omission is load-bearing for the paper's purpose as it leaves the central claim of state-of-the-art progress unsupported at the summary level.

    Authors: We agree that the abstract would benefit from additional specificity to better support the claims. While challenge overview papers often keep abstracts concise, we will revise it to explicitly mention the primary metrics (PSNR and SSIM for quality; parameter count, FLOPs, and reported latency for efficiency), note comparisons to prior NTIRE low-light challenges and standard lightweight baselines, and briefly characterize the test dataset used for evaluation. This change will be incorporated in the revised manuscript. revision: yes

  2. Referee: [Challenge Results] The efficiency claims are based on self-reported factsheet data (parameter count, FLOPs, simulated latency) without evidence of independent verification on target mobile hardware. This is a critical weakness for the assertion of 'practical deployability under limited computational resources', as real-world factors like memory bandwidth and quantization may not be accounted for.

    Authors: We acknowledge the reliance on self-reported factsheet data, which is the established protocol in NTIRE challenges to enable broad participation. Latency figures reflect team-reported measurements on representative mobile hardware rather than centralized independent verification, which was not feasible given the scale (27 submissions). We will add explicit language in the revised manuscript stating the self-reported nature of these metrics and include a dedicated discussion of limitations, including potential effects of quantization, memory bandwidth, and hardware variability. This will temper the deployability claims while preserving the overview value of the aggregated results. revision: partial
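The rebuttal names PSNR and SSIM as the primary quality metrics. For readers unfamiliar with them, here is a minimal PSNR implementation plus the standard scikit-image call for SSIM; the [0, 1] data range and channel-last layout are assumptions about how the challenge images are stored, not details taken from the paper.

    import numpy as np
    from skimage.metrics import structural_similarity

    def psnr(ref, test, peak=1.0):
        """Peak signal-to-noise ratio for float images in [0, peak]."""
        mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    # Toy H x W x 3 images in [0, 1] (assumed layout), for demonstration.
    ref = np.random.rand(64, 64, 3)
    test = np.clip(ref + 0.02 * np.random.randn(64, 64, 3), 0.0, 1.0)
    print(psnr(ref, test),
          structural_similarity(ref, test, channel_axis=-1, data_range=1.0))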

Circularity Check

0 steps flagged

Challenge report aggregates external submissions without internal derivations

full rationale

This is a standard NTIRE challenge report that compiles results and factsheets from 17 independent external teams. No equations, fitted parameters, predictions, or derivations appear in the manuscript. The central claims rest on tabulated participant submissions and standard challenge metrics rather than any self-referential construction or author-specific ansatz. Self-citations, if present, are incidental and not load-bearing for the reported outcomes.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is a descriptive challenge report containing no mathematical derivations, fitted parameters, axioms, or postulated entities; all content is empirical summary of external submissions.

pith-pipeline@v0.9.0 · 5444 in / 1041 out tokens · 19452 ms · 2026-05-08T19:30:32.774636+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

94 extracted references · 9 canonical work pages · 2 internal anchors

  1. [1] Radu Ancuti, Codruta Ancuti, Radu Timofte, and Cosmin Ancuti. NT-HAZE: A Benchmark Dataset for Realistic Night-time Image Dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  2. [2] Radu Ancuti, Alexandru Brateanu, Florin Vasluianu, Raul Balmez, Ciprian Orhei, Codruta Ancuti, Radu Timofte, Cosmin Ancuti, et al. NTIRE 2026 Nighttime Image Dehazing Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  3. [3] Mor Avi-Aharon, Assaf Arbelle, and Tammy Riklin Raviv. Hue-net: Intensity-based image-to-image translation with differentiable histogram loss functions. arXiv preprint arXiv:1912.06044, 2019.
  4. [4] Alexandru Brateanu, Raul Balmez, Adrian Avram, Ciprian Orhei, and Cosmin Ancuti. Lyt-Net: Lightweight YUV transformer-based network for low-light image enhancement. IEEE Signal Processing Letters, 2025.
  5. [5] Jie Cai, Kangning Yang, Zhiyuan Li, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild: Datasets, Results, and Methods. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  6. [6] Yuanhao Cai, Hao Bian, Jing Lin, Haoqian Wang, Radu Timofte, and Yulun Zhang. Retinexformer: One-stage retinex-based transformer for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12504–12513, 2023.
  7. [7] Yuanhao Cai, Hao Bian, Jing Lin, Haoqian Wang, Radu Timofte, and Yulun Zhang. RetinexFormer: One-stage retinex-based transformer for low-light image enhancement. In ICCV, 2023.
  8. [8] Yue Cao, Jiarui Xu, Stephen Lin, Fangyun Wei, and Han Hu. GCNet: Non-local networks meet squeeze-excitation networks and beyond. In ICCV Workshops, pages 0–0, 2019.
  9. [9] Pierre Charbonnier, Laure Blanc-Feraud, Gilles Aubert, and Michel Barlaud. Two deterministic half-quadratic regularization algorithms for computed imaging. In ICIP, pages 168–172. IEEE, 1994.
  10. [10] Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. In European Conference on Computer Vision, pages 17–33. Springer, 2022.
  11. [11] Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. In Proceedings of the European Conference on Computer Vision (ECCV), pages 17–33, 2022.
  12. [12] Wentao Chen, Yanyun Wu, Wenjing Yang, and Jiaying Liu. Retinexformer: One-stage retinex-based transformer for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 12504–12513, 2023.
  13. [13] Zheng Chen, Kai Liu, Jingkai Wang, Xianglong Yan, Jianze Li, Ziqing Zhang, Jue Gong, Jiatong Li, Lei Sun, Xiaoyang Liu, Radu Timofte, Yulun Zhang, et al. The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  14. [14] George Ciubotariu, Sharif S M A, Abdur Rehman, Fayaz Ali, Rizwan Ali Naqvi, Marcos Conde, Radu Timofte, et al. Low Light Image Enhancement Challenge at NTIRE 2026. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  15. [15] George Ciubotariu, Zhuyun Zhou, Yeying Jin, Zongwei Wu, Radu Timofte, et al. High FPS Video Frame Interpolation Challenge at NTIRE 2026. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  16. [16] Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. SuperPoint: Self-supervised interest point detection and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 224–236, 2018.
  17. [17] Kingma Diederik. Adam: A method for stochastic optimization, 2014.
  18. [18] Xiaohan Ding, Xiangyu Zhang, Ningning Ma, Jungong Han, Guiguang Ding, and Jian Sun. RepVGG: Making VGG-style convnets great again. In CVPR, pages 13733–13742, 2021.
  19. [19] Andrei Dumitriu, Aakash Ralhan, Florin Miron, Florin Tatui, Radu Tudor Ionescu, Radu Timofte, et al. NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  20. [20] Omar Elezabi, Marcos V. Conde, Zongwei Wu, Yeying Jin, Radu Timofte, et al. Photography Retouching Transfer, NTIRE 2026 Challenge: Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  21. [21] Rafael C. Gonzalez. Digital Image Processing. Pearson Education India, 2009.
  22. [22] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2024.
  23. [23] Bochen Guan, Jinlong Li, Kangning Yang, Chuang Ke, Jie Cai, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  24. [24] Ya-nan Guan, Shaonan Zhang, Hang Guo, Yawen Wang, Xinying Fan, Jie Liang, Hui Zeng, Guanyi Qin, Lishen Qu, Tao Dai, Shu-Tao Xia, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  25. [25] Chunle Guo, Chongyi Li, Jichang Guo, Chen Change Loy, Junhui Hou, Sam Kwong, and Runmin Cong. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1780–1789, 2020.
  26. [26] Aleksandr Gushchin, Khaled Abud, Ekaterina Shumitskaya, Artem Filippov, Georgii Bychkov, Sergey Lavrushkin, Mikhail Erofeev, Anastasia Antsiferova, Changsheng Chen, Shunquan Tan, Radu Timofte, Dmitriy Vatolin, et al. NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  27. [27] Jiang Hai, Zhu Xuan, Ren Yang, Yutong Hao, Fengzhu Zou, Fang Lin, and Songchen Han. R2RNet: Low-light image enhancement via real-low to real-normal network. Journal of Visual Communication and Image Representation, 90:103712, 2023.
  28. [28] Benedikt Hopf, Radu Timofte, et al. Robust Deepfake Detection, NTIRE 2026 Challenge: Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  29. [29] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, and Hartwig Adam. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 1314–1324, 2019.
  30. [30] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
  31. [31] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In CVPR, pages 7132–7141, 2018.
  32. [32] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694–711. Springer, 2016.
  33. [33] Aleksei Khalin, Egor Ershov, Artem Panshin, Sergey Korchagin, Georgiy Lobarev, Arseniy Terekhin, Sofiia Dorogova, Amir Shamsutdinov, Yasin Mamedov, Bakhtiyar Khalfin, Bogdan Sheludko, Emil Zilyaev, Nikola Banić, Georgy Perevozchikov, Radu Timofte, et al. NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  34. [35] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
  35. [36] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 2012.
  36. [37] Edwin H. Land. The retinex theory of color vision. Scientific American, 237(6):108–129, 1977.
  37. [38] Huan Li, Ao Liu, Wenyan Wen, Kaiyan Jiang, Yushuai Chen, and Pinle Liu. CPGA-Net: Curve point guided attention network for low-light image enhancement. In Proceedings of the IEEE International Conference on Multimedia and Expo (ICME), 2023.
  38. [39] Jiatong Li, Zheng Chen, Kai Liu, Jingkai Wang, Zihan Zhou, Xiaoyang Liu, Libo Zhu, Radu Timofte, Yulun Zhang, et al. The First Challenge on Mobile Real-World Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  39. [40] Xin Li, Jiachao Gong, Xijun Wang, Shiyao Xiong, Bingchen Li, Suhang Yao, Chao Zhou, Zhibo Chen, Radu Timofte, et al. NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Models: Datasets, Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  40. [41] Xin Li, Yeying Jin, Suhang Yao, Beibei Lin, Zhaoxin Fan, Wending Yan, Xin Jin, Zongwei Wu, Bingchen Li, Peishu Shi, Yufei Yang, Yu Li, Zhibo Chen, Bihan Wen, Robby Tan, Radu Timofte, et al. NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual-Focused Images: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  41. [42] Philipp Lindenberger, Paul-Edouard Sarlin, and Marc Pollefeys. LightGlue: Local Feature Matching at Light Speed. In ICCV, 2023.
  42. [43] Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, and Bryan Catanzaro. Image inpainting for irregular holes using partial convolutions. In Proceedings of the European Conference on Computer Vision (ECCV), pages 85–100, 2018.
  43. [44] Kai Liu, Haoyang Yue, Zeli Lin, Zheng Chen, Jingkai Wang, Jue Gong, Radu Timofte, Yulun Zhang, et al. The First Challenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  44. [45] Shuhong Liu, Ziteng Cui, Chenyu Bao, Xuangeng Chu, Lin Gu, Bin Ren, Radu Timofte, Marcos V. Conde, et al. 3D Restoration and Reconstruction in Adverse Conditions: RealX3D Challenge Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  45. [46] Xiaohong Liu, Xiongkuo Min, Guangtao Zhai, Qiang Hu, Jiezhang Cao, Yu Zhou, Wei Sun, Farong Wen, Zitong Xu, Yingjie Zhou, Huiyu Duan, Lu Liu, Jiarui Wang, Siqi Luo, Chunyi Li, Li Xu, Zicheng Zhang, Yue Shi, Yubo Wang, Minghong Zhang, Chunchao Guo, Zhichao Hu, Mingtao Chen, Xiele Wu, Xin Ma, Zhaohe Lv, Yuanhao Xue, Jiaqi Wang, Xinxing Sha, Radu Timofte, et al. NTIRE 2026 X-AIGC Quality Assessment Challenge: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  46. [47] Yu Long, Qinghua Lin, Zhihua Wang, Kai Zhang, Jianguo Zhang, and Yuming Fang. Enhancing low-light images: A synthetic data perspective on practical and generalizable solutions. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 5784–5792, 2025.
  47. [49] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
  48. [50] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2019.
  49. [51] Xudong Mao, Qing Li, Haoran Xie, Raymond Y.K. Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2794–2802, 2017.
  50. [52] Andrey Moskalenko, Alexey Bryncev, Ivan Kosmynin, Kira Shilovskaya, Mikhail Erofeev, Dmitry Vatolin, Radu Timofte, et al. NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  51. [53] Hyunhee Park, Eunpil Park, Sangmin Lee, Radu Timofte, et al. NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  52. [54] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32, 2019.
  53. [55] Georgy Perevozchikov, Daniil Vladimirov, Radu Timofte, et al. NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  54. [56] Guanyi Qin, Jie Liang, Bingbing Zhang, Lishen Qu, Ya-nan Guan, Hui Zeng, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Professional Image Quality Assessment (Track 1). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  55. [57] Xingyu Qiu, Yuqian Fu, Jiawei Geng, Bin Ren, Jiancheng Pan, Zongwei Wu, Hao Tang, Yanwei Fu, Radu Timofte, Nicu Sebe, Mohamed Elhoseiny, et al. The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  56. [58] Lishen Qu, Yao Liu, Jie Liang, Hui Zeng, Wen Dai, Ya-nan Guan, Guanyi Qin, Shihao Zhou, Jufeng Yang, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track 2). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  57. [59] Bin Ren, Hang Guo, Yan Shu, Jiaqi Ma, Ziteng Cui, Shuhong Liu, Guofeng Mei, Lei Sun, Zongwei Wu, Fahad Shahbaz Khan, Salman Khan, Radu Timofte, Yawei Li, et al. The Eleventh NTIRE 2026 Efficient Super-Resolution Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  58. [60] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
  59. [61] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, pages 234–241, 2015.
  60. [62] Tim Seizinger, Florin-Alexandru Vasluianu, Marcos V. Conde, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. The First Controllable Bokeh Rendering Challenge at NTIRE 2026. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  61. [63] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
  62. [64] Lei Sun, Hang Guo, Bin Ren, Shaolin Su, Xian Wang, Danda Pani Paudel, Luc Van Gool, Radu Timofte, Yawei Li, et al. The Third Challenge on Image Denoising at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  63. [65] Lei Sun, Weilun Li, Xian Wang, Zhendong Li, Letian Shi, Dannong Xu, Deheng Zhang, Mengshun Hu, Shuang Guo, Shaolin Su, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. The Second Challenge on Event-Based Image Deblurring at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  64. [66] Lei Sun, Xiaolong Qian, Qi Jiang, Xian Wang, Yao Gao, Kailun Yang, Kaiwei Wang, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. NTIRE 2026 The First Challenge on Blind Computational Aberration Correction: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  65. [67] Shangquan Sun, Wenqi Ren, Xinwei Gao, Rui Wang, and Xiaochun Cao. Restoring images in adverse weather conditions via histogram transformer. In European Conference on Computer Vision, pages 111–129. Springer, 2024.
  66. [68] Shangquan Sun, Wenqi Ren, Jingyang Peng, Fenglong Song, and Xiaochun Cao. Di-Retinex: Digital-imaging retinex model for low-light image enhancement. International Journal of Computer Vision, 133(12):8293–8314, 2025.
  67. [69] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015.
  68. [70] Florin-Alexandru Vasluianu, Tim Seizinger, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Learning-Based Ambient Lighting Normalization: NTIRE 2026 Challenge Results and Findings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  69. [71] Florin-Alexandru Vasluianu, Tim Seizinger, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Advances in Single-Image Shadow Removal: Results from the NTIRE 2026 Challenge. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  70. [72] Jingkai Wang, Jue Gong, Zheng Chen, Kai Liu, Jiatong Li, Yulun Zhang, Radu Timofte, et al. The Second Challenge on Real-World Face Restoration at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  71. [73] Longguang Wang, Yulan Guo, Yingqian Wang, Juncheng Li, Sida Peng, Ye Zhang, Radu Timofte, Minglin Chen, Yi Wang, Qibin Hu, Wenjie Lei, et al. NTIRE 2026 Challenge on 3D Content Super-Resolution: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  72. [74] Xintao Wang, Liangbin Xie, Ke Yu, Kelvin C.K. Chan, Chen Change Loy, and Chao Dong. BasicSR: Open source image and video restoration toolbox. https://github.com/XPixelGroup/BasicSR, 2022.
  73. [75] Yingqian Wang, Zhengyu Liang, Fengyuan Zhang, Wending Zhao, Longguang Wang, Juncheng Li, Jungang Yang, Radu Timofte, Yulan Guo, et al. NTIRE 2026 Challenge on Light Field Image Super-Resolution: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
  74. [76] Zhou Wang, Eero P. Simoncelli, and Alan C. Bovik. Multi-scale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, pages 1398–1402. IEEE, 2003.
  75. [77] Zhou Wang, Alan C. Bovik, Hamid R. Sheikh, and Eero P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE TIP, 13(4):600–612, 2004.
  76. [78] Zhihua Wang, Yu Long, Qinghua Lin, Kai Zhang, Yazhu Zhang, Yuming Fang, Li Liu, and Xiaochun Cao. Towards realistic low-light image enhancement via ISP driven data modeling. arXiv preprint arXiv:2504.12204, 2025.
  77. [79] Zhihua Wang, Qinghua Lin, Feiyang Liu, Weixia Zhang, and Wei Zhou. Robust low-light image enhancement in the wild via data synthesis and generative diffusion prior. Pattern Recognition, page 113336, 2026.
  78. [80] Chen Wei, Wenjing Wang, Wenhan Yang, and Jiaying Liu. Deep retinex decomposition for low-light enhancement. In Proceedings of the British Machine Vision Conference (BMVC), pages 1–12, 2018.
  79. [81] Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, Qiong Yan, Xiongkuo Min, Guangtao Zhai, and Weisi Lin. Q-Align: Teaching LMMs for visual scoring via discrete text-defined levels. In Proceedings of the 41st International Conference on Machine Learning, pages 54015–54029, 2024.
  80. [82] Wenhui Wu, Jian Weng, Pingping Zhang, Xu Wang, Wenhan Yang, and Jianmin Jiang. URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5901–5910, 2022.

Showing first 80 references.