NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track 2)
Pith reviewed 2026-05-10 17:39 UTC · model grok-4.3
The pith
A new benchmark for fusing multi-exposure images in moving scenes shows that the top-ranked methods remove more ghosting and recover finer detail than prior techniques.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper establishes a realistic benchmark for multi-exposure image fusion in dynamic scenes by releasing sequences that contain both exposure variation and motion-induced misalignment, then shows that the highest-ranked fusion methods surpass previous techniques in artifact removal and detail recovery, as measured by PSNR, SSIM, and LPIPS and by human perceptual review.
What carries the argument
The RAIM-HDR dataset of 200 multi-exposure sequences (100 training, 100 test) captured under handheld conditions, together with a leaderboard that combines PSNR, SSIM, and LPIPS with human perceptual evaluation of the submissions.
If this is right
- Standardized comparison becomes possible for algorithms that must jointly solve alignment and exposure fusion.
- Efficiency and reproducibility requirements push solutions toward practical use on mobile devices.
- Consumer HDR photography in handheld scenarios can rely on fewer manual corrections or tripod use.
Where Pith is reading between the lines
- The public release may speed development of networks that perform joint motion compensation and tone mapping on bracketed inputs.
- Performance gains on this benchmark could extend to related tasks such as video HDR or deghosting in burst photography.
- Future extensions might add longer sequences or more extreme lighting to test whether current top methods scale beyond the current test conditions.
Load-bearing premise
The introduced sequences and evaluation protocol accurately reproduce the misalignment and ghosting problems that occur in real handheld multi-exposure photography.
What would settle it
Apply the winning methods to new handheld bracketed sequences captured with different cameras and motion patterns outside the released 200 sequences; a large drop in artifact removal and detail scores relative to the challenge test set would show the benchmark does not capture the full range of real-world difficulties.
Original abstract
This paper presents NTIRE 2026, the 3rd Restore Any Image Model (RAIM) challenge on multi-exposure image fusion in dynamic scenes. We introduce a benchmark that targets a practical yet difficult HDR imaging setting, where exposure bracketing must be fused under scene motion, illumination variation, and handheld camera jitter. The challenge data contains 100 training sequences with 7 exposure levels and 100 test sequences with 5 exposure levels, reflecting real-world scenarios that frequently cause misalignment and ghosting artifacts. We evaluate submissions with a leaderboard score derived from PSNR, SSIM, and LPIPS, while also considering perceptual quality, efficiency, and reproducibility during the final review. This track attracted 114 participating teams and received 987 submissions. The winning methods significantly improved the ability to remove artifacts from multi-exposure fusion and recover fine details. The dataset and the code of each team can be found at the repository: https://github.com/qulishen/RAIM-HDR.
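Of the three leaderboard metrics named in the abstract, PSNR is the simplest to state. A minimal sketch, assuming float images in [0, 1] (the challenge's exact implementation is not specified in this report):

```python
import numpy as np

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: a fused result whose pixels are uniformly dimmed by 10%
rng = np.random.default_rng(0)
ground_truth = rng.random((64, 64, 3))
fused = 0.9 * ground_truth
score = psnr(ground_truth, fused)
```

SSIM and LPIPS are structural and learned perceptual metrics, respectively, and need dedicated implementations (e.g. a structural-similarity routine and a pretrained network); PSNR alone is shown here because it is fully self-contained.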
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. This manuscript reports on the NTIRE 2026 RAIM Challenge Track 2 for multi-exposure image fusion in dynamic scenes. It describes a new benchmark with 100 training sequences (7 exposure levels) and 100 test sequences (5 exposure levels) that incorporate scene motion, illumination variation, and handheld jitter. Submissions are evaluated via a leaderboard score derived from PSNR, SSIM, and LPIPS, with additional review for perceptual quality, efficiency, and reproducibility. The challenge attracted 114 teams and 987 submissions; the paper states that the winning methods significantly advanced artifact removal and fine-detail recovery. The dataset and all team codes are released at https://github.com/qulishen/RAIM-HDR.
Significance. If the test set and protocol are representative, the work supplies a needed public benchmark for a practically important but under-served HDR setting. The large participation and public code release are clear strengths that will support future reproducible research. The reported leaderboard gains indicate measurable progress on artifact handling, but the long-term value hinges on whether the improvements generalize beyond the specific test sequences.
Major comments (1)
- The central claim that winning methods 'significantly improved the ability to remove artifacts ... and recover fine details' is stated in the abstract and results summary without any quantitative breakdown (e.g., per-metric deltas, per-scene-type analysis, or comparison against a fixed baseline). No table or figure supplies the raw scores or statistical significance that would allow readers to verify the magnitude or consistency of the improvement.
Minor comments (2)
- The exact formula or weighting used to combine PSNR, SSIM, and LPIPS into the final leaderboard score is not specified; adding this detail (perhaps in a dedicated evaluation subsection) would improve transparency.
- A short table listing the top three teams' individual metric scores (rather than only the composite leaderboard rank) would help readers assess trade-offs between fidelity and perceptual quality.
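As the first minor comment notes, the paper does not disclose how PSNR, SSIM, and LPIPS are combined. For illustration only, one plausible form is a weighted sum of normalized metrics; the weights, the PSNR normalization range, and the inversion of LPIPS below are all assumptions, not details from the paper:

```python
def leaderboard_score(psnr, ssim, lpips,
                      psnr_range=(20.0, 50.0),
                      weights=(0.4, 0.3, 0.3)):
    """Hypothetical composite score: NOT the challenge's actual formula.

    Maps PSNR onto [0, 1] over an assumed range, keeps SSIM as-is
    (already in [0, 1]), and inverts LPIPS (lower LPIPS is better).
    """
    w_p, w_s, w_l = weights
    lo, hi = psnr_range
    psnr_norm = min(max((psnr - lo) / (hi - lo), 0.0), 1.0)
    return w_p * psnr_norm + w_s * ssim + w_l * (1.0 - lpips)

# Example: PSNR 35 dB, SSIM 0.95, LPIPS 0.08 under the assumed weighting
composite = leaderboard_score(35.0, 0.95, 0.08)
```

Publishing the real formula, as the comment requests, would let participants reproduce ranks exactly rather than guess at a weighting like this one.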
Simulated Author's Rebuttal
We thank the referee for the positive assessment of the challenge's significance and for the constructive major comment. We agree that the claim regarding improvements by winning methods would benefit from explicit quantitative support and will revise the manuscript to include it.
Point-by-point responses
Referee: The central claim that winning methods 'significantly improved the ability to remove artifacts ... and recover fine details' is stated in the abstract and results summary without any quantitative breakdown (e.g., per-metric deltas, per-scene-type analysis, or comparison against a fixed baseline). No table or figure supplies the raw scores or statistical significance that would allow readers to verify the magnitude or consistency of the improvement.
Authors: We acknowledge the validity of this observation. The current manuscript reports the final leaderboard ranking and top scores but does not include a dedicated breakdown table with per-metric deltas, a fixed baseline (such as naive exposure averaging or a prior state-of-the-art method), or per-scene-type analysis. We will add a new table in the revised version that lists PSNR, SSIM, and LPIPS for the top three teams alongside a simple baseline, reports absolute and relative improvements, and includes a brief note on consistency across the 100 test sequences. This addition will directly address the request for verifiable quantitative evidence while remaining within the page limits.
Revision: yes
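Fixed baselines of the kind the response mentions are cheap to implement. A minimal sketch of two generic options, naive per-pixel averaging and the well-exposedness weighting used in Mertens-style exposure fusion; neither is the challenge's actual baseline, and both ignore the motion alignment that dynamic scenes require:

```python
import numpy as np

def naive_exposure_average(stack):
    """Fuse a bracketed stack of shape (N, H, W, C), values in [0, 1],
    by simple per-pixel averaging across the N exposures."""
    return np.mean(np.asarray(stack, dtype=np.float64), axis=0)

def well_exposedness_fusion(stack, sigma=0.2):
    """Weight each exposure per pixel by closeness to mid-gray (0.5),
    the well-exposedness term of Mertens-style exposure fusion."""
    stack = np.asarray(stack, dtype=np.float64)
    weights = np.exp(-((stack - 0.5) ** 2) / (2.0 * sigma ** 2))
    weights /= np.sum(weights, axis=0, keepdims=True)  # normalize over N
    return np.sum(weights * stack, axis=0)

# Three synthetic exposures of one scene: under-, mid-, and over-exposed
rng = np.random.default_rng(1)
scene = rng.random((32, 32, 3))
stack = np.stack([np.clip(scene * g, 0.0, 1.0) for g in (0.5, 1.0, 2.0)])
fused = well_exposedness_fusion(stack)
```

Reporting the winning methods' per-metric deltas against even a weak baseline like this would make the claimed improvements verifiable.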
Circularity Check
No significant circularity; this is a descriptive challenge report with no derivations.
Full rationale
This is a standard competition summary paper that introduces a benchmark dataset (100 training and 100 test sequences), an evaluation protocol based on PSNR/SSIM/LPIPS, and reports leaderboard results from 114 teams. No equations, predictions, fitted parameters, or derivation chains exist. The claim that winning methods improved artifact removal follows directly from external submissions on the released test data, with no self-referential reductions or load-bearing self-citations. The paper is fully self-contained as a factual report.