pith. machine review for the scientific record.

arxiv: 2604.09030 · v1 · submitted 2026-04-10 · 💻 cs.CV

Recognition: unknown

NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track 2)

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 17:39 UTC · model grok-4.3

classification 💻 cs.CV
keywords multi-exposure image fusion · dynamic scenes · HDR imaging · ghosting artifacts · benchmark dataset · artifact removal · image restoration

The pith

A new benchmark for fusing multi-exposure images in moving scenes shows that the top challenge methods remove more ghosting and recover finer detail than earlier approaches.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces a benchmark for fusing bracketed exposures into HDR images under realistic conditions of camera jitter and scene motion. It supplies 100 training sequences with seven exposure levels and 100 test sequences with five levels, all drawn from handheld captures that produce misalignment and ghosting. Submissions are ranked by a composite score of PSNR, SSIM, and LPIPS, with additional checks for perceptual quality, efficiency, and reproducibility. The winning entries demonstrate measurable progress over earlier approaches in suppressing fusion artifacts while preserving fine scene structure. The dataset and participant code are released publicly to support further work.
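Of the three fidelity metrics in the protocol, PSNR is the simplest to state. A minimal sketch of the standard definition (not the challenge's evaluation code; the 50 dB behavior and the toy arrays below are ours):

```python
import numpy as np

def psnr(reference: np.ndarray, fused: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a ground-truth HDR frame
    and a fused result, both as float arrays scaled to [0, peak]."""
    mse = np.mean((reference.astype(np.float64) - fused.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
gt = rng.random((64, 64, 3))                                  # stand-in ground truth
noisy = np.clip(gt + rng.normal(0.0, 0.01, gt.shape), 0, 1)   # lightly perturbed fusion
assert psnr(gt, gt) == float("inf")
assert psnr(gt, noisy) > 30.0  # a small perturbation keeps PSNR high
```

SSIM and LPIPS follow the same pattern of comparing the fused output against the reference, but capture structural and learned perceptual similarity respectively rather than raw pixel error.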

Core claim

The paper establishes a realistic benchmark for multi-exposure image fusion in dynamic scenes by releasing sequences that contain both exposure variation and motion-induced misalignment. It then shows that the highest-ranked fusion methods achieve better artifact removal and detail recovery than previous techniques, measured on PSNR, SSIM, LPIPS, and perceptual review.

What carries the argument

The RAIM-HDR dataset of 200 multi-exposure sequences captured under handheld conditions, scored through a leaderboard that combines PSNR, SSIM, and LPIPS with human perceptual evaluation.

If this is right

  • Standardized comparison becomes possible for algorithms that must jointly solve alignment and exposure fusion.
  • Efficiency and reproducibility requirements push solutions toward practical use on mobile devices.
  • Consumer HDR photography in handheld scenarios can rely on fewer manual corrections or tripod use.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The public release may speed development of networks that perform joint motion compensation and tone mapping on bracketed inputs.
  • Performance gains on this benchmark could extend to related tasks such as video HDR or deghosting in burst photography.
  • Future extensions might add longer sequences or more extreme lighting to test whether today's top methods scale beyond the released test conditions.

Load-bearing premise

The introduced sequences and evaluation protocol accurately reproduce the misalignment and ghosting problems that occur in real handheld multi-exposure photography.

What would settle it

Apply the winning methods to new handheld bracketed sequences, captured with different cameras and motion patterns than the released 200 sequences. A large drop in artifact-removal and detail scores relative to the challenge test set would show the benchmark does not capture the full range of real-world difficulty.
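Such a check could be scored as a per-metric relative drop between the challenge test set and the new captures. A small sketch (the scores below are hypothetical, not from the paper):

```python
def generalization_drop(challenge: dict, new: dict) -> dict:
    """Relative per-metric drop when a method is re-scored on unseen handheld
    captures; positive values mean the method does worse off-benchmark.
    Assumes higher-is-better metrics (PSNR, SSIM); LPIPS would need a sign flip."""
    return {m: (challenge[m] - new[m]) / challenge[m] for m in challenge}

# Hypothetical scores for one winning method (PSNR in dB, SSIM in [0, 1]).
drop = generalization_drop({"psnr": 42.0, "ssim": 0.97},
                           {"psnr": 33.6, "ssim": 0.87})
assert round(drop["psnr"], 2) == 0.2  # a 20% PSNR drop would flag poor coverage
```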

Figures

Figures reproduced from arXiv: 2604.09030 by Bin Chen, Bo Zhang, Guanyi Qin, Guoyi Xu, He Xu, Hui Zeng, Jiachen Tu, Jiacong Tang, Jiajia Liu, Jiannan Lin, Jie Liang, Juan Wang, Jufeng Yang, Lei Qi, Lei Zhang, Lishen Qu, Qingsen Yan, Qinquan Gao, Radu Timofte, Shihang Li, Shihao Zhou, Song Gao, Sunhan Xu, Tao Hu, Tong Tong, Wanjie Sun, Wen Dai, Xiaowen Ma, Xinyu Sun, Xiyuan Yuan, Ya-nan Guan, Yaokun Shi, Yao Liu, Yaoxin Jiang, Yuxu Chen.

Figure 1: Representative image sequences in RAIM MEF in dynamic scenes. The data includes various scenes and times…
Figure 2: Overview of the WHU-VIP method with the AFUNet…
Figure 3: Overview of the SHL team's proposed framework…
Figure 4: Overview of the nunucccb method. The nunucccb team adopts the HDR-Transformer framework based on the Context-Aware Vision Transformer [32]. The model contains two main components…
Figure 5: Overview of the untrafusion pipeline. The untrafusion team proposes an end-to-end reconstruction framework that explicitly models physical exposure priors and uses a scale-evolving training strategy to address noise sensitivity. The team uses only the official dataset provided by the challenge organizers and does not rely on external data.
Figure 7: TimeDiffiT architecture [51]. The doubly-corrupted input X̃_t^M and noise level σ_t enter the time-conditioned U-Net (encoder: orange; decoder: blue), producing the restored output X̂. The NTR team initializes the backbone from a pretrained masked diffusion autoencoder and then adapts it to the MEF task via supervised fine-tuning…
Figure 8: Visual comparison on representative examples from Test Stage 1. The compared methods show clear differences in structural…
Figure 9: Visual comparison on representative examples from Test Stage 2. This stage further highlights differences in motion handling…
read the original abstract

This paper presents NTIRE 2026, the 3rd Restore Any Image Model (RAIM) challenge on multi-exposure image fusion in dynamic scenes. We introduce a benchmark that targets a practical yet difficult HDR imaging setting, where exposure bracketing must be fused under scene motion, illumination variation, and handheld camera jitter. The challenge data contains 100 training sequences with 7 exposure levels and 100 test sequences with 5 exposure levels, reflecting real-world scenarios that frequently cause misalignment and ghosting artefacts. We evaluate submissions with a leaderboard score derived from PSNR, SSIM, and LPIPS, while also considering perceptual quality, efficiency, and reproducibility during the final review. This track attracted 114 participating teams and received 987 submissions. The winning methods significantly improved the ability to remove artifacts from multi-exposure fusion and recover fine details. The dataset and the code of each team can be found at the repository: https://github.com/qulishen/RAIM-HDR.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. This manuscript reports on the NTIRE 2026 RAIM Challenge Track 2 for multi-exposure image fusion in dynamic scenes. It describes a new benchmark with 100 training sequences (7 exposure levels) and 100 test sequences (5 exposure levels) that incorporate scene motion, illumination variation, and handheld jitter. Submissions are evaluated via a leaderboard score derived from PSNR, SSIM, and LPIPS, with additional review for perceptual quality, efficiency, and reproducibility. The challenge attracted 114 teams and 987 submissions; the paper states that the winning methods significantly advanced artifact removal and fine-detail recovery. The dataset and all team codes are released at https://github.com/qulishen/RAIM-HDR.

Significance. If the test set and protocol are representative, the work supplies a needed public benchmark for a practically important but under-served HDR setting. The large participation and public code release are clear strengths that will support future reproducible research. The reported leaderboard gains indicate measurable progress on artifact handling, but the long-term value hinges on whether the improvements generalize beyond the specific test sequences.

major comments (1)
  1. The central claim that winning methods 'significantly improved the ability to remove artifacts ... and recover fine details' is stated in the abstract and results summary without any quantitative breakdown (e.g., per-metric deltas, per-scene-type analysis, or comparison against a fixed baseline). No table or figure supplies the raw scores or statistical significance that would allow readers to verify the magnitude or consistency of the improvement.
minor comments (2)
  1. The exact formula or weighting used to combine PSNR, SSIM, and LPIPS into the final leaderboard score is not specified; adding this detail (perhaps in a dedicated evaluation subsection) would improve transparency.
  2. A short table listing the top three teams' individual metric scores (rather than only the composite leaderboard rank) would help readers assess trade-offs between fidelity and perceptual quality.
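The weighting the first minor comment asks about is indeed not in the paper. As a purely illustrative sketch, a composite score of the general shape the leaderboard describes could look like the following; every constant (the 50 dB PSNR ceiling, the 0.5/0.25/0.25 weights) is invented here, not taken from the challenge:

```python
def composite_score(psnr: float, ssim: float, lpips: float,
                    w_psnr: float = 0.5, w_ssim: float = 0.25,
                    w_lpips: float = 0.25) -> float:
    """Illustrative composite: normalize PSNR by an assumed 50 dB ceiling,
    keep SSIM as-is, and invert LPIPS (lower is better). The real challenge
    weighting is not published; all constants here are assumptions."""
    psnr_term = min(psnr, 50.0) / 50.0
    lpips_term = 1.0 - min(lpips, 1.0)
    return w_psnr * psnr_term + w_ssim * ssim + w_lpips * lpips_term

better = composite_score(psnr=42.0, ssim=0.98, lpips=0.05)
worse = composite_score(psnr=35.0, ssim=0.90, lpips=0.20)
assert better > worse  # higher fidelity and lower perceptual error rank higher
```

Publishing the actual normalization and weights would let participants verify that the composite does not quietly dominate on one metric.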

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for the positive assessment of the challenge's significance and for the constructive major comment. We agree that the claim regarding improvements by winning methods would benefit from explicit quantitative support and will revise the manuscript to include it.

read point-by-point responses
  1. Referee: The central claim that winning methods 'significantly improved the ability to remove artifacts ... and recover fine details' is stated in the abstract and results summary without any quantitative breakdown (e.g., per-metric deltas, per-scene-type analysis, or comparison against a fixed baseline). No table or figure supplies the raw scores or statistical significance that would allow readers to verify the magnitude or consistency of the improvement.

    Authors: We acknowledge the validity of this observation. The current manuscript reports the final leaderboard ranking and top scores but does not include a dedicated breakdown table with per-metric deltas, a fixed baseline (such as naive exposure averaging or a prior state-of-the-art method), or per-scene-type analysis. We will add a new table in the revised version that lists PSNR, SSIM, and LPIPS for the top three teams alongside a simple baseline, reports absolute and relative improvements, and includes a brief note on consistency across the 100 test sequences. This addition will directly address the request for verifiable quantitative evidence while remaining within the page limits. revision: yes
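The "naive exposure averaging" baseline the rebuttal mentions can be sketched as follows (our illustrative reading, not the organizers' code): divide each bracketed frame by its exposure time to estimate radiance, then average, with no alignment at all.

```python
import numpy as np

def naive_exposure_average(exposures, exposure_times):
    """Naive MEF baseline: scale each bracketed frame by the inverse of its
    exposure time to estimate scene radiance, then average. No motion
    compensation is performed, so dynamic scenes produce ghosting."""
    radiance = [img.astype(np.float64) / t
                for img, t in zip(exposures, exposure_times)]
    return np.mean(radiance, axis=0)

rng = np.random.default_rng(1)
scene = rng.random((32, 32, 3))                          # latent radiance in [0, 1]
times = [0.5, 1.0, 2.0]
bracket = [np.clip(scene * t, 0.0, 1.0) for t in times]  # simulated clipped exposures
fused = naive_exposure_average(bracket, times)
assert fused.shape == scene.shape
```

Against such a floor, the per-metric deltas of the winning methods would be directly interpretable.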

Circularity Check

0 steps flagged

No significant circularity; descriptive challenge report with no derivations

full rationale

This is a standard competition summary paper that introduces a benchmark dataset (100 training and 100 test sequences), an evaluation protocol based on PSNR/SSIM/LPIPS, and reports leaderboard results from 114 teams. No equations, predictions, fitted parameters, or derivation chains exist. The claim that winning methods improved artifact removal follows directly from external submissions on the released test data, with no self-referential reductions or load-bearing self-citations. The paper is fully self-contained as a factual report.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

This is a challenge description paper with no mathematical derivations, fitted parameters, axioms, or invented entities. It relies on established image quality metrics (PSNR, SSIM, LPIPS) from prior literature.

pith-pipeline@v0.9.0 · 5614 in / 952 out tokens · 99428 ms · 2026-05-10T17:39:37.409518+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

64 extracted references · 2 canonical work pages

  1. [1]

    NT-HAZE: A Benchmark Dataset for Realistic Night-time Image Dehazing

    Radu Ancuti, Codruta Ancuti, Radu Timofte, and Cosmin Ancuti. NT-HAZE: A Benchmark Dataset for Realistic Night-time Image Dehazing . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  2. [2]

    NTIRE 2026 Nighttime Image Dehazing Challenge Report

    Radu Ancuti, Alexandru Brateanu, Florin Vasluianu, Raul Balmez, Ciprian Orhei, Codruta Ancuti, Radu Timofte, Cosmin Ancuti, et al. NTIRE 2026 Nighttime Image Dehazing Challenge Report . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  3. [3]

    Retinex-mef: Retinex-based glare effects aware unsupervised multi-exposure image fusion

    Haowen Bai, Jiangshe Zhang, Zixiang Zhao, Lilun Deng, Yukun Cui, and Shuang Xu. Retinex-mef: Retinex-based glare effects aware unsupervised multi-exposure image fusion. InIEEE/CVF International Conference on Computer Vision (ICCV), pages 7251–7261, 2025. 1

  4. [4]

    Learning a deep single image contrast enhancer from multi-exposure images

    Jianrui Cai, Shuhang Gu, and Lei Zhang. Learning a deep single image contrast enhancer from multi-exposure images. IEEE Transactions on Image Processing (TIP), 27(4): 2049–2062, 2018. 1

  5. [5]

    NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild: Datasets, Results, and Methods

    Jie Cai, Kangning Yang, Zhiyuan Li, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild: Datasets, Results, and Methods . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  6. [6]

    Mask-guided spectral-wise transformer for efficient hyperspectral image reconstruction

    Yuanhao Cai, Jing Lin, Xiaowan Hu, Haoqian Wang, Xin Yuan, Yulun Zhang, Radu Timofte, and Luc Van Gool. Mask-guided spectral-wise transformer for efficient hyperspectral image reconstruction. InIEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. 4

  7. [7]

    Ultrafusion: Ultra high dynamic imaging using exposure fusion

    Zixuan Chen, Yujin Wang, Xin Cai, Zhiyuan You, Zheming Lu, Fan Zhang, Shi Guo, and Tianfan Xue. Ultrafusion: Ultra high dynamic imaging using exposure fusion. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16111–16121, 2025. 1

  8. [8]

    The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview

    Zheng Chen, Kai Liu, Jingkai Wang, Xianglong Yan, Jianze Li, Ziqing Zhang, Jue Gong, Jiatong Li, Lei Sun, Xiaoyang Liu, Radu Timofte, Yulun Zhang, et al. The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Works...

  9. [9]

    Improving image restoration by revisiting global information aggregation

    Xiaojie Chu, Liangyu Chen, Chengpeng Chen, and Xin Lu. Improving image restoration by revisiting global information aggregation. InEuropean Conference on Computer Vision (ECCV), pages 53–71, 2022. 6

  10. [10]

    Low Light Image Enhancement Challenge at NTIRE 2026

    George Ciubotariu, Sharif S M A, Abdur Rehman, Fayaz Ali, Rizwan Ali Naqvi, Marcos Conde, Radu Timofte, et al. Low Light Image Enhancement Challenge at NTIRE 2026. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  11. [11]

    High FPS Video Frame Interpolation Challenge at NTIRE 2026

    George Ciubotariu, Zhuyun Zhou, Yeying Jin, Zongwei Wu, Radu Timofte, et al. High FPS Video Frame Interpolation Challenge at NTIRE 2026 . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  12. [12]

    NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge Report

    Andrei Dumitriu, Aakash Ralhan, Florin Miron, Florin Tatui, Radu Tudor Ionescu, Radu Timofte, et al. NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge Report . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  13. [13]

    Photography Retouching Transfer, NTIRE 2026 Challenge: Report

    Omar Elezabi, Marcos V. Conde, Zongwei Wu, Yeying Jin, Radu Timofte, et al. Photography Retouching Transfer, NTIRE 2026 Challenge: Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  14. [14]

    Progressive growing of patch size: Curriculum learning for accelerated and improved medical image segmentation

    Stefan M Fischer, Johannes Kiechle, Laura Daza, Lina Felsner, Richard Osuala, Daniel M Lang, Karim Lekadir, Jan C Peeken, and Julia A Schnabel. Progressive growing of patch size: Curriculum learning for accelerated and improved medical image segmentation. arXiv preprint arXiv:2510.23241, 2025. 5

  15. [15]

    Dr.experts: Differential refinement of distortion-aware experts for blind image quality assessment

    Bohan Fu, Guanyi Qin, Fazhan Zhang, Zihao Huang, Mingxuan Li, and Runze Hu. Dr.experts: Differential refinement of distortion-aware experts for blind image quality assessment. InAAAI Conference on Artificial Intelligence (AAAI), 2026. 2

  16. [16]

    NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results

    Bochen Guan, Jinlong Li, Kangning Yang, Chuang Ke, Jie Cai, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  17. [17]

    NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3)

    Ya-nan Guan, Shaonan Zhang, Hang Guo, Yawen Wang, Xinying Fan, Jie Liang, Hui Zeng, Guanyi Qin, Lishen Qu, Tao Dai, Shu-Tao Xia, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3) . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  18. [18]

    NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild

    Aleksandr Gushchin, Khaled Abud, Ekaterina Shumitskaya, Artem Filippov, Georgii Bychkov, Sergey Lavrushkin, Mikhail Erofeev, Anastasia Antsiferova, Changsheng Chen, Shunquan Tan, Radu Timofte, Dmitriy Vatolin, et al. NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild . InProceedings of the IEEE/CVF Conference on Computer Vision and Pa...

  19. [19]

    Robust Deepfake Detection, NTIRE 2026 Challenge: Report

    Benedikt Hopf, Radu Timofte, et al. Robust Deepfake Detection, NTIRE 2026 Challenge: Report . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  20. [20]

    Meflut: Unsupervised 1d lookup tables for multi-exposure image fusion

    Ting Jiang, Chuan Wang, Xinpeng Li, Ru Li, Haoqiang Fan, and Shuaicheng Liu. Meflut: Unsupervised 1d lookup tables for multi-exposure image fusion. InIEEE/CVF International Conference on Computer Vision (ICCV), pages 10542–10551, 2023. 1

  21. [21]

    Deep high dynamic range imaging of dynamic scenes

    Nima Khademi Kalantari and Ravi Ramamoorthi. Deep high dynamic range imaging of dynamic scenes. ACM Transactions on Graphics, 36(4):144:1–144:12, 2017. 1

  22. [22]

    NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge

    Aleksei Khalin, Egor Ershov, Artem Panshin, Sergey Korchagin, Georgiy Lobarev, Arseniy Terekhin, Sofiia Dorogova, Amir Shamsutdinov, Yasin Mamedov, Bakhtiyar Khalfin, Bogdan Sheludko, Emil Zilyaev, Nikola Banić, Georgy Perevozchikov, Radu Timofte, et al. NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge. In Proceedings of the IEEE/CVF Conferen...

  23. [23]

    Safnet: Selective alignment fusion network for efficient hdr imaging

    Lingtong Kong, Bo Li, Yike Xiong, Hao Zhang, Hong Gu, and Jinwei Chen. Safnet: Selective alignment fusion network for efficient hdr imaging. InEuropean Conference on Computer Vision (ECCV), pages 256–273, 2024. 1

  24. [24]

    The First Challenge on Mobile Real-World Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview

    Jiatong Li, Zheng Chen, Kai Liu, Jingkai Wang, Zihan Zhou, Xiaoyang Liu, Libo Zhu, Radu Timofte, Yulun Zhang, et al. The First Challenge on Mobile Real-World Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  25. [25]

    Afunet: Cross-iterative alignment-fusion synergy for hdr reconstruction via deep unfolding paradigm

    Xinyue Li, Zhangkai Ni, and Wenhan Yang. Afunet: Cross-iterative alignment-fusion synergy for hdr reconstruction via deep unfolding paradigm. InIEEE/CVF International Conference on Computer Vision (ICCV), pages 10666–10675, 2025. 1, 3

  26. [26]

    NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Models: Datasets, Methods and Results

    Xin Li, Jiachao Gong, Xijun Wang, Shiyao Xiong, Bingchen Li, Suhang Yao, Chao Zhou, Zhibo Chen, Radu Timofte, et al. NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Models: Datasets, Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  27. [27]

    NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual-Focused Images: Methods and Results

    Xin Li, Yeying Jin, Suhang Yao, Beibei Lin, Zhaoxin Fan, Wending Yan, Xin Jin, Zongwei Wu, Bingchen Li, Peishu Shi, Yufei Yang, Yu Li, Zhibo Chen, Bihan Wen, Robby Tan, Radu Timofte, et al. NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual-Focused Images: Methods and Results . In Proceedings of the IEEE/CVF Conference on Computer ...

  28. [28]

    The First Challenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview

    Kai Liu, Haoyang Yue, Zeli Lin, Zheng Chen, Jingkai Wang, Jue Gong, Radu Timofte, Yulun Zhang, et al. The First Challenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  29. [29]

    Multi-level wavelet-cnn for image restoration

    Pengju Liu, Hongzhi Zhang, Kai Zhang, Liang Lin, and Wangmeng Zuo. Multi-level wavelet-cnn for image restoration. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 773–782, 2018. 6

  30. [30]

    3D Restoration and Reconstruction in Adverse Conditions: RealX3D Challenge Results

    Shuhong Liu, Ziteng Cui, Chenyu Bao, Xuangeng Chu, Lin Gu, Bin Ren, Radu Timofte, Marcos V. Conde, et al. 3D Restoration and Reconstruction in Adverse Conditions: RealX3D Challenge Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  31. [31]

    NTIRE 2026 X-AIGC Quality Assessment Challenge: Methods and Results

    Xiaohong Liu, Xiongkuo Min, Guangtao Zhai, Qiang Hu, Jiezhang Cao, Yu Zhou, Wei Sun, Farong Wen, Zitong Xu, Yingjie Zhou, Huiyu Duan, Lu Liu, Jiarui Wang, Siqi Luo, Chunyi Li, Li Xu, Zicheng Zhang, Yue Shi, Yubo Wang, Minghong Zhang, Chunchao Guo, Zhichao Hu, Mingtao Chen, Xiele Wu, Xin Ma, Zhaohe Lv, Yuanhao Xue, Jiaqi Wang, Xinxing Sha, Radu Timofte, et...

  32. [32]

    Ghost-free high dynamic range imaging with context-aware transformer

    Zhen Liu, Yinglong Wang, Bing Zeng, and Shuaicheng Liu. Ghost-free high dynamic range imaging with context-aware transformer. InEuropean Conference on Computer Vision (ECCV), pages 344–360, 2022. 1, 5

  33. [33]

    NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results

    Andrey Moskalenko, Alexey Bryncev, Ivan Kosmynin, Kira Shilovskaya, Mikhail Erofeev, Dmitry Vatolin, Radu Timofte, et al. NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  34. [34]

    NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Results

    Hyunhee Park, Eunpil Park, Sangmin Lee, Radu Timofte, et al. NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Results . In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  35. [35]

    NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results

    Georgy Perevozchikov, Daniil Vladimirov, Radu Timofte, et al. NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  36. [36]

    Data-efficient image quality assessment with attention-panel decoder

    Guanyi Qin, Runze Hu, Yutao Liu, Xiawu Zheng, Haotian Liu, Xiu Li, and Yan Zhang. Data-efficient image quality assessment with attention-panel decoder. InAAAI Conference on Artificial Intelligence (AAAI), pages 2091–2100, 2023. 2

  37. [37]

    NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Professional Image Quality Assessment (Track 1)

    Guanyi Qin, Jie Liang, Bingbing Zhang, Lishen Qu, Ya-nan Guan, Hui Zeng, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Professional Image Quality Assessment (Track 1). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  39. [39]

    The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results

    Xingyu Qiu, Yuqian Fu, Jiawei Geng, Bin Ren, Jiancheng Pan, Zongwei Wu, Hao Tang, Yanwei Fu, Radu Timofte, Nicu Sebe, Mohamed Elhoseiny, et al. The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  40. [40]

    NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track2)

    Lishen Qu, Yao Liu, Jie Liang, Hui Zeng, Wen Dai, Ya-nan Guan, Guanyi Qin, Shihao Zhou, Jufeng Yang, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track2) . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  41. [41]

    Optical flow estimation using a spatial pyramid network

    Anurag Ranjan and Michael J Black. Optical flow estimation using a spatial pyramid network. InIEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4161–4170, 2017. 5

  42. [42]

    The Eleventh NTIRE 2026 Efficient Super-Resolution Challenge Report

    Bin Ren, Hang Guo, Yan Shu, Jiaqi Ma, Ziteng Cui, Shuhong Liu, Guofeng Mei, Lei Sun, Zongwei Wu, Fahad Shahbaz Khan, Salman Khan, Radu Timofte, Yawei Li, et al. The Eleventh NTIRE 2026 Efficient Super-Resolution Challenge Report . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  43. [43]

    The First Controllable Bokeh Rendering Challenge at NTIRE 2026

    Tim Seizinger, Florin-Alexandru Vasluianu, Marcos V. Conde, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. The First Controllable Bokeh Rendering Challenge at NTIRE 2026. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  44. [44]

    A survey on image data augmentation for deep learning

    Connor Shorten and Taghi M Khoshgoftaar. A survey on image data augmentation for deep learning. Journal of Big Data, 6(1):60, 2019. 5

  45. [45]

    Towards real-world hdr video reconstruction: A large-scale benchmark dataset and a two-stage alignment network

    Yong Shu, Liquan Shen, Xiangyu Hu, Mengyao Li, and Zihao Zhou. Towards real-world hdr video reconstruction: A large-scale benchmark dataset and a two-stage alignment network. InIEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2879–2888, 2024. 1

  46. [46]

    Vision transformers for single image dehazing

    Yuda Song, Zhuqing He, Hui Qian, and Xin Du. Vision transformers for single image dehazing. IEEE Transactions on Image Processing (TIP), 32:1927–1941, 2023. 6

  47. [47]

    The Third Challenge on Image Denoising at NTIRE 2026: Methods and Results

    Lei Sun, Hang Guo, Bin Ren, Shaolin Su, Xian Wang, Danda Pani Paudel, Luc Van Gool, Radu Timofte, Yawei Li, et al. The Third Challenge on Image Denoising at NTIRE 2026: Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  48. [48]

    The Second Challenge on Event-Based Image Deblurring at NTIRE 2026: Methods and Results

    Lei Sun, Weilun Li, Xian Wang, Zhendong Li, Letian Shi, Dannong Xu, Deheng Zhang, Mengshun Hu, Shuang Guo, Shaolin Su, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. The Second Challenge on Event-Based Image Deblurring at NTIRE 2026: Methods and Results . In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Wo...

  49. [49]

    NTIRE 2026 The First Challenge on Blind Computational Aberration Correction: Methods and Results

    Lei Sun, Xiaolong Qian, Qi Jiang, Xian Wang, Yao Gao, Kailun Yang, Kaiwei Wang, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. NTIRE 2026 The First Challenge on Blind Computational Aberration Correction: Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  50. [50]

    Alignment-free hdr deghosting with semantics consistent transformer

    Steven Tel, Zongwei Wu, Yulun Zhang, Barthélémy Heyrman, Cédric Demonceaux, Radu Timofte, and Dominique Ginhac. Alignment-free hdr deghosting with semantics consistent transformer. In IEEE/CVF International Conference on Computer Vision (ICCV),

  51. [51]

    Seven ways to improve example-based single image super resolution

    Radu Timofte, Rasmus Rothe, and Luc Van Gool. Seven ways to improve example-based single image super resolution. InProceedings of the IEEE conference on computer vision and pattern recognition, pages 1865–1873,

  52. [52]

    Score-based self-supervised MRI denoising

    Jiachen Tu, Yaokun Shi, and Fan Lam. Score-based self-supervised MRI denoising. arXiv preprint arXiv:2505.05631, 2025. 6

  53. [53]

    Learning-Based Ambient Lighting Normalization: NTIRE 2026 Challenge Results and Findings

    Florin-Alexandru Vasluianu, Tim Seizinger, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Learning-Based Ambient Lighting Normalization: NTIRE 2026 Challenge Results and Findings. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2

  54. [54]

    Florin-Alexandru Vasluianu, Tim Seizinger, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Advances in Single-Image Shadow Removal: Results from the NTIRE 2026 Challenge. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  55. [55]

    Jingkai Wang, Jue Gong, Zheng Chen, Kai Liu, Jiatong Li, Yulun Zhang, Radu Timofte, et al. The Second Challenge on Real-World Face Restoration at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  56. [56]

    Longguang Wang, Yulan Guo, Yingqian Wang, Juncheng Li, Sida Peng, Ye Zhang, Radu Timofte, Minglin Chen, Yi Wang, Qibin Hu, Wenjie Lei, et al. NTIRE 2026 Challenge on 3D Content Super-Resolution: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  57. [57]

    Xintao Wang, Ke Yu, Chao Dong, and Chen Change Loy. Recovering realistic texture in image super-resolution by deep spatial feature transform. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 606–615, 2018.

  58. [58]

    Yingqian Wang, Zhengyu Liang, Fengyuan Zhang, Wending Zhao, Longguang Wang, Juncheng Li, Jungang Yang, Radu Timofte, Yulan Guo, et al. NTIRE 2026 Challenge on Light Field Image Super-Resolution: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  59. [59]

    Zhendong Wang, Xiaodong Cun, Jianmin Bao, Wengang Zhou, Jianzhuang Liu, and Houqiang Li. Uformer: A general u-shaped transformer for image restoration. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 17683–17693, 2022.

  60. [60]

    Jiebin Yan, Chenyu Tu, Qinghua Lin, Zongwei Wu, Weixia Zhang, Zhihua Wang, Peibei Cao, Yuming Fang, Xiaoning Liu, Zhuyun Zhou, Radu Timofte, et al. Efficient Low Light Image Enhancement: NTIRE 2026 Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  61. [61]

    Pierluigi Zama Ramirez, Fabio Tosi, Luigi Di Stefano, Radu Timofte, Alex Costanzino, Matteo Poggi, Samuele Salti, Stefano Mattoccia, et al. NTIRE 2026 Challenge on High-Resolution Depth of non-Lambertian Surfaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  62. [62]

    Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5728–5739, 2022.

  63. [63]

    Yan Zhong, Qiufang Ma, Zhen Wang, Tingting Jiang, Radu Timofte, et al. NTIRE 2026 Challenge Report on Anomaly Detection of Face Enhancement for UGC Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  64. [64]

    Wenbin Zou, Tianyi Liu, Kejun Wu, Huiping Zhuang, Zongwei Wu, Zhuyun Zhou, Radu Timofte, et al. NTIRE 2026 Challenge on Bitstream-Corrupted Video Restoration: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.