pith. machine review for the scientific record.

arxiv: 2604.10321 · v2 · submitted 2026-04-11 · 💻 cs.CV

Recognition: unknown

NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild: Datasets, Results, and Methods

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 15:25 UTC · model grok-4.3

classification 💻 cs.CV
keywords single image reflection removal · image restoration · NTIRE challenge · real-world dataset · OpenRR-5k · reflection removal benchmark · computer vision

The pith

The NTIRE 2026 challenge supplies a new real-world dataset and shows top methods improve reflection removal over prior work.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper reviews the NTIRE 2026 challenge on single-image reflection removal from natural scenes. It releases the OpenRR-5k dataset of real photographs that contain reflections of different strengths and patterns, requiring algorithms to output clean background images. More than one hundred teams registered and eleven reached the final test phase. The highest-scoring entries produced visibly better results than earlier methods and received full approval from five domain experts.

Core claim

The challenge demonstrates that methods tuned on the OpenRR-5k collection of real-world images achieve stronger reflection removal performance than previous approaches, as confirmed by unanimous expert judgment; the organizers release the dataset publicly to support continued research.

What carries the argument

The OpenRR-5k dataset, a collection of real photographs spanning varied reflection scenarios and intensities that participants must convert into reflection-free images.

If this is right

  • The released dataset provides a common benchmark for measuring how well new algorithms generalize beyond synthetic training data.
  • Winning methods can be applied directly to consumer photography pipelines that encounter window or surface reflections.
  • Expert validation of the top entries supplies a current reference point for what counts as acceptable real-world performance.
  • Future challenges can reuse the same evaluation protocol to track incremental progress on the same distribution of scenes.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Similar challenge structures with large real-image sets could accelerate progress on related restoration tasks such as removing rain or haze.
  • Deploying the top methods on mobile cameras would test whether the gains survive hardware constraints like limited compute and varying sensor noise.
  • Collecting additional test images from regions or lighting conditions underrepresented in OpenRR-5k would quickly reveal remaining failure modes.

Load-bearing premise

The OpenRR-5k images and the challenge test split capture enough of the variety found in everyday photography for the reported gains to hold in new scenes.

What would settle it

A fresh collection of real photographs containing reflection types absent from OpenRR-5k on which the top-ranked methods leave visible artifacts or incomplete removal.

Figures

Figures reproduced from arXiv: 2604.10321 by Anas M. Ali, Asuka Shin, Bilel Benjdira, Chia-Ming Lee, Chih-Chung Hsu, Daiguo Zhou, Fei Wang, Fengjun Guo, Florin-Alexandru Vasluianu, Fu-En Yang, Guoyi Xu, Hiroto Shirono, Honghui Zhu, Hongyu Huang, Jae-Young Sim, Jiachen Tu, Jiagao Hu, Jiajia Liu, Jie Cai, Jinglin Shen, Jin Guo, Jin-Hui Jiang, Jinlong Li, Jonghyuk Park, Junyan Cao, Kangning Yang, Kosuke Shigematsu, Kui Jiang, Lindong Kong, Linfeng Li, Lu Zhao, Mengru Yang, Misbha Falak Khanpagadi, Nikhil Akalwadi, Pengwei Liu, Radu Timofte, Ramesh Ashok Tabib, Saiprasad Meesiyawar, Shreeniketh Joshi, Uma Mudenagudi, Wadii Boulila, Wei Zhou, Yan Luo, Yaokun Shi, Yaoxin Jiang, Yi'ang Chen, Yu-Chiang Frank Wang, Yu-Fan Lin, Yu-Jou Hsiao, Yuyi Zhang, Zepeng Wang, Zhiyuan Li, Zibo Meng.

Figure 1: Visualization of the paired data generation pipeline for reflection removal. [PITH_FULL_IMAGE:figures/full_fig_p003_1.png]

Figure 2: Visualization of the OPPO AI Reflection Remover pipeline deployed on Find X8 Ultra, using the April 2025 algorithm version. [PITH_FULL_IMAGE:figures/full_fig_p004_2.png]

Figure 3: Samples of the OpenRR-5k dataset. Distribution of image subjects: landscapes 70%, objects 18%, humans 8%, animals 4%. Distribution of lighting conditions: daytime 61%, indoor lighting 28%, nighttime 11%. [PITH_FULL_IMAGE:figures/full_fig_p005_3.png]

Figure 4: The category distribution of our OpenRR-5k. [PITH_FULL_IMAGE:figures/full_fig_p005_4.png]

Figure 5: Visual comparison on test0004 shows that top methods effectively remove reflections while preserving details, whereas lower-ranked methods suffer from residual reflections, over-smoothing, or color distortions, consistent with their lower subjective scores. For instance, while methods from VIP Lab, YuFans, and KLETech-CEVI are capable of suppressing intense reflections, they struggle to faithfully recover …

Figure 6: Visual comparison across diverse OpenRR-5k [PITH_FULL_IMAGE:figures/full_fig_p007_6.png]

Figure 7: An overview of RdNafNet for SIRR. [PITH_FULL_IMAGE:figures/full_fig_p010_7.png]

Figure 10: The overall architecture of the proposed network. [PITH_FULL_IMAGE:figures/full_fig_p011_10.png]

Figure 11: Overview of our approach. Top: two-stage fine-tuning pipeline with SWA and geometric TTA; numbers on arrows indicate validation PSNR at each stage. Bottom: RDNet architecture with FocalNet-Large backbone, RevCol body, and three separate NAFBlock decoders. We build upon RDNet [82], a Reversible Column Network (RevCol) originally pretrained on the SIRS synthetic dataset. The archit…

Figure 12: Overview of the proposed reflection removal [PITH_FULL_IMAGE:figures/full_fig_p012_12.png]

Figure 13: DUSKAN architecture. Symmetric 4-level U-Net with DUSKANBlock stages. Path A (blue) extracts global features via FFT magnitude modulation, enriched with spectral positional encoding and SE reweighting. Path B (red) uses Kolmogorov-Arnold polynomial-basis activations with a parallel-additive selective gate. A learned per-stage logit α blends both outputs.

Figure 14: Overview of the final SiGMoid pipeline used for the [PITH_FULL_IMAGE:figures/full_fig_p014_14.png]

Figure 15: TimeDiffiT architecture. The doubly-corrupted input [PITH_FULL_IMAGE:figures/full_fig_p014_15.png]

Figure 16: Overview of our proposed SIRR-Net for single [PITH_FULL_IMAGE:figures/full_fig_p015_16.png]

Figure 17: An overview of RDNet+ for SIRR. [PITH_FULL_IMAGE:figures/full_fig_p016_17.png]

Figure 18: Visual comparison of reflection removal results. From left to right: input images, and results generated by RRay, OPPO-baseline, [PITH_FULL_IMAGE:figures/full_fig_p017_18.png]

Figure 19: Visual comparison on test0074 [PITH_FULL_IMAGE:figures/full_fig_p025_19.png]

Figure 20: Visual comparison on test0011 [PITH_FULL_IMAGE:figures/full_fig_p026_20.png]

Figure 21: Visual comparison on test0015 [PITH_FULL_IMAGE:figures/full_fig_p027_21.png]

Figure 22: Visual comparison on test0044 [PITH_FULL_IMAGE:figures/full_fig_p028_22.png]

Figure 23: Visual comparison on test0069 [PITH_FULL_IMAGE:figures/full_fig_p029_23.png]

Figure 24: Visual comparison on test0084 [PITH_FULL_IMAGE:figures/full_fig_p030_24.png]

Figure 25: Visual comparison on test0090 [PITH_FULL_IMAGE:figures/full_fig_p031_25.png]
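Figure 11's caption mentions geometric test-time augmentation (TTA): run the model on flipped and rotated copies of the input, undo each transform on the prediction, and average the aligned outputs. A minimal sketch of that idea (the `model` callable is a hypothetical stand-in, not any team's released code):

```python
import numpy as np

def geometric_tta(model, img: np.ndarray) -> np.ndarray:
    """Average predictions over the 8 flip/rotation variants of an HxWxC input.

    Each transform is inverted on the prediction before averaging, so all
    eight outputs are aligned in the original orientation.
    """
    preds = []
    for k in range(4):                 # 0/90/180/270-degree rotations
        for flip in (False, True):     # with and without a horizontal flip
            t = np.rot90(img, k)
            if flip:
                t = np.fliplr(t)
            p = model(t)
            if flip:                   # invert the transforms in reverse order
                p = np.fliplr(p)
            p = np.rot90(p, -k)
            preds.append(p)
    return np.mean(preds, axis=0)

# Sanity check: with an identity "model", TTA returns the input unchanged.
img = np.arange(12.0).reshape(2, 2, 3)
assert np.allclose(geometric_tta(lambda x: x, img), img)
```

The cost is eight forward passes per image; the usual payoff is a small fidelity gain and fewer orientation-dependent artifacts.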
read the original abstract

In this paper, we review the NTIRE 2026 challenge on single-image reflection removal (SIRR) in the wild. SIRR is a fundamental task in image restoration. Despite progress in academic research, most methods are tested on synthetic images or limited real-world images, creating a gap in real-world applications. In this challenge, we provide participants with the OpenRR-5k dataset. This dataset requires participants to process real-world images covering a range of reflection scenarios and intensities, aiming to generate clean images without reflections. The challenge attracted more than 100 registrations, with eleven of them participating in the final testing phase. The top-ranked methods advanced the state-of-the-art reflection removal performance and earned unanimous recognition from five experts in the field. The proposed OpenRR-5k dataset is available at https://huggingface.co/datasets/qiuzhangTiTi/OpenRR-5k, and the homepage of this challenge is at https://github.com/caijie0620/OpenRR-5k.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 0 minor

Summary. The manuscript reports on the NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild. It introduces the OpenRR-5k dataset of real-world images spanning a range of reflection scenarios and intensities, notes over 100 registrations with 11 teams reaching the final testing phase, and states that the top-ranked methods advanced state-of-the-art performance while receiving unanimous recognition from five experts. The dataset is released publicly at the provided Hugging Face link.

Significance. If the central claims hold, the work is significant because it supplies a new public benchmark dataset explicitly aimed at closing the gap between synthetic and real-world reflection removal, a long-standing limitation in the field. The release of OpenRR-5k itself constitutes a concrete, reusable contribution that can support future reproducible research and standardized evaluation.

major comments (2)
  1. [Abstract] Abstract: the claim that 'the top-ranked methods advanced the state-of-the-art reflection removal performance' is unsupported by any reported quantitative metrics, baseline comparisons, PSNR/SSIM values, or statistical significance tests, preventing verification of the magnitude or reliability of the reported improvement.
  2. [Abstract] Dataset description (implicit in Abstract and challenge overview): no quantitative characterization of OpenRR-5k is supplied (e.g., reflection-intensity histograms, scene-category coverage, or direct comparison against prior real-world benchmarks), so it is impossible to assess whether measured gains reflect genuine generalization rather than dataset-specific tuning.
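The referee's first objection turns on missing fidelity numbers. For context, PSNR, the standard distortion metric named above, is simple to compute; a minimal sketch with toy arrays (this is illustrative, not the challenge's official scoring code):

```python
import numpy as np

def psnr(clean: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a ground-truth image and a restored one."""
    mse = np.mean((clean.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 8-bit pair: a "restoration" that is off by exactly 1 everywhere.
gt = np.full((64, 64), 128, dtype=np.uint8)
out = gt + 1
print(round(psnr(gt, out), 2))  # MSE = 1, so PSNR = 10*log10(255^2) ≈ 48.13 dB
```

Challenge reports typically pair PSNR with SSIM and, here, subjective expert ratings; a higher PSNR alone does not guarantee visually clean reflection removal.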

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments. We agree that the abstract would be strengthened by explicit quantitative support for the performance claims and by additional characterization of the OpenRR-5k dataset. We will revise the manuscript accordingly.

read point-by-point responses
  1. Referee: [Abstract] Abstract: the claim that 'the top-ranked methods advanced the state-of-the-art reflection removal performance' is unsupported by any reported quantitative metrics, baseline comparisons, PSNR/SSIM values, or statistical significance tests, preventing verification of the magnitude or reliability of the reported improvement.

    Authors: The full manuscript contains a results section with quantitative evaluations of the participating methods on the OpenRR-5k test set, including PSNR and SSIM scores together with comparisons against prior state-of-the-art reflection removal approaches. The abstract statement is therefore grounded in those reported numbers. To make the claim immediately verifiable without requiring the reader to consult later sections, we will revise the abstract to include a concise summary of the key metric improvements (e.g., the top method’s average PSNR gain) and a pointer to the detailed tables. revision: yes

  2. Referee: [Abstract] Dataset description (implicit in Abstract and challenge overview): no quantitative characterization of OpenRR-5k is supplied (e.g., reflection-intensity histograms, scene-category coverage, or direct comparison against prior real-world benchmarks), so it is impossible to assess whether measured gains reflect genuine generalization rather than dataset-specific tuning.

    Authors: The manuscript provides a high-level description of OpenRR-5k (5 000 real-world images spanning diverse reflection scenarios and intensities) and notes its public release. We concur that explicit quantitative characterization would strengthen the paper. In the revision we will add a dedicated paragraph or table reporting basic statistics such as the distribution of estimated reflection strengths, the breakdown of scene categories (indoor/outdoor, urban/natural, etc.), and a side-by-side comparison with existing real-world reflection datasets. These additions will be placed in the dataset section and referenced from the abstract. revision: yes

Circularity Check

0 steps flagged

No circularity: empirical challenge report with no derivations or self-referential reductions

full rationale

The paper is a standard challenge report describing the NTIRE 2026 SIRR task, the release of the OpenRR-5k dataset, participation statistics, and empirical rankings of submitted methods. No equations, parameter fitting, or derivation chain exists. Claims of SOTA advancement rest on external participant submissions and expert judging, not on any internal construction that reduces to the paper's own inputs. Self-citations, if present, are incidental and non-load-bearing for any claimed result. This matches the default expectation of no significant circularity for non-theoretical reporting papers.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

No mathematical derivations or theoretical constructs are present; the work is an empirical challenge summary relying on standard dataset curation and evaluation practices.

pith-pipeline@v0.9.0 · 5728 in / 982 out tokens · 43855 ms · 2026-05-10T15:25:20.967690+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

89 extracted references · 4 canonical work pages · 1 internal anchor

  1. [1]

    NT-HAZE: A Benchmark Dataset for Realistic Night-time Image Dehazing

    Radu Ancuti, Codruta Ancuti, Radu Timofte, and Cos- min Ancuti. NT-HAZE: A Benchmark Dataset for Realistic Night-time Image Dehazing . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  2. [2]

    NTIRE 2026 Night- time Image Dehazing Challenge Report

    Radu Ancuti, Alexandru Brateanu, Florin Vasluianu, Raul Balmez, Ciprian Orhei, Codruta Ancuti, Radu Timofte, Cosmin Ancuti, et al. NTIRE 2026 Night- time Image Dehazing Challenge Report . InProceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  3. [3]

    Degradation-aware image enhancement via vision- language classification

    Jie Cai, Kangning Yang, Jiaming Ding, Lan Fu, Ling Ouyang, Jiang Li, Jinglin Shen, and Zibo Meng. Degradation-aware image enhancement via vision- language classification. In2025 IEEE 8th Interna- tional Conference on Multimedia Information Pro- cessing and Retrieval (MIPR), pages 270–276. IEEE,

  4. [4]

    Openrr- 5k: A large-scale benchmark for reflection removal in the wild

    Jie Cai, Kangning Yang, Ling Ouyang, Lan Fu, Ji- aming Ding, Jinglin Shen, and Zibo Meng. Openrr- 5k: A large-scale benchmark for reflection removal in the wild. In2025 IEEE 8th International Conference on Multimedia Information Processing and Retrieval (MIPR), pages 14–19. IEEE, 2025. 1, 4, 11

  5. [5]

    F2t2-hit: A u-shaped fft transformer and hier- archical transformer for reflection removal

    Jie Cai, Kangning Yang, Ling Ouyang, Lan Fu, Ji- aming Ding, Huiming Sun, Chiu Man Ho, and Zibo Meng. F2t2-hit: A u-shaped fft transformer and hier- archical transformer for reflection removal. In2025 IEEE International Conference on Image Processing (ICIP), pages 809–814. IEEE, 2025. 2

  6. [6]

    NTIRE 2026 Challenge on Sin- gle Image Reflection Removal in the Wild: Datasets, Results, and Methods

    Jie Cai, Kangning Yang, Zhiyuan Li, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on Sin- gle Image Reflection Removal in the Wild: Datasets, Results, and Methods . InProceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition (CVPR) Workshops, 2026. 3

  7. [7]

    Single image reflection removal with edge guidance, reflection classifier, and recurrent de- composition

    Ya-Chu Chang, Chia-Ni Lu, Chia-Chi Cheng, and Wei-Chen Chiu. Single image reflection removal with edge guidance, reflection classifier, and recurrent de- composition. InProceedings of the IEEE/CVF winter conference on applications of computer vision, pages 2033–2042, 2021. 2

  8. [8]

    Guang-Yong Chen, Chao-Wei Zheng, Guodong Fan, Jian-Nan Su, Min Gan, and CL Philip Chen. Real- world image reflection removal: An ultra-high- definition dataset and an efficient baseline.IEEE Transactions on Circuits and Systems for Video Tech- nology, 35(5):4397–4408, 2024. 10

  9. [9]

    Simple baselines for image restoration

    Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. InEuro- pean Conference on Computer Vision (ECCV), 2022. 4, 10, 12

  10. [10]

    The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview

    Zheng Chen, Kai Liu, Jingkai Wang, Xianglong Yan, Jianze Li, Ziqing Zhang, Jue Gong, Jiatong Li, Lei Sun, Xiaoyang Liu, Radu Timofte, Yulun Zhang, et al. The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview . InProceedings of the IEEE/CVF Con- ference on Computer Vision and Pattern Recognition (CVPR) Wor...

  11. [11]

    Low Light Image Enhancement Challenge at NTIRE 2026

    George Ciubotariu, Sharif S M A, Abdur Rehman, Fayaz Ali Dharejo, Rizwan Ali Naqvi, Marcos Conde, Radu Timofte, et al. Low Light Image Enhancement Challenge at NTIRE 2026 . InProceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition (CVPR) Workshops, 2026. 3

  12. [12]

    High FPS Video Frame Interpolation Challenge at NTIRE 2026

    George Ciubotariu, Zhuyun Zhou, Yeying Jin, Zong- wei Wu, Radu Timofte, et al. High FPS Video Frame Interpolation Challenge at NTIRE 2026 . InProceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  13. [13]

    Location-aware sin- gle image reflection removal

    Zheng Dong, Ke Xu, Yin Yang, Hujun Bao, Wei- wei Xu, and Rynson WH Lau. Location-aware sin- gle image reflection removal. InProceedings of the IEEE/CVF international conference on computer vi- sion, pages 5017–5026, 2021. 2

  14. [14]

    NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge Report

    Andrei Dumitriu, Aakash Ralhan, Florin Miron, Florin Tatui, Radu Tudor Ionescu, Radu Timofte, et al. NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge Report . InProceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition (CVPR) Workshops, 2026. 3

  15. [15]

    Conde, Zongwei Wu, Yey- ing Jin, Radu Timofte, et al

    Omar Elezabi, Marcos V . Conde, Zongwei Wu, Yey- ing Jin, Radu Timofte, et al. Photography Retouching Transfer, NTIRE 2026 Challenge: Report . InPro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops,

  16. [16]

    Everingham, L

    M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman. The pascal visual object classes (voc) challenge.International journal of computer vi- sion, 2010. 11

  17. [17]

    Q. Fan, J. Yang, G. Hua, B. Chen, and D. Wipf. A generic deep architecture for single image reflection removal and image smoothing. InICCV, 2017. 2

  18. [18]

    Deep-masking genera- tive network: A unified framework for background restoration from superimposed images.IEEE Trans- actions on Image Processing, 30:4867–4882, 2021

    Xin Feng, Wenjie Pei, Zihui Jia, Fanglin Chen, David Zhang, and Guangming Lu. Deep-masking genera- tive network: A unified framework for background restoration from superimposed images.IEEE Trans- actions on Image Processing, 30:4867–4882, 2021. 2

  19. [19]

    NTIRE 2026 Challenge on End-to-End Financial Re- ceipt Restoration and Reasoning from Degraded Im- ages: Datasets, Methods and Results

    Bochen Guan, Jinlong Li, Kangning Yang, Chuang Ke, Jie Cai, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on End-to-End Financial Re- ceipt Restoration and Reasoning from Degraded Im- ages: Datasets, Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  20. [20]

    NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3)

    Ya-nan Guan, Shaonan Zhang, Hang Guo, Yawen Wang, Xinying Fan, Jie Liang, Hui Zeng, Guanyi Qin, Lishen Qu, Tao Dai, Shu-Tao Xia, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3) . InProceedings of the IEEE/CVF Con- ference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  21. [21]

    NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild

    Aleksandr Gushchin, Khaled Abud, Ekaterina Shu- mitskaya, Artem Filippov, Georgii Bychkov, Sergey Lavrushkin, Mikhail Erofeev, Anastasia Antsiferova, Changsheng Chen, Shunquan Tan, Radu Timofte, Dmitriy Vatolin, et al. NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild . InProceedings of the IEEE/CVF Conference on Com- puter Vision an...

  22. [22]

    Masked autoencoders are scalable vision learners

    Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Doll´ar, and Ross Girshick. Masked autoencoders are scalable vision learners. InProceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition, pages 16000–16009, 2022. 14

  23. [23]

    L-differ: Single image reflec- tion removal with language-based diffusion model

    Yuchen Hong, Haofeng Zhong, Shuchen Weng, Jinxiu Liang, and Boxin Shi. L-differ: Single image reflec- tion removal with language-based diffusion model. In European Conference on Computer Vision, pages 58–

  24. [24]

    Robust Deepfake Detection, NTIRE 2026 Challenge: Report

    Benedikt Hopf, Radu Timofte, et al. Robust Deepfake Detection, NTIRE 2026 Challenge: Report . InPro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops,

  25. [25]

    Dereflection any image with diffusion priors and diversified data

    Jichen Hu, Chen Yang, Zanwei Zhou, Jiemin Fang, Qi Tian, and Wei Shen. Dereflection any image with diffusion priors and diversified data. InProceedings of the AAAI Conference on Artificial Intelligence, pages 4860–4868, 2026. 10, 11

  26. [26]

    Trash or treasure? an interactive dual-stream strategy for single image re- flection separation.Advances in Neural Information Processing Systems, 34:24683–24694, 2021

    Qiming Hu and Xiaojie Guo. Trash or treasure? an interactive dual-stream strategy for single image re- flection separation.Advances in Neural Information Processing Systems, 34:24683–24694, 2021. 2

  27. [27]

    Single image reflection separation via component synergy

    Qiming Hu and Xiaojie Guo. Single image reflection separation via component synergy. InProceedings of the IEEE/CVF International Conference on Computer Vision, pages 13138–13147, 2023. 2

  28. [28]

    Averag- ing weights leads to wider optima and better general- ization

    Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averag- ing weights leads to wider optima and better general- ization. InUncertainty in Artificial Intelligence (UAI),

  29. [29]

    Focal frequency loss for image reconstruction and synthesis

    Liming Jiang, Bo Dai, Wayne Wu, and Chen Change Loy. Focal frequency loss for image reconstruction and synthesis. InProceedings of the IEEE/CVF In- ternational Conference on Computer Vision (ICCV),

  30. [30]

    NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge

    Aleksei Khalin, Egor Ershov, Artem Panshin, Sergey Korchagin, Georgiy Lobarev, Arseniy Terekhin, Sofiia Dorogova, Amir Shamsutdinov, Yasin Mamedov, Bakhtiyar Khalfin, Bogdan Sheludko, Emil Zilyaev, Nikola Bani´c, Georgy Perevozchikov, Radu Timofte, et al. NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge . InProceedings of the IEEE/CVF Conferenc...

  31. [31]

    S. Kim, Y . Huo, and S. E. Yoon. Single image reflec- tion removal with physically-based training images. In CVPR, 2020. 2

  32. [32]

    Polarized re- flection removal with perfect alignment in the wild

    Chenyang Lei, Xuhua Huang, Mengdi Zhang, Qiong Yan, Wenxiu Sun, and Qifeng Chen. Polarized re- flection removal with perfect alignment in the wild. InProceedings of the IEEE/CVF conference on com- puter vision and pattern recognition, pages 1750– 1758, 2020. 3

  33. [33]

    A categorized reflection removal dataset with diverse real-world scenes

    Chenyang Lei, Xuhua Huang, Chenyang Qi, Yankun Zhao, Wenxiu Sun, Qiong Yan, and Qifeng Chen. A categorized reflection removal dataset with diverse real-world scenes. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recogni- tion, pages 3040–3048, 2022

  34. [34]

    Single image reflection removal through cascaded refinement

    Chao Li, Yixiao Yang, Kun He, Stephen Lin, and John E Hopcroft. Single image reflection removal through cascaded refinement. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3565–3574, 2020. 2, 3, 11

  35. [35]

    The First Challenge on Mobile Real-World Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview

    Jiatong Li, Zheng Chen, Kai Liu, Jingkai Wang, Zi- han Zhou, Xiaoyang Liu, Libo Zhu, Radu Timofte, Yulun Zhang, et al. The First Challenge on Mobile Real-World Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview . InPro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops,

  36. [36]

    Rectifying latent space for generative single-image reflection removal.arXiv preprint arXiv:2512.06358, 2025

    Mingjia Li, Jin Hu, Hainuo Wang, Qiming Hu, Jiarui Wang, and Xiaojie Guo. Rectifying latent space for generative single-image reflection removal.arXiv preprint arXiv:2512.06358, 2025. 11

  37. [37]

    NTIRE 2026 Challenge on Short- form UGC Video Restoration in the Wild with Gener- ative Models: Datasets, Methods and Results

    Xin Li, Jiachao Gong, Xijun Wang, Shiyao Xiong, Bingchen Li, Suhang Yao, Chao Zhou, Zhibo Chen, Radu Timofte, et al. NTIRE 2026 Challenge on Short- form UGC Video Restoration in the Wild with Gener- ative Models: Datasets, Methods and Results . InPro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops,

  38. [38]

    NTIRE 2026 The Second Challenge on Day and Night Rain- drop Removal for Dual-Focused Images: Methods and Results

    Xin Li, Yeying Jin, Suhang Yao, Beibei Lin, Zhaoxin Fan, Wending Yan, Xin Jin, Zongwei Wu, Bingchen Li, Peishu Shi, Yufei Yang, Yu Li, Zhibo Chen, Bi- han Wen, Robby Tan, Radu Timofte, et al. NTIRE 2026 The Second Challenge on Day and Night Rain- drop Removal for Dual-Focused Images: Methods and Results . InProceedings of the IEEE/CVF Conference on Comput...

  39. [39]

    Two-stage single image reflec- tion removal with reflection-aware guidance.Applied Intelligence, 53(16):19433–19448, 2023

    Yu Li, Ming Liu, Yaling Yi, Qince Li, Dongwei Ren, and Wangmeng Zuo. Two-stage single image reflec- tion removal with reflection-aware guidance.Applied Intelligence, 53(16):19433–19448, 2023. 2

  40. [40]

    The First Challenge on Remote Sensing Infrared Im- age Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview

    Kai Liu, Haoyang Yue, Zeli Lin, Zheng Chen, Jingkai Wang, Jue Gong, Radu Timofte, Yulun Zhang, et al. The First Challenge on Remote Sensing Infrared Im- age Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview . InProceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition (CVPR) Workshops, 2026. 3

  41. [41]

    Conde, et al

    Shuhong Liu, Ziteng Cui, Chenyu Bao, Xuangeng Chu, Lin Gu, Bin Ren, Radu Timofte, Marcos V . Conde, et al. 3D Restoration and Reconstruction in Adverse Conditions: RealX3D Challenge Results . In Proceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition (CVPR) Work- shops, 2026. 3

  42. [42]

    NTIRE 2026 X-AIGC Quality Assessment Challenge: Methods and Results

    Xiaohong Liu, Xiongkuo Min, Guangtao Zhai, Qiang Hu, Jiezhang Cao, Yu Zhou, Wei Sun, Farong Wen, Zitong Xu, Yingjie Zhou, Huiyu Duan, Lu Liu, Jiarui Wang, Siqi Luo, Chunyi Li, Li Xu, Zicheng Zhang, Yue Shi, Yubo Wang, Minghong Zhang, Chunchao Guo, Zhichao Hu, Mingtao Chen, Xiele Wu, Xin Ma, Zhaohe Lv, Yuanhao Xue, Jiaqi Wang, Xinxing Sha, Radu Timofte, et...

  43. [43]

    Decoupled weight decay regularization

    Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. InInternational Conference on Learning Representations, 2019. 14

  44. [44]

    NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results

    Andrey Moskalenko, Alexey Bryncev, Ivan Kos- mynin, Kira Shilovskaya, Mikhail Erofeev, Dmitry Vatolin, Radu Timofte, et al. NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results . InProceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition (CVPR) Work- shops, 2026. 3

  45. [45]

    Maxime Oquab, Timoth ´ee Darcet, Theo Moutakanni, Huy V . V o, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Russell Howes, Po-Yao Huang, Hu Xu, Vasu Sharma, Shang-Wen Li, Wojciech Galuba, Mike Rabbat, Mido Assran, Nicolas Ballas, Gabriel Synnaeve, Ishan Misra, Herve Jegou, Julien Mairal, Patrick La...

  46. [46]

    NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Re- sults

    Hyunhee Park, Eunpil Park, Sangmin Lee, Radu Tim- ofte, et al. NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Re- sults . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  47. [47]

    Complemen- tary mixture-of-experts and complementary cross- attention for single image reflection separation in the wild.IEEE Transactions on Image Processing, 2026

    Jonghyuk Park and Jae-Young Sim. Complemen- tary mixture-of-experts and complementary cross- attention for single image reflection separation in the wild.IEEE Transactions on Image Processing, 2026. 9, 11, 12

  48. [48] Georgy Perevozchikov, Daniil Vladimirov, Radu Timofte, et al. NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  49. [49] BH Prasad, Lokesh R Boregowda, Kaushik Mitra, Sanjoy Chowdhury, et al. V-DESIRR: Very fast deep embedded single image reflection removal. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2390–2399, 2021.

  50. [50] Guanyi Qin, Jie Liang, Bingbing Zhang, Lishen Qu, Ya-nan Guan, Hui Zeng, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Professional Image Quality Assessment (Track 1). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  51. [51] Xingyu Qiu, Yuqian Fu, Jiawei Geng, Bin Ren, Jiancheng Pan, Zongwei Wu, Hao Tang, Yanwei Fu, Radu Timofte, Nicu Sebe, Mohamed Elhoseiny, et al. The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  52. [52] Lishen Qu, Yao Liu, Jie Liang, Hui Zeng, Wen Dai, Ya-nan Guan, Guanyi Qin, Shihao Zhou, Jufeng Yang, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track 2). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  53. [53] Bin Ren, Hang Guo, Yan Shu, Jiaqi Ma, Ziteng Cui, Shuhong Liu, Guofeng Mei, Lei Sun, Zongwei Wu, Fahad Shahbaz Khan, Salman Khan, Radu Timofte, Yawei Li, et al. The Eleventh NTIRE 2026 Efficient Super-Resolution Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  54. [54] Tim Seizinger, Florin-Alexandru Vasluianu, Marcos V. Conde, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. The First Controllable Bokeh Rendering Challenge at NTIRE 2026. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  55. [55] Yang Song, Jasper Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2021.

  56. [56] Zhenbo Song, Zhenyuan Zhang, Kaihao Zhang, Wenhan Luo, Zhaoxin Fan, Wenqi Ren, and Jianfeng Lu. Robust single image reflection removal against adversarial attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 24688–24698, 2023.

  57. [57] Lei Sun, Hang Guo, Bin Ren, Shaolin Su, Xian Wang, Danda Pani Paudel, Luc Van Gool, Radu Timofte, Yawei Li, et al. The Third Challenge on Image Denoising at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  58. [58] Lei Sun, Weilun Li, Xian Wang, Zhendong Li, Letian Shi, Dannong Xu, Deheng Zhang, Mengshun Hu, Shuang Guo, Shaolin Su, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. The Second Challenge on Event-Based Image Deblurring at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)...

  59. [59] Lei Sun, Xiaolong Qian, Qi Jiang, Xian Wang, Yao Gao, Kailun Yang, Kaiwei Wang, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. NTIRE 2026 The First Challenge on Blind Computational Aberration Correction: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  60. [60] Roman Suvorov, Elizaveta Logacheva, Anton Mashikhin, Anastasia Remizova, Arsenii Ashukha, Aleksei Silvestrov, Naejin Kong, Harshith Goka, Kiwoong Park, and Victor Lempitsky. Resolution-robust large mask inpainting with Fourier convolutions. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 2149–2159, 2022.

  61. [61] Jiachen Tu, Yaokun Shi, and Fan Lam. Score-based self-supervised MRI denoising. arXiv preprint arXiv:2505.05631, 2025.

  62. [62] Florin-Alexandru Vasluianu, Tim Seizinger, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Learning-Based Ambient Lighting Normalization: NTIRE 2026 Challenge Results and Findings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  63. [63] Florin-Alexandru Vasluianu, Tim Seizinger, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Advances in Single-Image Shadow Removal: Results from the NTIRE 2026 Challenge. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  64. [64] Renjie Wan, Boxin Shi, Tan Ah Hwee, and Alex C Kot. Depth of field guided reflection removal. In 2016 IEEE International Conference on Image Processing (ICIP), pages 21–25. IEEE, 2016.

  65. [65] R. Wan, B. Shi, H. Li, L. Y. Duan, A. H. Tan, and A. C. Kot. CoRRN: Cooperative reflection removal network. IEEE TPAMI, 2019.

  66. [66] R. Wan, B. Shi, H. Li, L. Y. Duan, and A. C. Kot. Reflection scene separation from a single image. In CVPR, 2020.

  67. [67] Jingkai Wang, Jue Gong, Zheng Chen, Kai Liu, Jiatong Li, Yulun Zhang, Radu Timofte, et al. The Second Challenge on Real-World Face Restoration at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  68. [68] Longguang Wang, Yulan Guo, Yingqian Wang, Juncheng Li, Sida Peng, Ye Zhang, Radu Timofte, Minglin Chen, Yi Wang, Qibin Hu, Wenjie Lei, et al. NTIRE 2026 Challenge on 3D Content Super-Resolution: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  69. [69] Yingqian Wang, Zhengyu Liang, Fengyuan Zhang, Wending Zhao, Longguang Wang, Juncheng Li, Jungang Yang, Radu Timofte, Yulan Guo, et al. NTIRE 2026 Challenge on Light Field Image Super-Resolution: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  70. [70] K. Wei, J. Yang, Y. Fu, D. Wipf, and H. Huang. Single image reflection removal exploiting misaligned training data and network enhancements. In CVPR, 2019.

  71. [71] Jiebin Yan, Chenyu Tu, Qinghua Lin, Zongwei Wu, Weixia Zhang, Zhihua Wang, Peibei Cao, Yuming Fang, Xiaoning Liu, Zhuyun Zhou, Radu Timofte, et al. Efficient Low Light Image Enhancement: NTIRE 2026 Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  72. [72] J. Yang, D. Gong, L. Liu, and Q. Shi. Seeing deeply and bidirectionally: A deep learning approach for single image reflection removal. In ECCV, 2018.

  73. [73] Jianwei Yang, Chunyuan Li, Xiyang Dai, and Jianfeng Gao. Focal modulation networks. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

  74. [74] Kangning Yang, Jie Cai, Ling Ouyang, Florin-Alexandru Vasluianu, Radu Timofte, et al. NTIRE 2025 challenge on single image reflection removal in the wild: Datasets, methods and results. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 1301–1311, 2025.

  75. [75] Kangning Yang, Ling Ouyang, Huiming Sun, Jie Cai, Lan Fu, Jiaming Ding, Chiu Man Ho, and Zibo Meng. OpenRR-1k: A scalable dataset for real-world reflection removal. In 2025 IEEE International Conference on Image Processing (ICIP), pages 839–844. IEEE, 2025.

  76. [76] Kangning Yang, Huiming Sun, Jie Cai, Lan Fu, Jiaming Ding, Jinlong Li, and Zibo Meng. Survey on single-image reflection removal using deep learning techniques. In 2025 IEEE 8th International Conference on Multimedia Information Processing and Retrieval (MIPR), pages 20–26. IEEE, 2025.

  77. [77] Yang Yang, Wenye Ma, Yin Zheng, Jian-Feng Cai, and Weiyu Xu. Fast single image reflection suppression via convex optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8141–8149, 2019.

  78. [78] Daniyar Zakarin, Thiemo Wandel, Anton Obukhov, and Dengxin Dai. Reflection removal through efficient adaptation of diffusion transformers. arXiv preprint arXiv:2512.05000, 2025.

  79. [79] Pierluigi Zama Ramirez, Fabio Tosi, Luigi Di Stefano, Radu Timofte, Alex Costanzino, Matteo Poggi, Samuele Salti, Stefano Mattoccia, et al. NTIRE 2026 Challenge on High-Resolution Depth of non-Lambertian Surfaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

  80. [80] Syed Waqas Zamir, Aditya Arora, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Ming-Hsuan Yang. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5728–5739, 2022.

Showing first 80 references.