The Second Challenge on Real-World Face Restoration at NTIRE 2026: Methods and Results
Pith reviewed 2026-05-10 16:10 UTC · model grok-4.3
The pith
A review of a real-world face restoration competition shows teams advancing natural outputs and identity preservation.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper reports that ten teams submitted valid models to the challenge and nine received final rankings based on a combined quality and identity metric, providing an overview of effective techniques for perceptual improvement in face restoration tasks.
What carries the argument
A weighted image quality assessment (IQA) score paired with an identity verification model; together they rank outputs on naturalness and identity consistency.
Load-bearing premise
That the chosen quality metrics and identity checker accurately measure naturalness, realism, and identity preservation without missing key failure modes or introducing metric-specific biases.
What would settle it
Human preference ratings on the submitted outputs that systematically disagree with the automated ranking order.
Original abstract
This paper provides a review of the NTIRE 2026 challenge on real-world face restoration, highlighting the proposed solutions and the resulting outcomes. The challenge focuses on generating natural and realistic outputs while maintaining identity consistency. Its goal is to advance state-of-the-art solutions for perceptual quality and realism, without imposing constraints on computational resources or training data. Performance is evaluated using a weighted image quality assessment (IQA) score and employs the AdaFace model as an identity checker. The competition attracted 96 registrants, with 10 teams submitting valid models; ultimately, 9 teams achieved valid scores in the final ranking. This collaborative effort advances the performance of real-world face restoration while offering an in-depth overview of the latest trends in the field.
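The evaluation protocol in the abstract — a weighted IQA score plus an identity check — can be sketched as a simple scoring function. The metric names, weights, and identity threshold below are hypothetical placeholders, not the challenge's published formula; the sketch only illustrates the shape of such a combined ranking.

```python
# Illustrative sketch of a weighted-IQA-plus-identity ranking in the spirit of
# the protocol the abstract describes. Metric names, weights, score ranges, and
# the identity threshold are hypothetical, not the official challenge formula.

def combined_score(iqa_scores, weights, identity_cosine, identity_threshold=0.5):
    """Weighted sum of per-metric IQA scores, gated by identity similarity."""
    assert set(iqa_scores) == set(weights)
    quality = sum(weights[m] * iqa_scores[m] for m in iqa_scores)
    # An output that drifts too far from the source identity is zeroed out,
    # mirroring the role the abstract assigns to the AdaFace identity checker.
    return quality if identity_cosine >= identity_threshold else 0.0

# Placeholder per-team scores: ({IQA metric values in [0, 1]}, identity cosine).
submissions = {
    "team_a": ({"musiq": 0.72, "clip_iqa": 0.68, "q_align": 0.75}, 0.81),
    "team_b": ({"musiq": 0.80, "clip_iqa": 0.74, "q_align": 0.79}, 0.42),
    "team_c": ({"musiq": 0.65, "clip_iqa": 0.70, "q_align": 0.66}, 0.77),
}
weights = {"musiq": 0.4, "clip_iqa": 0.3, "q_align": 0.3}

ranking = sorted(
    submissions,
    key=lambda team: combined_score(submissions[team][0], weights, submissions[team][1]),
    reverse=True,
)
# ranking == ["team_a", "team_c", "team_b"]:
# team_b's higher raw IQA is discarded by the identity gate.
```

Note how the identity gate, not the IQA sum, decides team_b's fate — which is exactly why the distinction between "submitted valid models" and "achieved valid scores" matters.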
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. This manuscript reports on the NTIRE 2026 challenge for real-world face restoration. It outlines the challenge goals of producing natural, realistic outputs with identity consistency, notes the participation of 96 registrants with 10 teams submitting valid models and 9 achieving valid scores, describes the evaluation using a weighted IQA score combined with the AdaFace model for identity checking, and asserts that the effort advances the state of the art while providing an overview of current trends in the field.
Significance. Should the chosen metrics prove to be reliable proxies for human-perceived naturalness, realism, and identity preservation, the paper would offer significant value by documenting a community benchmark and summarizing methodological trends in unconstrained face restoration. It highlights collaborative progress in a perceptual task.
Major comments (2)
- [Evaluation Protocol] The central claim that the challenge advances real-world face restoration performance (abstract) rests on rankings derived from a weighted IQA score and AdaFace identity checker. The manuscript provides no evidence of correlation between these metrics and human judgments of naturalness or identity, nor analysis of failure modes such as texture inconsistencies, lighting mismatches, or subtle identity drift that standard IQA and face-recognition models are known to under-penalize. This is load-bearing for the assertion of genuine progress rather than metric optimization.
- [Results] The overview of submitted methods and final rankings lacks detailed descriptions of the top-performing approaches, including key innovations or ablations that drove the reported scores. Without this, the claim of providing an in-depth overview of latest trends cannot be fully assessed from the results alone.
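The validation that the first major comment asks for is, in its standard form, a rank correlation between the metric-based ordering and human preference ratings. A minimal sketch follows; all scores are made-up placeholders, and ties are ignored for simplicity.

```python
# Sketch of the missing metric-validity check: Spearman rank correlation
# between automated combined scores and human mean opinion scores (MOS).
# All numbers below are hypothetical placeholders, not challenge data.

def rank(values):
    """Ranks (1 = highest); this simple sketch does not handle ties."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    """Spearman rank correlation via the difference-of-ranks formula."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

automated = [0.717, 0.0, 0.668, 0.701, 0.655]  # hypothetical combined scores
human_mos = [4.1, 3.9, 3.6, 4.0, 3.4]          # hypothetical human ratings

rho = spearman(automated, human_mos)
# rho == 0.7 for these placeholder values: a systematically low correlation
# here would support the objection that rankings reflect metric optimization.
```

In practice one would use a tie-aware implementation and report a confidence interval, but even this sketch shows how little data is needed to run the check.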
Minor comments (2)
- [Participation Statistics] The abstract states that 9 teams achieved valid scores while 10 submitted valid models; the main text should explicitly clarify the distinction and any disqualifications to prevent reader confusion.
- Add references to the first NTIRE face restoration challenge and related perceptual metrics literature to better situate the current evaluation protocol.
Simulated Author's Rebuttal
We are grateful to the referee for the careful reading and constructive criticism of our manuscript. We address each major comment below and indicate the revisions we plan to make.
Point-by-point responses
Referee: [Evaluation Protocol] The central claim that the challenge advances real-world face restoration performance (abstract) rests on rankings derived from a weighted IQA score and AdaFace identity checker. The manuscript provides no evidence of correlation between these metrics and human judgments of naturalness or identity, nor analysis of failure modes such as texture inconsistencies, lighting mismatches, or subtle identity drift that standard IQA and face-recognition models are known to under-penalize. This is load-bearing for the assertion of genuine progress rather than metric optimization.
Authors: We concur that the manuscript would benefit from a more explicit discussion of the metrics' validity. The evaluation protocol is defined by the challenge organizers and employs metrics commonly adopted in the field, with supporting evidence from prior publications on their correlation to human judgments. To strengthen the paper, we will revise the evaluation section to include references to studies validating these metrics and add a brief analysis of potential failure modes based on qualitative examples from the challenge. We will also moderate the language in the abstract and conclusion to emphasize that the results reflect performance under the specified evaluation protocol. revision: yes
Referee: [Results] The overview of submitted methods and final rankings lacks detailed descriptions of the top-performing approaches, including key innovations or ablations that drove the reported scores. Without this, the claim of providing an in-depth overview of latest trends cannot be fully assessed from the results alone.
Authors: The manuscript provides an overview by presenting the rankings and categorizing the methods according to their primary techniques. Detailed ablations are often included in the teams' individual submissions to the NTIRE workshop. We will expand the results section in the revised manuscript with more in-depth summaries of the top three methods, drawing from the technical descriptions submitted by the teams, to better illustrate the latest trends and innovations. revision: partial
Circularity Check
No circularity: descriptive challenge report with no derivations or self-referential claims
Full rationale
The paper is a summary of an external competition (NTIRE 2026 face restoration challenge). It reports participant methods, rankings, and outcomes using pre-defined evaluation metrics (weighted IQA + AdaFace) without any internal derivations, predictions, fitted parameters, or load-bearing self-citations. No equations, ansatzes, or uniqueness theorems are invoked that could reduce to the paper's own inputs. The central claim of advancing the field is a factual statement about submitted results, not a constructed prediction. This is a standard non-circular challenge overview.
Reference graph
Works this paper leans on
[1] Radu Ancuti, Codruta Ancuti, Radu Timofte, and Cosmin Ancuti. NT-HAZE: A Benchmark Dataset for Realistic Night-time Image Dehazing. In CVPRW, 2026.
[2] Radu Ancuti, Alexandru Brateanu, Florin Vasluianu, Raul Balmez, Ciprian Orhei, Codruta Ancuti, Radu Timofte, Cosmin Ancuti, et al. NTIRE 2026 Nighttime Image Dehazing Challenge Report. In CVPRW, 2026.
[3] Huanqia Cai, Sihan Cao, Ruoyi Du, Peng Gao, Steven Hoi, Zhaohui Hou, Shijie Huang, Dengyang Jiang, Xin Jin, Liangchen Li, et al. Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer. arXiv preprint arXiv:2511.22699, 2025.
[4] Jie Cai, Kangning Yang, Zhiyuan Li, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild: Datasets, Results, and Methods. In CVPRW, 2026.
[5] Kelvin C.K. Chan, Xintao Wang, Xiangyu Xu, Jinwei Gu, and Chen Change Loy. GLEAN: Generative Latent Bank for Large-Factor Image Super-Resolution. In CVPR, 2021.
[6] Chaofeng Chen and Jiadi Mo. IQA-PyTorch: PyTorch Toolbox for Image Quality Assessment. Online: https://github.com/chaofengc/IQA-PyTorch, 2022.
[7] Chaofeng Chen, Xiaoming Li, Lingbo Yang, Xianhui Lin, Lei Zhang, and Kwan-Yee K. Wong. Progressive Semantic-Aware Style Transformation for Blind Face Restoration. In CVPR, 2021.
[8] Xiaoxu Chen, Jingfan Tan, Tao Wang, Kaihao Zhang, Wenhan Luo, and Xiaochun Cao. Towards Real-World Blind Face Restoration with Generative Diffusion Prior. arXiv preprint arXiv:2312.15736, 2023.
[9] Yu Chen, Ying Tai, Xiaoming Liu, Chunhua Shen, and Jian Yang. FSRNet: End-to-End Learning Face Super-Resolution with Facial Priors. In CVPR, 2018.
[10] Zheng Chen, Kai Liu, Jingkai Wang, Xianglong Yan, Jianze Li, Ziqing Zhang, Jue Gong, Jiatong Li, Lei Sun, Xiaoyang Liu, Radu Timofte, Yulun Zhang, et al. The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview. In CVPRW, 2026.
[11] George Ciubotariu, Sharif S M A, Abdur Rehman, Fayaz Ali, Rizwan Ali Naqvi, Marcos Conde, Radu Timofte, et al. Low Light Image Enhancement Challenge at NTIRE 2026. In CVPRW, 2026.
[12] George Ciubotariu, Zhuyun Zhou, Yeying Jin, Zongwei Wu, Radu Timofte, et al. High FPS Video Frame Interpolation Challenge at NTIRE 2026. In CVPRW, 2026.
[13] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. ArcFace: Additive Angular Margin Loss for Deep Face Recognition. In CVPR, 2019.
[14] Andrei Dumitriu, Aakash Ralhan, Florin Miron, Florin Tatui, Radu Tudor Ionescu, Radu Timofte, et al. NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge Report. In CVPRW, 2026.
[15] Omar Elezabi, Marcos V. Conde, Zongwei Wu, Yeying Jin, Radu Timofte, et al. Photography Retouching Transfer, NTIRE 2026 Challenge: Report. In CVPRW, 2026.
[16] Yuchao Gu, Xintao Wang, Liangbin Xie, Chao Dong, Gen Li, Ying Shan, and Ming-Ming Cheng. VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder. In ECCV, 2022.
[17] Bochen Guan, Jinlong Li, Kangning Yang, Chuang Ke, Jie Cai, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results. In CVPRW, 2026.
[18] Ya-nan Guan, Shaonan Zhang, Hang Guo, Yawen Wang, Xinying Fan, Jie Liang, Hui Zeng, Guanyi Qin, Lishen Qu, Tao Dai, Shu-Tao Xia, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3). In CVPRW, 2026.
[19] Aleksandr Gushchin, Khaled Abud, Ekaterina Shumitskaya, Artem Filippov, Georgii Bychkov, Sergey Lavrushkin, Mikhail Erofeev, Anastasia Antsiferova, Changsheng Chen, Shunquan Tan, Radu Timofte, Dmitriy Vatolin, et al. NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild. In CVPRW, 2026.
[20] Benedikt Hopf, Radu Timofte, et al. Robust Deepfake Detection, NTIRE 2026 Challenge: Report. In CVPRW, 2026.
[21] Chi-Wei Hsiao, Yu-Lun Liu, Cheng-Kun Yang, Sheng-Po Kuo, Yucheun Kevin Jou, and Chia-Ping Chen. Ref-LDM: A Latent Diffusion Model for Reference-Based Face Image Restoration. In NeurIPS, pages 74840–74867, 2024.
[22] Gary B. Huang, Marwan Mattar, Tamara Berg, and Eric Learned-Miller. Labeled Faces in the Wild: A Database for Studying Face Recognition in Unconstrained Environments. In Workshop on Faces in 'Real-Life' Images: Detection, Alignment, and Recognition, 2008.
[23] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive Growing of GANs for Improved Quality, Stability, and Variation. In ICLR, 2018.
[24] Tero Karras, Samuli Laine, and Timo Aila. A Style-Based Generator Architecture for Generative Adversarial Networks. In CVPR, 2019.
[25] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and Improving the Image Quality of StyleGAN. In CVPR, 2020.
[26] Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the Design Space of Diffusion-Based Generative Models. In NeurIPS, 2022.
[27] Junjie Ke, Qifei Wang, Yilin Wang, Peyman Milanfar, and Feng Yang. MUSIQ: Multi-Scale Image Quality Transformer. In ICCV, 2021.
[28] Aleksei Khalin, Egor Ershov, Artem Panshin, Sergey Korchagin, Georgiy Lobarev, Arseniy Terekhin, Sofiia Dorogova, Amir Shamsutdinov, Yasin Mamedov, Bakhtiyar Khalfin, Bogdan Sheludko, Emil Zilyaev, Nikola Banić, Georgy Perevozchikov, Radu Timofte, et al. NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge. In CVPRW, 2026.
[29] Deokyun Kim, Minseon Kim, Gihyun Kwon, and Dae-Shik Kim. Progressive Face Super-Resolution via Attention to Facial Landmark. In BMVC, 2019.
[30] Minchul Kim, Anil K. Jain, and Xiaoming Liu. AdaFace: Quality Adaptive Margin for Face Recognition. In CVPR, 2022.
[31] Jiatong Li, Zheng Chen, Kai Liu, Jingkai Wang, Zihan Zhou, Xiaoyang Liu, Libo Zhu, Radu Timofte, Yulun Zhang, et al. The First Challenge on Mobile Real-World Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview. In CVPRW, 2026.
[32] Senmao Li, Kai Wang, Joost van de Weijer, Fahad Shahbaz Khan, Chun-Le Guo, Shiqi Yang, Yaxing Wang, Jian Yang, and Ming-Ming Cheng. INTERLCM: Low-Quality Images as Intermediate States of Latent Consistency Models for Effective Blind Face Restoration. In ICLR, 2025.
[33] Wenjie Li, Xiangyi Wang, Heng Guo, Guangwei Gao, and Zhanyu Ma. Self-Supervised Selective-Guided Diffusion Model for Old-Photo Face Restoration. In NeurIPS, 2025.
[34] Xin Li, Jiachao Gong, Xijun Wang, Shiyao Xiong, Bingchen Li, Suhang Yao, Chao Zhou, Zhibo Chen, Radu Timofte, et al. NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Models: Datasets, Methods and Results. In CVPRW, 2026.
[35] Xin Li, Yeying Jin, Suhang Yao, Beibei Lin, Zhaoxin Fan, Wending Yan, Xin Jin, Zongwei Wu, Bingchen Li, Peishu Shi, Yufei Yang, Yu Li, Zhibo Chen, Bihan Wen, Robby Tan, Radu Timofte, et al. NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual-Focused Images: Methods and Results. In CVPRW, 2026.
[36] Yawei Li, Kai Zhang, Jingyun Liang, Jiezhang Cao, Ce Liu, Rui Gong, Yulun Zhang, Hao Tang, Yun Liu, Denis Demandolx, Rakesh Ranjan, Radu Timofte, and Luc Van Gool. LSDIR: A Large Scale Dataset for Image Restoration. In CVPRW, 2023.
[37] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. SwinIR: Image Restoration Using Swin Transformer. In ICCVW, 2021.
[38] Xinqi Lin, Jingwen He, Ziyan Chen, Zhaoyang Lyu, Bo Dai, Fanghua Yu, Wanli Ouyang, Yu Qiao, and Chao Dong. DiffBIR: Toward Blind Image Restoration via Generative Diffusion Prior. In ECCV, 2024.
[39] Kai Liu, Haoyang Yue, Zeli Lin, Zheng Chen, Jingkai Wang, Jue Gong, Radu Timofte, Yulun Zhang, et al. The First Challenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview. In CVPRW, 2026.
[40] Siyu Liu, Zheng-Peng Duan, Jia OuYang, Jiayi Fu, Hyunhee Park, Zikun Liu, Chun-Le Guo, and Chongyi Li. FaceMe: Robust Blind Face Restoration with Personal Identification. In AAAI, 2025.
[41] Shuhong Liu, Ziteng Cui, Chenyu Bao, Xuangeng Chu, Lin Gu, Bin Ren, Radu Timofte, Marcos V. Conde, et al. 3D Restoration and Reconstruction in Adverse Conditions: RealX3D Challenge Results. In CVPRW, 2026.
[42] Xiaohong Liu, Xiongkuo Min, Guangtao Zhai, Qiang Hu, Jiezhang Cao, Yu Zhou, Wei Sun, Farong Wen, Zitong Xu, Yingjie Zhou, Huiyu Duan, Lu Liu, Jiarui Wang, Siqi Luo, Chunyi Li, Li Xu, Zicheng Zhang, Yue Shi, Yubo Wang, Minghong Zhang, Chunchao Guo, Zhichao Hu, Mingtao Chen, Xiele Wu, Xin Ma, Zhaohe Lv, Yuanhao Xue, Jiaqi Wang, Xinxing Sha, Radu Timofte, et al. NTIRE 2026 X-AIGC Quality Assessment Challenge: Methods and Results. In CVPRW, 2026.
[43] Yunqi Miao, Jiankang Deng, and Jungong Han. WaveFace: Authentic Face Restoration with Efficient Frequency Recovery. In CVPR, 2024.
[44] Yunqi Miao, Zhiyu Qu, Mingqi Gao, Changrui Chen, Jifei Song, Jungong Han, and Jiankang Deng. Unlocking the Potential of Diffusion Priors in Blind Face Restoration. In ICCV, 2025.
[45] Andrey Moskalenko, Alexey Bryncev, Ivan Kosmynin, Kira Shilovskaya, Mikhail Erofeev, Dmitry Vatolin, Radu Timofte, et al. NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results. In CVPRW, 2026.
[46] Jakub Nawała, Yuxuan Jiang, Fan Zhang, Xiaoqing Zhu, Joel Sole, and David Bull. BVI-AOM: A New Training Dataset for Deep Video Compression Optimization. In ICIP, 2024.
[47] Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. DINOv2: Learning Robust Visual Features without Supervision. arXiv preprint arXiv:2304.07193, 2023.
[48] Hyunhee Park, Eunpil Park, Sangmin Lee, Radu Timofte, et al. NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Results. In CVPRW, 2026.
[49] Georgy Perevozchikov, Daniil Vladimirov, Radu Timofte, et al. NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results. In CVPRW, 2026.
[50] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis. arXiv preprint arXiv:2307.01952, 2023.
[51] Guanyi Qin, Jie Liang, Bingbing Zhang, Lishen Qu, Ya-nan Guan, Hui Zeng, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Professional Image Quality Assessment (Track 1). In CVPRW, 2026.
[52] Xinmin Qiu, Congying Han, Zicheng Zhang, Bonan Li, Tiande Guo, and Xuecheng Nie. DiffBFR: Bootstrapping Diffusion Model for Blind Face Restoration. In ACM MM, 2023.
[53] Xinmin Qiu, Gege Chen, Bonan Li, Congying Han, Tiande Guo, and Zicheng Zhang. Feature Out! Let Raw Image as Your Condition for Blind Face Restoration. In ICML, 2025.
[54] Xingyu Qiu, Yuqian Fu, Jiawei Geng, Bin Ren, Jiancheng Pan, Zongwei Wu, Hao Tang, Yanwei Fu, Radu Timofte, Nicu Sebe, Mohamed Elhoseiny, et al. The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results. In CVPRW, 2026.
[55] Lishen Qu, Yao Liu, Jie Liang, Hui Zeng, Wen Dai, Ya-nan Guan, Guanyi Qin, Shihao Zhou, Jufeng Yang, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track 2). In CVPRW, 2026.
[56] Bin Ren, Hang Guo, Yan Shu, Jiaqi Ma, Ziteng Cui, Shuhong Liu, Guofeng Mei, Lei Sun, Zongwei Wu, Fahad Shahbaz Khan, Salman Khan, Radu Timofte, Yawei Li, et al. The Eleventh NTIRE 2026 Efficient Super-Resolution Challenge Report. In CVPRW, 2026.
[57] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-Resolution Image Synthesis with Latent Diffusion Models. In CVPR, 2022.
[58] Axel Sauer, Dominik Lorenz, Andreas Blattmann, and Robin Rombach. Adversarial Diffusion Distillation. In ECCV, 2024.
[59] Tim Seizinger, Florin-Alexandru Vasluianu, Marcos V. Conde, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. The First Controllable Bokeh Rendering Challenge at NTIRE 2026. In CVPRW, 2026.
[60] Ziyi Shen, Wei-Sheng Lai, Tingfa Xu, Jan Kautz, and Ming-Hsuan Yang. Deep Semantic Face Deblurring. In CVPR, 2018.
[61] Maitreya Suin and Rama Chellappa. CLR-Face: Conditional Latent Refinement for Blind Face Restoration Using Score-Based Diffusion Models. In IJCAI, 2024.
[62] Lei Sun, Hang Guo, Bin Ren, Shaolin Su, Xian Wang, Danda Pani Paudel, Luc Van Gool, Radu Timofte, Yawei Li, et al. The Third Challenge on Image Denoising at NTIRE 2026: Methods and Results. In CVPRW, 2026.
[63] Lei Sun, Weilun Li, Xian Wang, Zhendong Li, Letian Shi, Dannong Xu, Deheng Zhang, Mengshun Hu, Shuang Guo, Shaolin Su, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. The Second Challenge on Event-Based Image Deblurring at NTIRE 2026: Methods and Results. In CVPRW, 2026.
[64] Lei Sun, Xiaolong Qian, Qi Jiang, Xian Wang, Yao Gao, Kailun Yang, Kaiwei Wang, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. NTIRE 2026 The First Challenge on Blind Computational Aberration Correction: Methods and Results. In CVPRW, 2026.
[65] Keda Tao, Jinjin Gu, Yulun Zhang, Xiucheng Wang, and Nan Cheng. Overcoming False Illusions in Real-World Face Restoration with Multi-Modal Guided Diffusion Model. In ICLR, 2025.
[66] Yu-Ju Tsai, Yu-Lun Liu, Lu Qi, Kelvin C.K. Chan, and Ming-Hsuan Yang. Dual Associated Encoder for Face Restoration. In ICLR, 2024.
[67] Florin-Alexandru Vasluianu, Tim Seizinger, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Learning-Based Ambient Lighting Normalization: NTIRE 2026 Challenge Results and Findings. In CVPRW, 2026.
[68] Florin-Alexandru Vasluianu, Tim Seizinger, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Advances in Single-Image Shadow Removal: Results from the NTIRE 2026 Challenge. In CVPRW, 2026.
[69] Jianyi Wang, Kelvin C.K. Chan, and Chen Change Loy. Exploring CLIP for Assessing the Look and Feel of Images. In AAAI, 2023.
[70] Jianyi Wang, Zongsheng Yue, Shangchen Zhou, Kelvin C.K. Chan, and Chen Change Loy. Exploiting Diffusion Prior for Real-World Image Super-Resolution. IJCV, 2024.
[71] Jingkai Wang, Jue Gong, Lin Zhang, Zheng Chen, Xing Liu, Hong Gu, Yutong Liu, Yulun Zhang, and Xiaokang Yang. One-Step Diffusion Model for Face Restoration. In CVPR, 2025.
[72] Jingkai Wang, Jue Gong, Zheng Chen, Kai Liu, Jiatong Li, Yulun Zhang, Radu Timofte, et al. The Second Challenge on Real-World Face Restoration at NTIRE 2026: Methods and Results. In CVPRW, 2026.
[73] Longguang Wang, Yulan Guo, Yingqian Wang, Juncheng Li, Sida Peng, Ye Zhang, Radu Timofte, Minglin Chen, Yi Wang, Qibin Hu, Wenjie Lei, et al. NTIRE 2026 Challenge on 3D Content Super-Resolution: Methods and Results. In CVPRW, 2026.
[74] Xintao Wang, Yu Li, Honglun Zhang, and Ying Shan. Towards Real-World Blind Face Restoration with Generative Facial Prior. In CVPR, 2021.
[75] Xintao Wang, Liangbin Xie, Chao Dong, and Ying Shan. Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. In ICCVW, pages 1905–1914, 2021.
[76] Yingqian Wang, Zhengyu Liang, Fengyuan Zhang, Wending Zhao, Longguang Wang, Juncheng Li, Jungang Yang, Radu Timofte, Yulan Guo, et al. NTIRE 2026 Challenge on Light Field Image Super-Resolution: Methods and Results. In CVPRW, 2026.
[77] Zhouxia Wang, Jiawei Zhang, Tianshui Chen, Wenping Wang, and Ping Luo. RestoreFormer++: Towards Real-World Blind Face Restoration from Undegraded Key-Value Pairs. IEEE TPAMI, 2023.
[78] Zhixin Wang, Xiaoyun Zhang, Ziying Zhang, Huangjie Zheng, Mingyuan Zhou, Ya Zhang, and Yanfeng Wang. DR2: Diffusion-Based Robust Degradation Remover for Blind Face Restoration. In CVPR, 2023.
[79] Haoning Wu, Zicheng Zhang, Weixia Zhang, Chaofeng Chen, Chunyi Li, Liang Liao, Annan Wang, Erli Zhang, Wenxiu Sun, Qiong Yan, Xiongkuo Min, Guangtao Zhai, and Weisi Lin. Q-Align: Teaching LMMs for Visual Scoring via Discrete Text-Defined Levels. In ICML, 2024.
[80] Rongyuan Wu, Lingchen Sun, Zhiyuan Ma, and Lei Zhang. One-Step Effective Diffusion Network for Real-World Image Super-Resolution. In NeurIPS, 2024.