pith. machine review for the scientific record.

arxiv: 2604.10532 · v2 · submitted 2026-04-12 · 💻 cs.CV

Recognition: unknown

The Second Challenge on Real-World Face Restoration at NTIRE 2026: Methods and Results

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 16:10 UTC · model grok-4.3

classification 💻 cs.CV
keywords: real-world face restoration · image quality assessment · identity preservation · perceptual quality · challenge results · image restoration · degraded face images

The pith

A review of a real-world face restoration competition shows teams advancing natural outputs and identity preservation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper summarizes the results from a competition on restoring faces in real-world images. It covers the methods submitted by participating teams and how they performed under an evaluation that balances image quality with identity matching. The work aims to capture current approaches for generating realistic faces without limits on computation or data used. Readers would care because it maps out practical progress in handling common image degradations like blur and noise while keeping recognizable features intact.

Core claim

The paper reports that ten teams submitted valid models to the challenge and nine received final rankings based on a combined quality and identity metric, providing an overview of effective techniques for perceptual improvement in face restoration tasks.

What carries the argument

A weighted image quality assessment score paired with an identity verification model, which together rank outputs on naturalness and identity consistency.
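A minimal sketch of how such a combined ranking metric could be computed. The weights, the normalization, and the metric names below are illustrative assumptions, not the challenge's published protocol:

```python
def combined_score(iqa_scores, id_similarity, iqa_weights=None, id_weight=0.5):
    """Hypothetical combined ranking score: a weighted mean of
    normalized no-reference IQA scores, mixed with an identity term.

    iqa_scores: dict of metric name -> score, assumed normalized to [0, 1]
    id_similarity: cosine similarity between face embeddings, in [-1, 1]
    id_weight: fraction of the final score carried by identity (assumed)
    """
    if iqa_weights is None:
        # Equal weighting across IQA metrics unless specified otherwise.
        iqa_weights = {name: 1.0 for name in iqa_scores}
    total_w = sum(iqa_weights.values())
    iqa_term = sum(iqa_weights[n] * s for n, s in iqa_scores.items()) / total_w
    # Clamp identity similarity into [0, 1] before mixing.
    id_term = max(0.0, min(1.0, id_similarity))
    return (1 - id_weight) * iqa_term + id_weight * id_term

# Hypothetical per-image scores for one submission.
score = combined_score({"musiq": 0.72, "clipiqa": 0.65}, id_similarity=0.81)
```

Submissions would then be ranked by this scalar averaged over the test set; the design choice worth noting is that a single mixing weight makes the quality/identity trade-off explicit and auditable.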

Load-bearing premise

That the chosen quality metrics and identity checker accurately measure naturalness, realism, and identity preservation without missing key failure modes or introducing metric-specific biases.

What would settle it

Human preference ratings on the submitted outputs that systematically disagree with the automated ranking order.
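One way to run that comparison, sketched here with hypothetical numbers: collect mean human preference ratings per team and measure their agreement with the automated scores via Spearman rank correlation. A low or negative correlation would undercut the automated order; a high one would support it.

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation between two paired score lists
    (assumes no ties, which keeps the classic formula exact)."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical automated scores vs. mean human ratings for five teams.
auto = [0.74, 0.71, 0.69, 0.66, 0.61]
human = [4.1, 4.3, 3.2, 3.5, 2.9]
rho = spearman_rho(auto, human)
```

In practice a library routine with tie handling (e.g. `scipy.stats.spearmanr`) would be preferable; the hand-rolled version above only shows the shape of the check.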

Figures

Figures reproduced from arXiv: 2604.10532 by Alexandru-Gabriel Lefterache, Anamaria Radoi, Axi Niu, ChangYoung Jeong, Chengxi Zeng, Chuanyue Yan, Claudia Jesuraj, Congchao Zhu, Daiguo Zhou, David Bull, Wei Zhou, Fan Zhang, Guoyi Xu, Hongyu Huang, Hoyoung Lee, Hui Li, Jiachen Tu, Jiajia Liu, Jiaming Wang, Jiatong Li, Jingkai Wang, Jinqiu Sun, Jinyang Zhang, Jue Gong, Kai Liu, Kanghui Zhao, Linfeng Li, Nikhil Akalwadi, Radu Timofte, Ramesh Ashok Tabib, SangYun Oh, Senyan Qing, Spoorthi LC, Sujith Roy V, Tao Lu, Tianhao Peng, Uma Mudenagudi, Vikas B, Wei Deng, WenBo Xiong, Xian Hu, Yanduo Zhang, Yanning Zhang, Yaokun Shi, Yaoxin Jiang, Yifei Chen, Yijiao Liu, Yingsi Chen, Yulun Zhang, Yuqi Li, Yu Wang, Yuxuan Jiang, Zheng Chen, Zhenguo Wu.

Figure 1. MiPlusCV adopts a two-stage pipeline that combines OSDFace-based coarse restoration with a Z-Image-based one-step detail …
Figure 2. Overview of the CEVI-KLETech pipeline. A semantic …
Figure 3. Overview of the HONORAICamera pipeline. Training and optimization. As it is illustrated in …
Figure 4. YuFans combines a one-step SDFace restoration stage with CLIPIQA-guided pixel optimization at test time.
Figure 5. Overview of DeSC-Face. The degraded image is encoded into degraded latent tokens, which are used both as the main condition …
Figure 6. Architecture diagram of the DiffBIR v2.1 two-stage …
Figure 7. Overall architecture and training objective of MaDENN. The baseline CodeFormer architecture is extended with identity …
Figure 9. The workflow of PRIDE-Face. GFPGAN provides the …
Figure 10. The BVI extends TADSR with a residual noise re…
Original abstract

This paper provides a review of the NTIRE 2026 challenge on real-world face restoration, highlighting the proposed solutions and the resulting outcomes. The challenge focuses on generating natural and realistic outputs while maintaining identity consistency. Its goal is to advance state-of-the-art solutions for perceptual quality and realism, without imposing constraints on computational resources or training data. Performance is evaluated using a weighted image quality assessment (IQA) score and employs the AdaFace model as an identity checker. The competition attracted 96 registrants, with 10 teams submitting valid models; ultimately, 9 teams achieved valid scores in the final ranking. This collaborative effort advances the performance of real-world face restoration while offering an in-depth overview of the latest trends in the field.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. This manuscript reports on the NTIRE 2026 challenge for real-world face restoration. It outlines the challenge goals of producing natural, realistic outputs with identity consistency, notes the participation of 96 registrants with 10 teams submitting valid models and 9 achieving valid scores, describes the evaluation using a weighted IQA score combined with the AdaFace model for identity checking, and asserts that the effort advances the state of the art while providing an overview of current trends in the field.

Significance. Should the chosen metrics prove to be reliable proxies for human-perceived naturalness, realism, and identity preservation, the paper would offer significant value by documenting a community benchmark and summarizing methodological trends in unconstrained face restoration. It highlights collaborative progress in a perceptual task.

major comments (2)
  1. [Evaluation Protocol] The central claim that the challenge advances real-world face restoration performance (abstract) rests on rankings derived from a weighted IQA score and AdaFace identity checker. The manuscript provides no evidence of correlation between these metrics and human judgments of naturalness or identity, nor analysis of failure modes such as texture inconsistencies, lighting mismatches, or subtle identity drift that standard IQA and face-recognition models are known to under-penalize. This is load-bearing for the assertion of genuine progress rather than metric optimization.
  2. [Results] The overview of submitted methods and final rankings lacks detailed descriptions of the top-performing approaches, including key innovations or ablations that drove the reported scores. Without this, the claim of providing an in-depth overview of latest trends cannot be fully assessed from the results alone.
minor comments (2)
  1. [Participation Statistics] The abstract states that 9 teams achieved valid scores while 10 submitted valid models; the main text should explicitly clarify the distinction and any disqualifications to prevent reader confusion.
  2. Add references to the first NTIRE face restoration challenge and related perceptual metrics literature to better situate the current evaluation protocol.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We are grateful to the referee for the careful reading and constructive criticism of our manuscript. We address each major comment below and indicate the revisions we plan to make.

Point-by-point responses
  1. Referee: [Evaluation Protocol] The central claim that the challenge advances real-world face restoration performance (abstract) rests on rankings derived from a weighted IQA score and AdaFace identity checker. The manuscript provides no evidence of correlation between these metrics and human judgments of naturalness or identity, nor analysis of failure modes such as texture inconsistencies, lighting mismatches, or subtle identity drift that standard IQA and face-recognition models are known to under-penalize. This is load-bearing for the assertion of genuine progress rather than metric optimization.

    Authors: We concur that the manuscript would benefit from a more explicit discussion of the metrics' validity. The evaluation protocol is defined by the challenge organizers and employs metrics commonly adopted in the field, with supporting evidence from prior publications on their correlation to human judgments. To strengthen the paper, we will revise the evaluation section to include references to studies validating these metrics and add a brief analysis of potential failure modes based on qualitative examples from the challenge. We will also moderate the language in the abstract and conclusion to emphasize that the results reflect performance under the specified evaluation protocol. revision: yes

  2. Referee: [Results] The overview of submitted methods and final rankings lacks detailed descriptions of the top-performing approaches, including key innovations or ablations that drove the reported scores. Without this, the claim of providing an in-depth overview of latest trends cannot be fully assessed from the results alone.

    Authors: The manuscript provides an overview by presenting the rankings and categorizing the methods according to their primary techniques. Detailed ablations are often included in the teams' individual submissions to the NTIRE workshop. We will expand the results section in the revised manuscript with more in-depth summaries of the top three methods, drawing from the technical descriptions submitted by the teams, to better illustrate the latest trends and innovations. revision: partial

Circularity Check

0 steps flagged

No circularity: descriptive challenge report with no derivations or self-referential claims

Full rationale

The paper is a summary of an external competition (NTIRE 2026 face restoration challenge). It reports participant methods, rankings, and outcomes using pre-defined evaluation metrics (weighted IQA + AdaFace) without any internal derivations, predictions, fitted parameters, or load-bearing self-citations. No equations, ansatzes, or uniqueness theorems are invoked that could reduce to the paper's own inputs. The central claim of advancing the field is a factual statement about submitted results, not a constructed prediction. This is a standard non-circular challenge overview.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

As a challenge report the paper relies on standard computer vision evaluation practices without introducing new parameters, axioms, or entities.

pith-pipeline@v0.9.0 · 5652 in / 920 out tokens · 41885 ms · 2026-05-10T16:10:42.747401+00:00 · methodology

