NTIRE 2026 Challenge on Bitstream-Corrupted Video Restoration: Methods and Results
Pith reviewed 2026-05-10 18:20 UTC · model grok-4.3
The pith
The NTIRE 2026 challenge supplies a benchmark dataset and evaluation protocol for restoring videos from corrupted bitstreams; the report summarizes the submitted methods and the trends observed across them.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The challenge creates a shared testbed of videos with realistic bitstream corruptions and uses it to evaluate multiple restoration methods under controlled conditions. Results are collected and analyzed to reveal which strategies best mitigate content distortion and artifacts, establishing a reference point for measuring progress on this specific form of video degradation.
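The notion of "realistic bitstream corruption" can be pictured with a minimal sketch: damage an encoded stream in short bursts, the way packet loss or storage faults typically hit contiguous byte ranges. The burst model, burst count, and burst length below are illustrative assumptions, not the challenge's actual corruption pipeline, which operates on real codec bitstreams.

```python
import random

def corrupt_bitstream(bitstream: bytes, n_bursts: int = 3,
                      burst_len: int = 64, seed: int = 0) -> bytes:
    """Flip bytes in a few contiguous bursts, mimicking packet-loss-style
    damage. Parameters are illustrative, not the BSCVR corruption model."""
    rng = random.Random(seed)
    data = bytearray(bitstream)
    for _ in range(n_bursts):
        start = rng.randrange(max(1, len(data) - burst_len))
        for i in range(start, min(start + burst_len, len(data))):
            data[i] ^= rng.randrange(1, 256)  # nonzero XOR always alters the byte
    return bytes(data)

clean = bytes(range(256)) * 8          # stand-in for an encoded bitstream
corrupted = corrupt_bitstream(clean)
```

Decoding such a stream is what produces the severe spatial-temporal artifacts the challenge targets: a few flipped bytes can desynchronize entropy decoding and corrupt every frame that predicts from the damaged region.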
What carries the argument
The BSCVR benchmark consisting of a dataset of videos with injected bitstream corruptions together with a standardized protocol that scores restored output against the original for visual fidelity and temporal consistency.
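A scoring protocol of that shape can be sketched as follows: a per-frame fidelity term plus a term penalizing frame-to-frame inconsistency against the reference. Both metrics here (PSNR and a frame-difference flicker proxy) are hypothetical stand-ins, not the official BSCVR scoring rules.

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two frames."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def score_video(restored: np.ndarray, reference: np.ndarray) -> dict:
    """restored, reference: (T, H, W) uint8 video tensors.
    Fidelity: mean per-frame PSNR. Temporal term: mean absolute mismatch
    between consecutive-frame differences (lower is better); a common
    proxy for flicker, assumed here for illustration only."""
    fidelity = float(np.mean([psnr(restored[t], reference[t])
                              for t in range(restored.shape[0])]))
    d_res = np.diff(restored.astype(np.float64), axis=0)
    d_ref = np.diff(reference.astype(np.float64), axis=0)
    temporal_err = float(np.mean(np.abs(d_res - d_ref)))
    return {"psnr": fidelity, "temporal_err": temporal_err}

reference = np.zeros((4, 8, 8), dtype=np.uint8)
restored = reference + 1                  # uniform off-by-one restoration
scores = score_video(restored, reference)
```

Separating the two terms matters: a method can score well per frame yet flicker badly across frames, which a purely spatial metric would miss.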
If this is right
- New restoration algorithms can be directly compared against the submitted entries using the released dataset and scoring rules.
- Technical trends identified among top methods, such as ways to handle temporal inconsistencies, can be incorporated into follow-on designs.
- The measured difficulty level indicates that additional research is needed on severe spatial-temporal distortions before practical deployment.
- The benchmark supports development of more robust video systems that maintain quality under imperfect transmission conditions.
Where Pith is reading between the lines
- The same evaluation setup could be reused to test generalization of methods to other transmission-related degradations beyond the modeled corruptions.
- High-performing approaches may transfer to adjacent problems such as live video error concealment in conferencing or surveillance streams.
- Insights on effective architectures could influence the addition of restoration modules inside future video decoders or post-processing pipelines.
Load-bearing premise
The corruption models and dataset used in the challenge accurately represent the distribution and severity of bitstream errors encountered in real-world video transmission and storage.
What would settle it
A side-by-side statistical comparison of error patterns, frequencies, and resulting visual distortions between the challenge dataset and a large collection of actual corrupted bitstreams captured from deployed streaming or broadcast systems.
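In miniature, such a comparison could gather burst-length statistics from synthetic and captured error masks and measure how far the two empirical distributions diverge. The toy masks, the run-length statistic, and the total-variation measure are all illustrative choices, not the validation protocol the review calls for.

```python
from collections import Counter

def burst_lengths(error_mask):
    """Lengths of consecutive 1-runs in a binary error mask."""
    runs, cur = [], 0
    for bit in error_mask:
        if bit:
            cur += 1
        elif cur:
            runs.append(cur)
            cur = 0
    if cur:
        runs.append(cur)
    return runs

def total_variation(sample_a, sample_b):
    """Total-variation distance between two empirical distributions:
    0 means identical, 1 means disjoint support."""
    ca, cb = Counter(sample_a), Counter(sample_b)
    na, nb = sum(ca.values()), sum(cb.values())
    return 0.5 * sum(abs(ca[k] / na - cb[k] / nb) for k in set(ca) | set(cb))

synthetic = burst_lengths([0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0])  # runs: 2, 1, 3
captured  = burst_lengths([1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0])  # runs: 2, 1, 4
gap = total_variation(synthetic, captured)
```

A small distance between the benchmark's burst statistics and field-captured ones would support the load-bearing premise; a large one would suggest the corruption model needs recalibration.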
Original abstract
This paper reports on the NTIRE 2026 Challenge on Bitstream-Corrupted Video Restoration (BSCVR). The challenge aims to advance research on recovering visually coherent videos from corrupted bitstreams, whose decoding often produces severe spatial-temporal artifacts and content distortion. Built upon recent progress in bitstream-corrupted video recovery, the challenge provides a common benchmark for evaluating restoration methods under realistic corruption settings. We describe the dataset, evaluation protocol, and participating methods, and summarize the final results and main technical trends. The challenge highlights the difficulty of this emerging task and provides useful insights for future research on robust video restoration under practical bitstream corruption.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper reports on the NTIRE 2026 Challenge on Bitstream-Corrupted Video Restoration (BSCVR). It describes the challenge setup, including the dataset of videos with realistic bitstream corruptions that produce severe spatial-temporal artifacts, the evaluation protocol, the participating methods from multiple teams, the final results with rankings, and the main technical trends observed. The central claim is descriptive: the challenge supplies a common benchmark for this emerging task and yields insights into its difficulty and promising directions for robust video restoration.
Significance. If the summarized results and trends hold, the work establishes a valuable standardized benchmark and dataset for bitstream-corrupted video restoration, an area with clear practical relevance to video transmission and storage. By compiling performance data across diverse methods and identifying effective technical approaches, the report accelerates community progress and provides a reproducible reference point for future algorithm development. The collaborative format of the challenge itself strengthens the reliability of the reported observations.
Simulated Author's Rebuttal
We thank the referee for the positive assessment and recommendation to accept the manuscript. The referee's summary correctly captures the descriptive nature of the work and the value of the established benchmark for bitstream-corrupted video restoration.
Circularity Check
No significant circularity
Full rationale
The paper is a factual summary of an NTIRE challenge, describing the dataset, evaluation protocol, participating methods, and observed results without any derivations, equations, predictions, or fitted parameters. No load-bearing steps exist that reduce by construction to self-citations, self-definitions, or renamed inputs. The central claim (a benchmark was run and trends observed) is descriptive and externally verifiable via the reported competition outcomes, making the document self-contained with no circularity.
Axiom & Free-Parameter Ledger
Reference graph
Works this paper leans on
- [1] Radu Ancuti, Codruta Ancuti, Radu Timofte, and Cosmin Ancuti. NT-HAZE: A Benchmark Dataset for Realistic Night-time Image Dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [2] Radu Ancuti, Alexandru Brateanu, Florin Vasluianu, Raul Balmez, Ciprian Orhei, Codruta Ancuti, Radu Timofte, Cosmin Ancuti, et al. NTIRE 2026 Nighttime Image Dehazing Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [3] Jie Cai, Kangning Yang, Zhiyuan Li, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild: Datasets, Results, and Methods. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [4] Kelvin CK Chan, Shangchen Zhou, Xiangyu Xu, and Chen Change Loy. BasicVSR++: Improving video super-resolution with enhanced propagation and alignment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5972–5981, 2022.
- [5] Ya-Liang Chang, Zhe Yu Liu, Kuan-Ying Lee, and Winston Hsu. Free-form video inpainting with 3D gated convolution and temporal PatchGAN. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9066–9075, 2019.
- [6] Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. arXiv preprint arXiv:2204.04676, 2022.
- [7] Zheng Chen, Kai Liu, Jingkai Wang, Xianglong Yan, Jianze Li, Ziqing Zhang, Jue Gong, Jiatong Li, Lei Sun, Xiaoyang Liu, Radu Timofte, Yulun Zhang, et al. The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [8] Byungjin Chung and Changhoon Yim. Bi-sequential video error concealment method using adaptive homography-based registration. IEEE Transactions on Circuits and Systems for Video Technology, 30(6):1535–1549, 2020.
- [9] George Ciubotariu, Sharif S M A, Abdur Rehman, Fayaz Ali, Rizwan Ali Naqvi, Marcos Conde, Radu Timofte, et al. Low Light Image Enhancement Challenge at NTIRE 2026. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [10] George Ciubotariu, Zhuyun Zhou, Yeying Jin, Zongwei Wu, Radu Timofte, et al. High FPS Video Frame Interpolation Challenge at NTIRE 2026. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [11] Andrei Dumitriu, Aakash Ralhan, Florin Miron, Florin Tatui, Radu Tudor Ionescu, Radu Timofte, et al. NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [12] Omar Elezabi, Marcos V. Conde, Zongwei Wu, Yeying Jin, Radu Timofte, et al. Photography Retouching Transfer, NTIRE 2026 Challenge: Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [13] Chongyang Gao, Kezhen Chen, Jinmeng Rao, Ruibo Liu, Baochen Sun, Yawen Zhang, Daiyi Peng, Xiaoyuan Guo, and VS Subrahmanian. MoLA: MoE LoRA with layer-wise expert allocation. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 5097–5112, 2025.
- [14] Bochen Guan, Jinlong Li, Kangning Yang, Chuang Ke, Jie Cai, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [15] Ya-nan Guan, Shaonan Zhang, Hang Guo, Yawen Wang, Xinying Fan, Jie Liang, Hui Zeng, Guanyi Qin, Lishen Qu, Tao Dai, Shu-Tao Xia, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [16] Aleksandr Gushchin, Khaled Abud, Ekaterina Shumitskaya, Artem Filippov, Georgii Bychkov, Sergey Lavrushkin, Mikhail Erofeev, Anastasia Antsiferova, Changsheng Chen, Shunquan Tan, Radu Timofte, Dmitriy Vatolin, et al. NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [17] Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, and Sai Qian Zhang. Parameter-efficient fine-tuning for large models: A comprehensive survey. arXiv preprint arXiv:2403.14608.
- [18] Benedikt Hopf, Radu Timofte, et al. Robust Deepfake Detection, NTIRE 2026 Challenge: Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [19] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Liang Wang, Weizhu Chen, et al. LoRA: Low-rank adaptation of large language models. ICLR, 2022.
- [20] Joint Video Team (JVT) et al. Draft ITU-T recommendation and final draft international standard of joint video specification. ITU-T Rec. H.264/ISO/IEC 14496-10 AVC, 2003.
- [21] Aleksei Khalin, Egor Ershov, Artem Panshin, Sergey Korchagin, Georgiy Lobarev, Arseniy Terekhin, Sofiia Dorogova, Amir Shamsutdinov, Yasin Mamedov, Bakhtiyar Khalfin, Bogdan Sheludko, Emil Zilyaev, Nikola Banić, Georgy Perevozchikov, Radu Timofte, et al. NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [22] Younghee Kwon, Kwang In Kim, James Tompkin, Jin Hyung Kim, and Christian Theobalt. Efficient learning of image super-resolution and compression artifact removal with semi-local Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9):1792–1805.
- [23] Jiatong Li, Zheng Chen, Kai Liu, Jingkai Wang, Zihan Zhou, Xiaoyang Liu, Libo Zhu, Radu Timofte, Yulun Zhang, et al. The First Challenge on Mobile Real-World Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [24] Xin Li, Jiachao Gong, Xijun Wang, Shiyao Xiong, Bingchen Li, Suhang Yao, Chao Zhou, Zhibo Chen, Radu Timofte, et al. NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Models: Datasets, Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [25] Xin Li, Yeying Jin, Suhang Yao, Beibei Lin, Zhaoxin Fan, Wending Yan, Xin Jin, Zongwei Wu, Bingchen Li, Peishu Shi, Yufei Yang, Yu Li, Zhibo Chen, Bihan Wen, Robby Tan, Radu Timofte, et al. NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual-Focused Images: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [26] Zhen Li, Cheng-Ze Lu, Jianhua Qin, Chun-Le Guo, and Ming-Ming Cheng. Towards an end-to-end framework for flow-guided video inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17562–17571, 2022.
- [27] Jingyun Liang, Jiezhang Cao, Guolei Sun, Kai Zhang, Luc Van Gool, and Radu Timofte. SwinIR: Image restoration using Swin Transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1833–1844.
- [28] Jingyun Liang, Yuchen Fan, Xiaoyu Xiang, Rakesh Ranjan, Eddy Ilg, Simon Green, Jiezhang Cao, Kai Zhang, Radu Timofte, and Luc V Gool. Recurrent video restoration transformer with guided deformable attention. Advances in Neural Information Processing Systems, 35:378–393, 2022.
- [29] Jingyun Liang, Jiezhang Cao, Yuchen Fan, Kai Zhang, Rakesh Ranjan, Yawei Li, Radu Timofte, and Luc Van Gool. VRT: A video restoration transformer. IEEE Transactions on Image Processing, 2024.
- [30] Kai Liu, Haoyang Yue, Zeli Lin, Zheng Chen, Jingkai Wang, Jue Gong, Radu Timofte, Yulun Zhang, et al. The First Challenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [31] Shuhong Liu, Ziteng Cui, Chenyu Bao, Xuangeng Chu, Lin Gu, Bin Ren, Radu Timofte, Marcos V. Conde, et al. 3D Restoration and Reconstruction in Adverse Conditions: RealX3D Challenge Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [32] Tianyi Liu, Kejun Wu, Yi Wang, Wenyang Liu, Kim-Hui Yap, and Lap-Pui Chau. Bitstream-corrupted video recovery: a novel benchmark dataset and method. Advances in Neural Information Processing Systems, 36, 2024.
- [33] Tianyi Liu, Kejun Wu, Chen Cai, Yi Wang, Kim-Hui Yap, and Lap-Pui Chau. Towards blind bitstream-corrupted video recovery: A visual foundation model-driven framework. In Proceedings of the 33rd ACM International Conference on Multimedia, pages 7949–7958, 2025.
- [34] Xiaohong Liu, Xiongkuo Min, Guangtao Zhai, Qiang Hu, Jiezhang Cao, Yu Zhou, Wei Sun, Farong Wen, Zitong Xu, Yingjie Zhou, Huiyu Duan, Lu Liu, Jiarui Wang, Siqi Luo, Chunyi Li, Li Xu, Zicheng Zhang, Yue Shi, Yubo Wang, Minghong Zhang, Chunchao Guo, Zhichao Hu, Mingtao Chen, Xiele Wu, Xin Ma, Zhaohe Lv, Yuanhao Xue, Jiaqi Wang, Xinxing Sha, Radu Timofte, et al. NTIRE 2026 X-AIGC Quality Assessment Challenge: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [35] Andrey Moskalenko, Alexey Bryncev, Ivan Kosmynin, Kira Shilovskaya, Mikhail Erofeev, Dmitry Vatolin, Radu Timofte, et al. NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [36] Seungjun Nah, Sungyong Baik, Seokil Hong, Gyeongsik Moon, Sanghyun Son, Radu Timofte, and Kyoung Mu Lee. NTIRE 2019 challenge on video deblurring and super-resolution: Dataset and study. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
- [37] Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. DINOv2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023.
- [38] Hyunhee Park, Eunpil Park, Sangmin Lee, Radu Timofte, et al. NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [39] William Peebles and Saining Xie. Scalable diffusion models with transformers. arXiv preprint arXiv:2212.09748, 2022.
- [40] Georgy Perevozchikov, Daniil Vladimirov, Radu Timofte, et al. NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [41] Guanyi Qin, Jie Liang, Bingbing Zhang, Lishen Qu, Ya-nan Guan, Hui Zeng, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Professional Image Quality Assessment (Track 1). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [42] Xingyu Qiu, Yuqian Fu, Jiawei Geng, Bin Ren, Jiancheng Pan, Zongwei Wu, Hao Tang, Yanwei Fu, Radu Timofte, Nicu Sebe, Mohamed Elhoseiny, et al. The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [43] Lishen Qu, Yao Liu, Jie Liang, Hui Zeng, Wen Dai, Ya-nan Guan, Guanyi Qin, Shihao Zhou, Jufeng Yang, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track 2). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [44] Anurag Ranjan and Michael J Black. Optical flow estimation using a spatial pyramid network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4161–4170, 2017.
- [45] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloe Rolland, Laura Gustafson, Eric Mintun, Junting Pan, Kalyan Vasudev Alwala, Nicolas Carion, Chao-Yuan Wu, Ross Girshick, Piotr Dollár, and Christoph Feichtenhofer. SAM 2: Segment anything in images and videos. arXiv preprint arXiv:…, 2024.
- [46] Bin Ren, Hang Guo, Yan Shu, Jiaqi Ma, Ziteng Cui, Shuhong Liu, Guofeng Mei, Lei Sun, Zongwei Wu, Fahad Shahbaz Khan, Salman Khan, Radu Timofte, Yawei Li, et al. The Eleventh NTIRE 2026 Efficient Super-Resolution Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [47] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pages 234–241. Springer, 2015.
- [48] Tim Seizinger, Florin-Alexandru Vasluianu, Marcos V. Conde, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. The First Controllable Bokeh Rendering Challenge at NTIRE 2026. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [49] Oriane Siméoni, Huy V. Vo, Maximilian Seitzer, Federico Baldassarre, Maxime Oquab, Cijo Jose, Vasil Khalidov, Marc Szafraniec, Seungeun Yi, Michaël Ramamonjisoa, Francisco Massa, Daniel Haziza, Luca Wehrstedt, Jianyuan Wang, Timothée Darcet, Théo Moutakanni, Leonel Sentana, Claire Roberts, Andrea Vedaldi, Jamie Tolan, John Brandt, Camille Couprie, et al. 2025.
- [50] Lei Sun, Hang Guo, Bin Ren, Shaolin Su, Xian Wang, Danda Pani Paudel, Luc Van Gool, Radu Timofte, Yawei Li, et al. The Third Challenge on Image Denoising at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [51] Lei Sun, Weilun Li, Xian Wang, Zhendong Li, Letian Shi, Dannong Xu, Deheng Zhang, Mengshun Hu, Shuang Guo, Shaolin Su, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. The Second Challenge on Event-Based Image Deblurring at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [52] Lei Sun, Xiaolong Qian, Qi Jiang, Xian Wang, Yao Gao, Kailun Yang, Kaiwei Wang, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. NTIRE 2026 The First Challenge on Blind Computational Aberration Correction: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [53] Yapeng Tian, Yulun Zhang, Yun Fu, and Chenliang Xu. TDAN: Temporally-deformable alignment network for video super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3360–3369, 2020.
- [54] Florin-Alexandru Vasluianu, Tim Seizinger, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Learning-Based Ambient Lighting Normalization: NTIRE 2026 Challenge Results and Findings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [55] Florin-Alexandru Vasluianu, Tim Seizinger, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Advances in Single-Image Shadow Removal: Results from the NTIRE 2026 Challenge. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [56] Team Wan, Ang Wang, Baole Ai, Bin Wen, Chaojie Mao, Chen-Wei Xie, Di Chen, Feiwu Yu, Haiming Zhao, Jianxiao Yang, et al. Wan: Open and advanced large-scale video generative models. arXiv preprint arXiv:2503.20314, 2025.
- [57] Jingkai Wang, Jue Gong, Zheng Chen, Kai Liu, Jiatong Li, Yulun Zhang, Radu Timofte, et al. The Second Challenge on Real-World Face Restoration at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [58] Longguang Wang, Yulan Guo, Yingqian Wang, Juncheng Li, Sida Peng, Ye Zhang, Radu Timofte, Minglin Chen, Yi Wang, Qibin Hu, Wenjie Lei, et al. NTIRE 2026 Challenge on 3D Content Super-Resolution: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [59] Shuyun Wang, Hu Zhang, Xin Shen, Dadong Wang, and Xin Yu. Blind bitstream-corrupted video recovery via metadata-guided diffusion model. In Proceedings of the Computer Vision and Pattern Recognition Conference, pages 22975–22984, 2025.
- [60] Xintao Wang, Kelvin CK Chan, Ke Yu, Chao Dong, and Chen Change Loy. EDVR: Video restoration with enhanced deformable convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
- [61] Yao Wang and Qin-Fan Zhu. Error control and concealment for video communication: A review. Proceedings of the IEEE, 86(5):974–997, 1998.
- [62] Yingqian Wang, Zhengyu Liang, Fengyuan Zhang, Wending Zhao, Longguang Wang, Juncheng Li, Jungang Yang, Radu Timofte, Yulan Guo, et al. NTIRE 2026 Challenge on Light Field Image Super-Resolution: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [63] Chenfei Wu, Jiahao Li, Jingren Zhou, Junyang Lin, Kaiyuan Gao, Kun Yan, Sheng-ming Yin, Shuai Bai, Xiao Xu, Yilei Chen, et al. Qwen-Image technical report. arXiv preprint arXiv:2508.02324, 2025.
- [64] Rui Xu, Xiaoxiao Li, Bolei Zhou, and Chen Change Loy. Deep flow-guided video inpainting. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3723–3732, 2019.
- [65] Jiebin Yan, Chenyu Tu, Qinghua Lin, Zongwei Wu, Weixia Zhang, Zhihua Wang, Peibei Cao, Yuming Fang, Xiaoning Liu, Zhuyun Zhou, Radu Timofte, et al. Efficient Low Light Image Enhancement: NTIRE 2026 Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [66] S. Ye, M. Ouaret, F. Dufaux, and T. Ebrahimi. Hybrid spatial and temporal error concealment for distributed video coding. In 2008 IEEE International Conference on Multimedia and Expo, pages 633–636. IEEE, 2008.
- [67] Peng Yi, Zhongyuan Wang, Kui Jiang, Junjun Jiang, and Jiayi Ma. Progressive fusion video super-resolution network via exploiting non-local spatio-temporal correlations. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3106–3115, 2019.
- [68] Pierluigi Zama Ramirez, Fabio Tosi, Luigi Di Stefano, Radu Timofte, Alex Costanzino, Matteo Poggi, Samuele Salti, Stefano Mattoccia, et al. NTIRE 2026 Challenge on High-Resolution Depth of non-Lambertian Surfaces. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [69] Bing Zhang, Ran Ma, Yu Cao, and Ping An. Swin-VEC: Video Swin Transformer-based GAN for video error concealment of VVC. The Visual Computer, 40(10):7335–7347, 2024.
- [70] Kai Zhang, Yawei Li, Wangmeng Zuo, Lei Zhang, Luc Van Gool, and Radu Timofte. Plug-and-play image restoration with deep denoiser prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(10):6360–6376.
- [71] Yan Zhong, Qiufang Ma, Zhen Wang, Tingting Jiang, Radu Timofte, et al. NTIRE 2026 Challenge Report on Anomaly Detection of Face Enhancement for UGC Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
- [72] Shangchen Zhou, Chongyi Li, Kelvin CK Chan, and Chen Change Loy. ProPainter: Improving propagation and transformer for video inpainting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10477–10486, 2023.
- [73] Shangchen Zhou, Chongyi Li, Kelvin C.K. Chan, and Chen Change Loy. ProPainter: Improving propagation and transformer for video inpainting. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2023.
Acknowledgements
This work was conducted in the JC STEM Lab of Machine Learning and Computer Vision funded by The Hong Kong Jockey Club C...