LoViF 2026 Challenge on Real-World All-in-One Image Restoration: Methods and Results
Pith reviewed 2026-05-10 02:47 UTC · model grok-4.3
The pith
The LoViF 2026 Challenge provides a unified benchmark for real-world all-in-one image restoration across multiple degradations.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The LoViF Challenge establishes a unified benchmark for real-world all-in-one image restoration under diverse degradation conditions including blur, low-light, haze, rain, and snow. It received nine valid submissions from 124 registered participants, and the detailed analysis of these methods highlights effective approaches while setting a reference point for future research in unified real-world image restoration.
What carries the argument
The unified benchmark that evaluates restoration models for robustness and generalization across multiple degradation categories in a common framework.
If this is right
- Models can be compared directly on their ability to handle several real-world degradations simultaneously without knowing the type in advance.
- Effective techniques identified in the submissions become reference points for building more general restoration systems.
- The benchmark allows consistent tracking of progress in low-level vision tasks that were previously evaluated separately.
- Future methods can be tested against the same standard to demonstrate improvement over current submissions.
Where Pith is reading between the lines
- All-in-one models could replace multiple specialized tools in applications such as mobile photography or surveillance video.
- If the benchmark images capture typical real-world mixtures, research effort may shift away from single-degradation solutions.
- Extending the challenge to video sequences or additional degradation types would test whether the same unified approach scales.
Load-bearing premise
The nine submitted methods and the chosen test images fairly represent the state of the art and the full variety of real-world degradations without selection bias or undisclosed tuning.
What would settle it
A new all-in-one restoration method that substantially outperforms every submitted entry on the exact same test images, or new evidence that the test set misses common real-world degradation combinations.
Original abstract
This paper presents a review for the LoViF Challenge on Real-World All-in-One Image Restoration. The challenge aimed to advance research on real-world all-in-one image restoration under diverse real-world degradation conditions, including blur, low-light, haze, rain, and snow. It provided a unified benchmark to evaluate the robustness and generalization ability of restoration models across multiple degradation categories within a common framework. The competition attracted 124 registered participants and received 9 valid final submissions with corresponding fact sheets, significantly contributing to the progress of real-world all-in-one image restoration. This report provides a detailed analysis of the submitted methods and corresponding results, emphasizing recent progress in unified real-world image restoration. The analysis highlights effective approaches and establishes a benchmark for future research in real-world low-level vision.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript reports on the LoViF 2026 Challenge for real-world all-in-one image restoration. It describes the challenge goal of evaluating unified models across multiple real-world degradations (blur, low-light, haze, rain, snow), states that 124 participants registered and 9 valid final submissions with fact sheets were received, analyzes the submitted methods, and concludes that the event and its benchmark contribute to progress in the field.
Significance. Competition reports of this type can usefully document participation levels and catalog current approaches in a rapidly evolving area. The reported numbers establish interest in unified restoration frameworks, and the provision of fact sheets plus method analysis offers a snapshot that future work can reference. No machine-checked proofs or parameter-free derivations are present, but the factual recording of submissions and the emphasis on a common benchmark are the primary strengths if the underlying evaluation is reproducible.
minor comments (2)
- [Abstract] The abstract credits the challenge with 'significantly contributing to the progress' of the field; this phrasing is common but would be stronger if tied to concrete observations from the method analysis (e.g., which architectural choices appeared most effective across degradations).
- The manuscript would benefit from a brief explicit statement of the test-set construction criteria and the precise metrics used for ranking, so readers can independently judge whether the 9 submissions fairly sample the space of real-world degradations.
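The report as summarized here does not state the ranking metric. Assuming the common convention for restoration challenges of averaging PSNR per degradation category and then across categories (so each degradation type weighs equally), a minimal sketch of such a scoring scheme could look like this; the function names `psnr` and `challenge_score` are illustrative, not from the paper:

```python
import numpy as np

def psnr(restored: np.ndarray, reference: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images in [0, peak]."""
    mse = np.mean((restored.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def challenge_score(per_category_psnrs: dict[str, list[float]]) -> float:
    """Average PSNR within each degradation category, then across categories,
    so blur, low-light, haze, rain, and snow each contribute equally."""
    return float(np.mean([np.mean(v) for v in per_category_psnrs.values()]))
```

Whether LoViF 2026 ranked by PSNR alone, added SSIM or perceptual metrics, or weighted categories differently is exactly the detail the comment above asks the authors to disclose.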
Simulated Author's Rebuttal
We thank the referee for the positive review and the recommendation to accept the manuscript. The report accurately captures the challenge's goals, participation statistics, and the value of the provided benchmark and method analysis for the field of real-world all-in-one image restoration.
Circularity Check
No significant circularity; factual competition summary
full rationale
The document is a competition report summarizing participation (124 registrations, 9 submissions), method descriptions, and benchmark results for real-world all-in-one restoration. No equations, derivations, predictions, fitted parameters, or technical claims exist that could reduce to self-definition, fitted inputs, or self-citation chains. Central statements are organizational facts about the event and submitted approaches, with no load-bearing premises requiring external verification beyond the reported outcomes. This is self-contained factual reporting with no circular steps.
Reference graph
Works this paper leans on
- [1] I Chen, Wei-Ting Chen, Yu-Wei Liu, Yuan-Chun Chiang, Sy-Yen Kuo, Ming-Hsuan Yang, et al. UniRestore: Unified perceptual and task-oriented image restoration model using diffusion prior. In CVPR, 2025.
- [2] Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. In ECCV. Springer.
- [3] Xiang Chen, Hao Li, Mingqiang Li, and Jinshan Pan. Learning a sparse transformer network for effective image deraining. In CVPR, 2023.
- [4] Xiang Chen, Jinshan Pan, Jiangxin Dong, Jian Yang, and Jinhui Tang. FoundIR-v2: Optimizing pre-training data mixtures for image restoration foundation model. arXiv preprint arXiv:2512.09282, 2025.
- [5] Yuning Cui, Wenqi Ren, and Alois Knoll. Bio-inspired image restoration. In NeurIPS, 2025.
- [6] Yuning Cui, Wenqi Ren, Boxin Shi, and Alois Knoll. Visual-in-visual: A unified and efficient baseline for image restoration. IEEE TPAMI, 2026.
- [7] Guanglu Dong, Chunlei Li, Chao Ren, Jingliang Hu, Yilei Shi, Xiao Xiang Zhu, and Lichao Mou. Learning domain-aware task prompt representations for multi-domain all-in-one image restoration. arXiv preprint arXiv:2603.01725.
- [8] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik Lorenz, Axel Sauer, et al. Scaling rectified flow transformers for high-resolution image synthesis. In ICML, 2024.
- [9] Qiyuan Guan, Qianfeng Yang, Xiang Chen, Tianyu Song, Guiyue Jin, and Jiyu Jin. WeatherBench: A real-world benchmark dataset for all-in-one adverse weather image restoration. In ACM MM, 2025.
- [10] Junjun Jiang, Zengyuan Zuo, Gang Wu, Kui Jiang, and Xianming Liu. A survey on all-in-one image restoration: Taxonomy, evaluation and future trends. IEEE TPAMI, 2025.
- [11] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV. Springer, 2016.
- [12] Lingshun Kong, Jiangxin Dong, Jianjun Ge, Mingqiang Li, and Jinshan Pan. Efficient frequency domain-based transformers for high-quality image deblurring. In CVPR, 2023.
- [13] Boyun Li, Xiao Liu, Peng Hu, Zhongqin Wu, Jiancheng Lv, and Xi Peng. All-in-one image restoration for unknown corruption. In CVPR, 2022.
- [14] Hao Li, Xiang Chen, Jiangxin Dong, Jinhui Tang, and Jinshan Pan. FoundIR: Unleashing million-scale training data to advance foundation models for image restoration. In ICCV.
- [15] Xin Li, Yeying Jin, Xin Jin, Zongwei Wu, Bingchen Li, Yufei Wang, Wenhan Yang, Yu Li, Zhibo Chen, Bihan Wen, et al. NTIRE 2025 challenge on day and night raindrop removal for dual-focused images: Methods and results. In CVPR Workshop, 2025.
- [16] Xin Li, Daoli Xu, Wei Luo, Guoqiang Xiang, Haoran Li, Chengyu Zhuang, Zhibo Chen, Jian Guan, Weiping Li, et al. LoViF 2026 the first challenge on human-oriented semantic image quality assessment: Methods and results. In CVPR Workshop, 2026.
- [17] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In CVPRW, pages 136–144, 2017.
- [18] Wei Luo, Yiting Lu, Xin Li, Haoran Li, Fengbin Guan, Chen Gao, Xin Jin, Yong Li, Zhibo Chen, et al. LoViF 2026 the first challenge on holistic quality assessment for 4D world model (PhyScore). In CVPR Workshop, 2026.
- [19] Vaishnav Potlapalli, Syed Waqas Zamir, Salman H Khan, and Fahad Shahbaz Khan. PromptIR: Prompting for all-in-one image restoration. NeurIPS, 2024.
- [20] Chenghao Qian, Xin Li, Yeying Jin, Shangquan Sun, et al. LoViF 2026 the first challenge on weather removal in videos. In CVPR Workshop, 2026.
- [21] Qiyu Rong, Hongyuan Jing, Mengmeng Zhang, Jinlong Li, and Mengfei Han. STRRNet: Semantics-guided two-stage raindrop removal network. In CVPR, 2025.
- [22] Eduard Zamfir, Zongwei Wu, Nancy Mehta, Yuedong Tan, Danda Pani Paudel, Yulun Zhang, and Radu Timofte. Complexity experts are task-discriminative learners for any image restoration. In CVPR, 2025.
- [23] Jusheng Zhang, Qinhan Lyu, Sizhuo Ma, Sheng Cao, Jian Wang, Xin Li, Keze Wang, Yongsen Zheng, Jing Yang, et al. The 1st LoViF challenge on efficient VLM for multimodal creative quality scoring: Methods and results. In CVPR Workshop, 2026.
- [24] Dian Zheng, Xiao-Ming Wu, Shuzhou Yang, Jian Zhang, Jian-Fang Hu, and Wei-Shi Zheng. Selective hourglass mapping for universal image restoration based on diffusion model. In CVPR, 2024.
- [25] Wenbin Zou, Hongxia Gao, Weipeng Yang, and Tongtong Liu. Wave-Mamba: Wavelet state space model for ultra-high-definition low-light image enhancement. In ACM MM, 2024.