The First Controllable Bokeh Rendering Challenge at NTIRE 2026
Pith reviewed 2026-05-08 16:13 UTC · model grok-4.3
The pith
The first NTIRE controllable bokeh challenge shows that eight teams mostly refined an existing baseline for rendering depth-of-field effects on complex portraits.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
This study presents the outcomes of the first Controllable Bokeh Rendering Challenge at NTIRE and highlights the most effective submitted methodologies. In total, 44 participants registered for the competition, of which 8 teams submitted valid solutions after the conclusion of the final test phase. All submissions were evaluated on unseen images, focusing on portraits and intricate subjects with complex and visually appealing bokeh phenomena. In addition to the first track focusing on established quantitative fidelity metrics, we conducted a qualitative user study with a panel of experts for a second track focusing on perceptual assessment. As this was the inaugural challenge on this topic, most of the participants focused on refining and extending the Bokehlicious baseline method.
What carries the argument
The Bokehlicious baseline method, which most submitted solutions refined and extended to control bokeh shape, size, and placement while preserving subject sharpness.
If this is right
- Future work can treat the submitted solutions as reference points when developing new controllable bokeh algorithms.
- Dual-track evaluation combining metrics with expert judgment becomes a practical standard for assessing visual realism.
- Datasets focused on portraits with complex background structures are now validated as useful benchmarks.
- Incremental improvement of existing pipelines remains competitive when entirely new architectures are not yet mature.
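The dual-track idea can be made concrete as a simple rank aggregation. The challenge report does not specify how, or whether, the two tracks were combined, so the averaging rule, the team names, and the scores below are purely hypothetical illustrations:

```python
def aggregate_ranks(fidelity_scores, perceptual_scores):
    """Combine two evaluation tracks by averaging per-team ranks.

    Higher scores are better in both tracks. This Borda-style rule is
    a hypothetical illustration, not the challenge's actual protocol.
    """
    def ranks(scores):
        # Rank 1 = best (highest score).
        order = sorted(scores, key=scores.get, reverse=True)
        return {team: i + 1 for i, team in enumerate(order)}

    r_fid = ranks(fidelity_scores)
    r_per = ranks(perceptual_scores)
    combined = {t: (r_fid[t] + r_per[t]) / 2 for t in fidelity_scores}
    # Best team first (lowest mean rank).
    return sorted(combined, key=combined.get)

# Hypothetical scores for three teams:
fidelity = {"A": 31.2, "B": 30.8, "C": 29.5}    # e.g. PSNR in dB
perceptual = {"A": 3.4, "B": 4.1, "C": 3.9}     # e.g. mean expert rating
print(aggregate_ranks(fidelity, perceptual))    # → ['B', 'A', 'C']
```

The point of the sketch is that a team strong on fidelity alone (here "A") need not top the combined order once expert judgment enters, which is exactly the gap the second track is meant to expose.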
Where Pith is reading between the lines
- The dominance of baseline refinement over novel designs signals that the field is still at an early stage.
- Perceptual studies may reveal quality gaps that pure quantitative metrics miss in bokeh synthesis.
- Organizers of future challenges could expand test scenes beyond portraits to test generalization.
Load-bearing premise
That the eight submitted solutions together with the chosen quantitative and perceptual evaluation tracks adequately represent current progress in controllable bokeh rendering.
What would settle it
A new method that achieves substantially higher scores than all eight entries on both the quantitative metrics and the expert perceptual ratings on the same unseen test set.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript reports the outcomes of the first NTIRE 2026 Controllable Bokeh Rendering Challenge. It states that 44 participants registered and that 8 teams submitted valid solutions, all of which were evaluated on unseen test images (portraits and complex subjects) using a quantitative fidelity track plus an expert perceptual study track. The paper observes that most entries refined and extended the Bokehlicious baseline.
Significance. As the inaugural challenge on controllable bokeh rendering, the report documents participation rates, evaluation protocols, and the current reliance on a single baseline. This establishes an initial public benchmark and evaluation framework for the subfield, which can guide future submissions if the manuscript includes concrete metric values, rankings, and analysis of successful approaches.
major comments (1)
- [Abstract / Results] The abstract and provided text state that the challenge used quantitative metrics and a perceptual study but report no numerical results, rankings, or analysis of why particular refinements to Bokehlicious succeeded or failed. This omission is load-bearing for the central claim of 'highlighting the most effective submitted methodologies' and prevents readers from assessing progress.
minor comments (2)
- [Evaluation] Clarify the exact quantitative metrics used in the first track and the protocol for the expert perceptual study (e.g., number of experts, rating scale, statistical analysis).
- [Introduction] Provide a brief description or citation for the Bokehlicious baseline so readers unfamiliar with it can understand what refinements were made.
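For context, "established quantitative fidelity metrics" in prior bokeh challenges has typically meant PSNR and SSIM; whether this challenge used exactly these metrics is an assumption, since the report does not say. A minimal PSNR sketch under that assumption:

```python
import numpy as np

def psnr(pred, target, peak=255.0):
    """Peak signal-to-noise ratio in dB between two uint8-range images.

    A standard fidelity metric; whether the challenge used exactly this
    formulation is an assumption, not stated in the report.
    """
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Synthetic check: a ground-truth image vs. a mildly noisy prediction.
rng = np.random.default_rng(0)
gt = rng.integers(0, 256, size=(64, 64, 3))
noisy = np.clip(gt + rng.normal(0, 5, gt.shape), 0, 255)
print(round(psnr(noisy, gt), 1))  # typically ~34 dB for sigma-5 noise
```

Such pixel-wise metrics reward blur placement that matches the reference exactly, which is precisely why the perceptual track can rank entries differently.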
Simulated Author's Rebuttal
We thank the referee for the detailed review and constructive suggestion. We agree that the manuscript must include concrete results to support its claims and will revise accordingly.
Referee: [Abstract / Results] The abstract and provided text state that the challenge used quantitative metrics and a perceptual study but report no numerical results, rankings, or analysis of why particular refinements to Bokehlicious succeeded or failed. This omission is load-bearing for the central claim of 'highlighting the most effective submitted methodologies' and prevents readers from assessing progress.
Authors: We agree that the current version of the manuscript does not report the specific numerical fidelity scores, final rankings from either track, or analysis of which refinements to the Bokehlicious baseline proved most effective. This information is necessary to substantiate the claim of highlighting the most effective methodologies. In the revised manuscript we will add (1) the quantitative metric values for all eight valid submissions on the unseen test set, (2) the rankings produced by both the fidelity track and the expert perceptual study, and (3) a concise analysis of the architectural and training modifications that distinguished the top-performing entries from the baseline. These additions will be placed in a new Results section and referenced in the abstract.
revision: yes
Circularity Check
Factual competition report with no derivations or self-referential claims
This paper is a report on the outcomes of the first Controllable Bokeh Rendering Challenge at NTIRE 2026. It states participation numbers (44 registered, 8 valid submissions), describes the evaluation protocol (quantitative fidelity metrics plus expert perceptual study), and notes that most entries refined the Bokehlicious baseline. No equations, derivations, fitted parameters, predictions, or load-bearing self-citations appear in the provided text or abstract. The central content is a factual summary of event participation and results with no chain that reduces any claim to its own inputs by construction.
Axiom & Free-Parameter Ledger
Reference graph
Works this paper leans on
-
[1]
The manual of photography
Elizabeth Allen and Sophie Triantaphillidou. The manual of photography. CRC Press, 2012. 2
2012
-
[2]
NT-HAZE: A Benchmark Dataset for Realistic Night-time Image Dehazing
Radu Ancuti, Codruta Ancuti, Radu Timofte, and Cosmin Ancuti. NT-HAZE: A Benchmark Dataset for Realistic Night-time Image Dehazing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[3]
NTIRE 2026 Nighttime Image Dehazing Challenge Report
Radu Ancuti, Alexandru Brateanu, Florin Vasluianu, Raul Balmez, Ciprian Orhei, Codruta Ancuti, Radu Timofte, Cosmin Ancuti, et al. NTIRE 2026 Nighttime Image Dehazing Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[4]
Defocus magnification
Soonmin Bae and Frédo Durand. Defocus magnification. In Computer graphics forum, volume 26-3, pages 571–579. Wiley Online Library, 2007. 1
2007
-
[5]
Fast bilateral-space stereo for synthetic defocus
Jonathan T Barron, Andrew Adams, YiChang Shih, and Carlos Hernández. Fast bilateral-space stereo for synthetic defocus. In CVPR, 2015. 1
2015
-
[6]
Sterefo: Efficient image refocusing with stereo vision
Benjamin Busam, Matthieu Hog, Steven McDonagh, and Gregory Slabaugh. Sterefo: Efficient image refocusing with stereo vision. In ICCVW, 2019. 1
2019
-
[7]
NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild: Datasets, Results, and Methods
Jie Cai, Kangning Yang, Zhiyuan Li, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on Single Image Reflection Removal in the Wild: Datasets, Results, and Methods. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
2026
-
[8]
Simple baselines for image restoration
Liangyu Chen, Xiaojie Chu, Xiangyu Zhang, and Jian Sun. Simple baselines for image restoration. In ECCV, 2022. 4, 7
2022
-
[9]
The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview
Zheng Chen, Kai Liu, Jingkai Wang, Xianglong Yan, Jianze Li, Ziqing Zhang, Jue Gong, Jiatong Li, Lei Sun, Xiaoyang Liu, Radu Timofte, Yulun Zhang, et al. The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) W...
2026
-
[10]
Low Light Image Enhancement Challenge at NTIRE 2026
George Ciubotariu, Sharif S M A, Abdur Rehman, Fayaz Ali, Rizwan Ali Naqvi, Marcos Conde, Radu Timofte, et al. Low Light Image Enhancement Challenge at NTIRE 2026. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[11]
High FPS Video Frame Interpolation Challenge at NTIRE 2026
George Ciubotariu, Zhuyun Zhou, Yeying Jin, Zongwei Wu, Radu Timofte, et al. High FPS Video Frame Interpolation Challenge at NTIRE 2026. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[12]
Lens-to-lens bokeh effect transformation
Marcos V Conde, Manuel Kolmet, Tim Seizinger, Tom E Bishop, Radu Timofte, Xiangyu Kong, Dafeng Zhang, Jinlong Wu, Fan Wang, Juewen Peng, et al. Lens-to-lens bokeh effect transformation. NTIRE 2023 challenge report. In CVPR, 2023.
2023
-
[13]
Distributed ray tracing
Robert L Cook, Thomas Porter, and Loren Carpenter. Distributed ray tracing. In Proceedings of the 11th annual conference on Computer graphics and interactive techniques, pages 137–145, 1984. 1
1984
-
[14]
The phenomenon of eclipsed bokeh
Paul Debevec. The phenomenon of eclipsed bokeh. In ACM SIGGRAPH 2020 Posters, SIGGRAPH ’20, New York, NY, USA, 2020. Association for Computing Machinery. 2
2020
-
[15]
NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge Report
Andrei Dumitriu, Aakash Ralhan, Florin Miron, Florin Tatui, Radu Tudor Ionescu, Radu Timofte, et al. NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[16]
Stacked deep multi-scale hierarchical network for fast bokeh effect rendering from a single image
Saikat Dutta, Sourya Dipta Das, Nisarg A Shah, and Anil Kumar Tiwari. Stacked deep multi-scale hierarchical network for fast bokeh effect rendering from a single image. In CVPRW, 2021. 2
2021
-
[17]
Photography Retouching Transfer, NTIRE 2026 Challenge: Report
Omar Elezabi, Marcos V. Conde, Zongwei Wu, Yeying Jin, Radu Timofte, et al. Photography Retouching Transfer, NTIRE 2026 Challenge: Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[18]
NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results
Bochen Guan, Jinlong Li, Kangning Yang, Chuang Ke, Jie Cai, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[19]
NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3)
Ya-nan Guan, Shaonan Zhang, Hang Guo, Yawen Wang, Xinying Fan, Jie Liang, Hui Zeng, Guanyi Qin, Lishen Qu, Tao Dai, Shu-Tao Xia, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[20]
NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild
Aleksandr Gushchin, Khaled Abud, Ekaterina Shumitskaya, Artem Filippov, Georgii Bychkov, Sergey Lavrushkin, Mikhail Erofeev, Anastasia Antsiferova, Changsheng Chen, Shunquan Tan, Radu Timofte, Dmitriy Vatolin, et al. NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pa...
2026
-
[21]
Cinematic bokeh rendering for real scenes
Thomas Hach, Johannes Steurer, Arvind Amruth, and Artur Pappenheim. Cinematic bokeh rendering for real scenes. In CVMP, 2015. 1, 2
2015
-
[22]
Masked autoencoders are scalable vision learners
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 16000–16009, 2022. 8
2022
-
[23]
Robust Deepfake Detection, NTIRE 2026 Challenge: Report
Benedikt Hopf, Radu Timofte, et al. Robust Deepfake Detection, NTIRE 2026 Challenge: Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[24]
Bokehflow: Depth-free controllable bokeh rendering via flow matching
Yachuan Huang, Xianrui Luo, Qiwen Wang, Liao Shen, Jiaqi Li, Huiqiang Sun, Zihao Huang, Wei Jiang, and Zhiguo Cao. Bokehflow: Depth-free controllable bokeh rendering via flow matching. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 40, pages 5167–5175,
-
[25]
Rendering natural camera bokeh effect with deep learning
Andrey Ignatov, Jagruti Patel, and Radu Timofte. Rendering natural camera bokeh effect with deep learning. In CVPRW,
-
[26]
Aim 2020 challenge on rendering realistic bokeh
Andrey Ignatov, Radu Timofte, Ming Qian, Congyu Qiao, Jiamin Lin, Zhenyu Guo, Chenghua Li, Cong Leng, Jian Cheng, Juewen Peng, et al. AIM 2020 challenge on rendering realistic bokeh. In ECCVW, 2020. 1
2020
-
[27]
Repurposing diffusion-based image generators for monocular depth estimation
Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Metzger, Rodrigo Caye Daudt, and Konrad Schindler. Repurposing diffusion-based image generators for monocular depth estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9492–9502,
-
[28]
Marigold: Affordable adaptation of diffusion-based image generators for image analysis
Bingxin Ke, Kevin Qu, Tianfu Wang, Nando Metzger, Shengyu Huang, Bo Li, Anton Obukhov, and Konrad Schindler. Marigold: Affordable adaptation of diffusion-based image generators for image analysis, 2025. 7
2025
-
[29]
What is ’bokeh’?
John Kennerdell. What is ’bokeh’? Photo Techniques magazine, May/June:28, 1997. 1
1997
-
[30]
NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge
Aleksei Khalin, Egor Ershov, Artem Panshin, Sergey Korchagin, Georgiy Lobarev, Arseniy Terekhin, Sofiia Dorogova, Amir Shamsutdinov, Yasin Mamedov, Bakhtiyar Khalfin, Bogdan Sheludko, Emil Zilyaev, Nikola Banić, Georgy Perevozchikov, Radu Timofte, et al. NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge. In Proceedings of the IEEE/CVF Con...
2026
-
[31]
Nafbet: Bokeh effect transformation with parameter analysis block based on nafnet
Xiangyu Kong, Fan Wang, Dafeng Zhang, Jinlong Wu, and Zikun Liu. Nafbet: Bokeh effect transformation with parameter analysis block based on nafnet. In CVPRW, 2023. 2
2023
-
[32]
Depth-of-field rendering by pyramidal image processing
Martin Kraus and Magnus Strengert. Depth-of-field rendering by pyramidal image processing. Computer graphics forum, 26(3):645–654, 2007. 1
2007
-
[33]
Imagenet classification with deep convolutional neural networks
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25, 2012. 2
2012
-
[34]
Bokeh-loss gan: multi-stage adversarial training for realistic edge-aware bokeh
Brian Lee, Fei Lei, Huaijin Chen, and Alexis Baudron. Bokeh-loss gan: multi-stage adversarial training for realistic edge-aware bokeh. In ECCVW, 2023. 1, 2
2023
-
[35]
The First Challenge on Mobile Real-World Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview
Jiatong Li, Zheng Chen, Kai Liu, Jingkai Wang, Zihan Zhou, Xiaoyang Liu, Libo Zhu, Radu Timofte, Yulun Zhang, et al. The First Challenge on Mobile Real-World Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[36]
NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Models: Datasets, Methods and Results
Xin Li, Jiachao Gong, Xijun Wang, Shiyao Xiong, Bingchen Li, Suhang Yao, Chao Zhou, Zhibo Chen, Radu Timofte, et al. NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Models: Datasets, Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[37]
NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual-Focused Images: Methods and Results
Xin Li, Yeying Jin, Suhang Yao, Beibei Lin, Zhaoxin Fan, Wending Yan, Xin Jin, Zongwei Wu, Bingchen Li, Peishu Shi, Yufei Yang, Yu Li, Zhibo Chen, Bihan Wen, Robby Tan, Radu Timofte, et al. NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual-Focused Images: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer...
2026
-
[38]
The First Challenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview
Kai Liu, Haoyang Yue, Zeli Lin, Zheng Chen, Jingkai Wang, Jue Gong, Radu Timofte, Yulun Zhang, et al. The First Challenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[39]
3D Restoration and Reconstruction in Adverse Conditions: RealX3D Challenge Results
Shuhong Liu, Ziteng Cui, Chenyu Bao, Xuangeng Chu, Lin Gu, Bin Ren, Radu Timofte, Marcos V. Conde, et al. 3D Restoration and Reconstruction in Adverse Conditions: RealX3D Challenge Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[40]
NTIRE 2026 X-AIGC Quality Assessment Challenge: Methods and Results
Xiaohong Liu, Xiongkuo Min, Guangtao Zhai, Qiang Hu, Jiezhang Cao, Yu Zhou, Wei Sun, Farong Wen, Zitong Xu, Yingjie Zhou, Huiyu Duan, Lu Liu, Jiarui Wang, Siqi Luo, Chunyi Li, Li Xu, Zicheng Zhang, Yue Shi, Yubo Wang, Minghong Zhang, Chunchao Guo, Zhichao Hu, Mingtao Chen, Xiele Wu, Xin Ma, Zhaohe Lv, Yuanhao Xue, Jiaqi Wang, Xinxing Sha, Radu Timofte, et...
2026
-
[41]
Defocus to focus: Photo-realistic bokeh rendering by fusing defocus and radiance priors
Xianrui Luo, Juewen Peng, Ke Xian, Zijin Wu, and Zhiguo Cao. Defocus to focus: Photo-realistic bokeh rendering by fusing defocus and radiance priors. Information Fusion, 89:320–335, 2023. 1
2023
-
[42]
Neural bokeh: Learning lens blur for computational videography and out-of-focus mixed reality
David Mandl, Shohei Mori, Peter Mohr, Yifan Peng, Tobias Langlotz, Dieter Schmalstieg, and Denis Kalkofen. Neural bokeh: Learning lens blur for computational videography and out-of-focus mixed reality. In 2024 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2024. 1, 2
2024
-
[43]
The filmmaker’s eye: The language of the lens: The power of lenses and the expressive cinematic image
Gustavo Mercado. The filmmaker’s eye: The language of the lens: The power of lenses and the expressive cinematic image. Routledge, 2019. 1
2019
-
[44]
NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results
Andrey Moskalenko, Alexey Bryncev, Ivan Kosmynin, Kira Shilovskaya, Mikhail Erofeev, Dmitry Vatolin, Radu Timofte, et al. NTIRE 2026 Challenge on Video Saliency Prediction: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[45]
Generative refocusing: Flexible defocus control from a single image
Chun-Wei Tuan Mu, Cheng-De Fan, Jia-Bin Huang, and Yu-Lun Liu. Generative refocusing: Flexible defocus control from a single image. arXiv preprint arXiv:2512.16923, 2025. 1, 2
-
[46]
Bokeh effect rendering with vision transformers
Hariharan Nagasubramaniam and Rabih Younes. Bokeh effect rendering with vision transformers. Authorea Preprints,
-
[47]
NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Results
Hyunhee Park, Eunpil Park, Sangmin Lee, Radu Timofte, et al. NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[48]
Bokehme: When neural rendering meets classical rendering
Juewen Peng, Zhiguo Cao, Xianrui Luo, Hao Lu, Ke Xian, and Jianming Zhang. Bokehme: When neural rendering meets classical rendering. In CVPR, 2022. 1
2022
-
[49]
Bokehme++: Harmonious fusion of classical and neural rendering for versatile bokeh creation
Juewen Peng, Zhiguo Cao, Xianrui Luo, Ke Xian, Wenfeng Tang, Jianming Zhang, and Guosheng Lin. Bokehme++: Harmonious fusion of classical and neural rendering for versatile bokeh creation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024. 1, 2
2024
-
[50]
Interactive portrait bokeh rendering system
Juewen Peng, Xianrui Luo, Ke Xian, and Zhiguo Cao. Interactive portrait bokeh rendering system. In ICIP, 2021. 1
2021
-
[51]
Selective bokeh effect transformation
Juewen Peng, Zhiyu Pan, Chengxin Liu, Xianrui Luo, Huiqiang Sun, Liao Shen, Ke Xian, and Zhiguo Cao. Selective bokeh effect transformation. In CVPR, 2023. 7
2023
-
[52]
Mpib: An mpi-based bokeh rendering framework for realistic partial occlusion effects
Juewen Peng, Jianming Zhang, Xianrui Luo, Hao Lu, Ke Xian, and Zhiguo Cao. Mpib: An mpi-based bokeh rendering framework for realistic partial occlusion effects. In ECCV, 2022. 1, 2
2022
-
[53]
NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results
Georgy Perevozchikov, Daniil Vladimirov, Radu Timofte, et al. NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[54]
Film: Visual reasoning with a general conditioning layer
Ethan Perez, Florian Strub, Harm De Vries, Vincent Dumoulin, and Aaron Courville. Film: Visual reasoning with a general conditioning layer. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018. 7
2018
-
[55]
A lens and aperture camera model for synthetic image generation
Michael Potmesil and Indranil Chakravarty. A lens and aperture camera model for synthetic image generation. ACM SIGGRAPH Computer Graphics, 15(3):297–305, 1981. 1
1981
-
[56]
Depth-guided dense dynamic filtering network for bokeh effect rendering
Kuldeep Purohit, Maitreya Suin, Praveen Kandula, and Rajagopalan Ambasamudram. Depth-guided dense dynamic filtering network for bokeh effect rendering. In ICCVW, 2019. 2
2019
-
[57]
NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Professional Image Quality Assessment (Track 1)
Guanyi Qin, Jie Liang, Bingbing Zhang, Lishen Qu, Ya-nan Guan, Hui Zeng, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Professional Image Quality Assessment (Track 1). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[58]
The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results
Xingyu Qiu, Yuqian Fu, Jiawei Geng, Bin Ren, Jiancheng Pan, Zongwei Wu, Hao Tang, Yanwei Fu, Radu Timofte, Nicu Sebe, Mohamed Elhoseiny, et al. The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[59]
NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track2)
Lishen Qu, Yao Liu, Jie Liang, Hui Zeng, Wen Dai, Ya-nan Guan, Guanyi Qin, Shihao Zhou, Jufeng Yang, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track2). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[60]
The Eleventh NTIRE 2026 Efficient Super-Resolution Challenge Report
Bin Ren, Hang Guo, Yan Shu, Jiaqi Ma, Ziteng Cui, Shuhong Liu, Guofeng Mei, Lei Sun, Zongwei Wu, Fahad Shahbaz Khan, Salman Khan, Radu Timofte, Yawei Li, et al. The Eleventh NTIRE 2026 Efficient Super-Resolution Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[61]
Efficient multi-lens bokeh effect rendering and transformation
Tim Seizinger, Marcos V Conde, Manuel Kolmet, Tom E Bishop, and Radu Timofte. Efficient multi-lens bokeh effect rendering and transformation. In CVPRW, 2023. 1, 2, 7
2023
-
[62]
Bokehlicious: Photorealistic bokeh rendering with controllable apertures
Tim Seizinger, Florin-Alexandru Vasluianu, Marcos V Conde, Zongwei Wu, and Radu Timofte. Bokehlicious: Photorealistic bokeh rendering with controllable apertures. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8908–8917, 2025. 1, 2, 4, 5, 6
2025
-
[63]
Dr. bokeh: Differentiable occlusion-aware bokeh rendering
Yichen Sheng, Zixun Yu, Lu Ling, Zhiwen Cao, Xuaner Zhang, Xin Lu, Ke Xian, Haiting Lin, and Bedrich Benes. Dr. bokeh: Differentiable occlusion-aware bokeh rendering. In CVPR, 2024. 1, 2
2024
-
[64]
Score-based generative modeling through stochastic differential equations
Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In International Conference on Learning Representations, 2021. 8
2021
-
[65]
The Third Challenge on Image Denoising at NTIRE 2026: Methods and Results
Lei Sun, Hang Guo, Bin Ren, Shaolin Su, Xian Wang, Danda Pani Paudel, Luc Van Gool, Radu Timofte, Yawei Li, et al. The Third Challenge on Image Denoising at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[66]
The Second Challenge on Event-Based Image Deblurring at NTIRE 2026: Methods and Results
Lei Sun, Weilun Li, Xian Wang, Zhendong Li, Letian Shi, Dannong Xu, Deheng Zhang, Mengshun Hu, Shuang Guo, Shaolin Su, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. The Second Challenge on Event-Based Image Deblurring at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Wor...
2026
-
[67]
NTIRE 2026 The First Challenge on Blind Computational Aberration Correction: Methods and Results
Lei Sun, Xiaolong Qian, Qi Jiang, Xian Wang, Yao Gao, Kailun Yang, Kaiwei Wang, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. NTIRE 2026 The First Challenge on Blind Computational Aberration Correction: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[68]
Seven ways to improve example-based single image super resolution
Radu Timofte, Rasmus Rothe, and Luc Van Gool. Seven ways to improve example-based single image super resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1865–1873, 2016. 8
2016
-
[69]
Score-based self-supervised MRI denoising
Jiachen Tu, Yaokun Shi, and Fan Lam. Score-based self-supervised MRI denoising. arXiv preprint arXiv:2505.05631,
-
[70]
Learning-Based Ambient Lighting Normalization: NTIRE 2026 Challenge Results and Findings
Florin-Alexandru Vasluianu, Tim Seizinger, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Learning-Based Ambient Lighting Normalization: NTIRE 2026 Challenge Results and Findings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[71]
Advances in Single-Image Shadow Removal: Results from the NTIRE 2026 Challenge
Florin-Alexandru Vasluianu, Tim Seizinger, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Advances in Single-Image Shadow Removal: Results from the NTIRE 2026 Challenge. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[72]
Synthetic depth-of-field with a single-camera mobile phone
Neal Wadhwa, Rahul Garg, David E Jacobs, Bryan E Feldman, Nori Kanazawa, Robert Carroll, Yair Movshovitz-Attias, Jonathan T Barron, Yael Pritch, and Marc Levoy. Synthetic depth-of-field with a single-camera mobile phone. ACM Transactions on Graphics (ToG), 37(4):1–13, 2018. 1, 2
2018
-
[73]
The Second Challenge on Real-World Face Restoration at NTIRE 2026: Methods and Results
Jingkai Wang, Jue Gong, Zheng Chen, Kai Liu, Jiatong Li, Yulun Zhang, Radu Timofte, et al. The Second Challenge on Real-World Face Restoration at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[74]
NTIRE 2026 Challenge on 3D Content Super-Resolution: Methods and Results
Longguang Wang, Yulan Guo, Yingqian Wang, Juncheng Li, Sida Peng, Ye Zhang, Radu Timofte, Minglin Chen, Yi Wang, Qibin Hu, Wenjie Lei, et al. NTIRE 2026 Challenge on 3D Content Super-Resolution: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[75]
Deeplens: shallow depth of field from a single image
Lijun Wang, Xiaohui Shen, Jianming Zhang, Oliver Wang, Zhe Lin, Chih-Yao Hsieh, Sarah Kong, and Huchuan Lu. Deeplens: shallow depth of field from a single image. ACM Transactions on Graphics (TOG), 37(6):1–11, 2018. 1
2018
-
[76]
NTIRE 2026 Challenge on Light Field Image Super-Resolution: Methods and Results
Yingqian Wang, Zhengyu Liang, Fengyuan Zhang, Wending Zhao, Longguang Wang, Juncheng Li, Jungang Yang, Radu Timofte, Yulan Guo, et al. NTIRE 2026 Challenge on Light Field Image Super-Resolution: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[77]
Image quality assessment: from error visibility to structural similarity
Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600–612, 2004. 2
2004
-
[78]
Codabench: Flexible, easy-to-use, and reproducible meta-benchmark platform
Zhen Xu, Sergio Escalera, Adrien Pavão, Magali Richard, Wei-Wei Tu, Quanming Yao, Huan Zhao, and Isabelle Guyon. Codabench: Flexible, easy-to-use, and reproducible meta-benchmark platform. Patterns, 3(7), 2022. 3
2022
-
[79]
Efficient Low Light Image Enhancement: NTIRE 2026 Challenge Report
Jiebin Yan, Chenyu Tu, Qinghua Lin, Zongwei Wu, Weixia Zhang, Zhihua Wang, Peibei Cao, Yuming Fang, Xiaoning Liu, Zhuyun Zhou, Radu Timofte, et al. Efficient Low Light Image Enhancement: NTIRE 2026 Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 2
2026
-
[80]
Depth anything v2
Lihe Yang, Bingyi Kang, Zilong Huang, Zhen Zhao, Xiaogang Xu, Jiashi Feng, and Hengshuang Zhao. Depth anything v2. Advances in Neural Information Processing Systems, 37:21875–21911, 2024. 5
2024