pith. machine review for the scientific record.

arxiv: 2604.17070 · v2 · submitted 2026-04-18 · 💻 cs.CV


NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge Report

Aakash Ralhan, Abdullah Naeem, Akbarali Vakhitov, Amitabh Tripathi, Anav Katwal, Andrei Dumitriu, Anjana Nanditha, Asuka Shin, Ayon Dey, Chun'an Yu, Florin Miron, Florin Tatui, Gaurav Mahesh, Gejalakshmi N, Gundluri Yuvateja Reddy, Guoyi Xu, Harshitha Palaram, Hiroto Shirono, Jeevitha S, Jiachen Tu, Jiajia Liu, Jiji CV, Junhao Chen, Kosuke Shigematsu, Md Tamjidul Hoque, Modugumudi Mahesh, Radu Timofte, Radu Tudor Ionescu, Sang-Chul Lee, Santosh Kumar Vipparthi, Subrahmanyam Murala, Xinger Li, Yang Yang, Yaokun Shi, Yaoxin Jiang


Pith reviewed 2026-05-10 06:28 UTC · model grok-4.3

classification 💻 cs.CV
keywords rip current detection · image segmentation · NTIRE challenge · computer vision · RipVIS benchmark · pretrained models · beach safety · hazard detection

The pith

Pretrained general vision models achieve strong performance on rip current detection and segmentation across diverse beaches.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents the NTIRE 2026 Rip Current Detection and Segmentation Challenge, which tests automatic identification of hazardous rip currents in images from varied global locations and conditions. It introduces a diverse dataset sourced from over ten countries with multiple camera orientations and sea states, then evaluates nine valid submissions that mostly apply pretrained vision models plus augmentation and post-processing. The outcomes indicate that advances in general-purpose models transfer effectively to this safety-critical task. A reader would care because rip currents cause numerous beach fatalities each year, and reliable automated detection could support real-time warning systems.

Core claim

On the RipVIS benchmark, the challenge results show that participant solutions relying on robust pretrained models, combined with strong augmentation and post-processing, produce competitive composite scores on both detection and segmentation. This suggests that rip current understanding benefits strongly from progress in general-purpose vision models, while leaving ample room for future methods tailored to the unique visual structure of rip currents.

What carries the argument

The RipVIS benchmark dataset, paired with a composite ranking score that combines F1 and F2 both at a fixed IoU threshold of 0.50 and averaged over IoU thresholds from 0.40 to 0.95, evaluating both the detection and segmentation tasks.
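That composite score can be sketched as follows. The greedy one-to-one IoU matching, the equal-weight averaging of the four terms, and all function names here are illustrative assumptions, not the organizers' documented protocol:

```python
# Illustrative sketch of the challenge's composite metric:
# F1 and F2 at IoU 0.50, plus their averages over IoU 0.40:0.95.
# Greedy matching and equal weighting are assumptions.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def f_beta(preds, gts, thr, beta):
    """Greedily match predictions to ground truth at IoU >= thr, then F-beta."""
    used, matched = set(), 0
    for p in preds:
        best_iou, best_j = 0.0, None
        for j, g in enumerate(gts):
            if j not in used and iou(p, g) > best_iou:
                best_iou, best_j = iou(p, g), j
        if best_j is not None and best_iou >= thr:
            matched += 1
            used.add(best_j)
    prec = matched / len(preds) if preds else 0.0
    rec = matched / len(gts) if gts else 0.0
    if prec + rec == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * prec * rec / (b2 * prec + rec)

def composite(preds, gts):
    """Equal-weight mean of F1[50], F2[50], F1[40:95], F2[40:95]."""
    thrs = [0.40 + 0.05 * i for i in range(12)]  # 0.40, 0.45, ..., 0.95
    f1_range = sum(f_beta(preds, gts, t, 1) for t in thrs) / len(thrs)
    f2_range = sum(f_beta(preds, gts, t, 2) for t in thrs) / len(thrs)
    return (f_beta(preds, gts, 0.50, 1) + f_beta(preds, gts, 0.50, 2)
            + f1_range + f2_range) / 4
```

With a perfect prediction the score is 1.0; because F2 weights recall more heavily, a missed rip current lowers this composite more than a spurious detection does, which fits the safety framing of the task.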

If this is right

  • Pretrained models with augmentation and post-processing form an effective baseline for rip current tasks.
  • General-purpose vision progress directly aids safety applications involving variable nearshore flows.
  • The benchmark dataset supports standardized future comparisons in this domain.
  • Tailored methods focused on rip-specific visual cues could close remaining performance gaps.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Existing vision systems could be integrated into beach monitoring cameras to provide alerts without requiring entirely new model development.
  • The dataset's multi-country coverage suggests models that generalize across viewpoints may scale to global safety tools.
  • Extending evaluation to video inputs would test whether the same approaches maintain consistency over time.

Load-bearing premise

The composite evaluation score combining F1 and F2 at different IoU thresholds accurately reflects practical performance for rip current detection in real-world deployment scenarios.

What would settle it

A controlled deployment test of the top three submitted models on a new beach location and sea state outside the dataset, measuring their precision against expert human annotations under live conditions.

Figures

Figures reproduced from arXiv: 2604.17070 by the authors listed above.

Figure 1
Figure 1: Examples from the RipVIS dataset [23], which also forms the basis of the RipDetSeg Challenge. The four columns illustrate different camera orientations: (a) aerial bird’s-eye, (b) aerial tilted, (c) elevated beachfront, and (d) water-level beachfront. The examples highlight the diversity of rip currents across locations, types, and viewpoints. Rip currents are visible through disrupted wave-breaking patter…
Figure 3
Figure 3: Overview of Team SiGMoid’s pipeline. A YOLO11m…
Figure 4
Figure 4: Overview of Team Riposte’s pipeline. Training only…
Figure 5
Figure 5: Team Soloseg’s schematic illustration of the standard…
Figure 6
Figure 6: Team KMG’s two-stage pipeline using YOLOv13 and…
Figure 7
Figure 7: Team RIP YuvatejaReddy’s system pipeline. Images are processed by YOLOv8s-seg, which outputs both bounding box coordinates and corresponding polygons per image. These predictions are post-processed using IoU- and IoA-based merging to combine highly overlapping boxes and suppress redundant detections. In parallel, a DINOv3 detector implemented in MMDetection is applied at high resolution and its output is…
Figure 8
Figure 8: Team VisionX’s mask-centric pipeline. Multi-scale in…
Figure 9
Figure 9: Team NTR’s inference pipeline. YOLO11x detection…
Figure 10
Figure 10: Overview of Team Amitabh’s YOLO11s-seg pipeline.
Original abstract

This report presents the NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge, which targets automatic rip current understanding in images. Rip currents are hazardous nearshore flows that cause many beach-related fatalities worldwide, yet remain difficult to identify because their visual appearance varies substantially across beaches, viewpoints, and sea states. To advance research on this safety-critical problem, the challenge builds on the RipVIS benchmark, evaluating both detection and segmentation. The dataset is diverse, sourced from more than 10 countries, with 4 camera orientations and diverse beach and sea conditions. This report describes the dataset, challenge protocol, evaluation methodology, and final results, and summarizes the main insights from the submitted methods. The challenge attracted 159 registered participants and produced 9 valid test submissions across the two tasks. Final rankings are based on a composite score that combines F1[50], F2[50], F1[40:95], and F2[40:95]. Most participant solutions relied on pretrained models, combined with strong augmentation and post-processing design. These results suggest that rip current understanding benefits strongly from the progress of robust general-purpose vision models, while leaving ample room for future methods tailored to their unique visual structure.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 3 minor

Summary. The manuscript reports on the NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Challenge. It describes the RipVIS benchmark dataset (diverse images from >10 countries, 4 camera orientations, varied beach/sea conditions), the two tasks (detection and segmentation), the evaluation protocol based on a composite score of F1[50], F2[50], F1[40:95] and F2[40:95], participation statistics (159 registered, 9 valid test submissions), the final rankings, and the main insights that most submissions used pretrained general-purpose vision models plus augmentation and post-processing, suggesting that rip current understanding benefits from general vision model progress while leaving room for methods tailored to rip currents' unique visual structure.

Significance. If the reported participation, rankings, and method summaries hold, the report is significant for establishing a standardized, diverse benchmark on a safety-critical task with direct potential to reduce beach fatalities. It provides a clear baseline showing transferability of recent general-purpose CV advances (via pretrained models) to this domain and identifies open challenges for specialized techniques. The factual, descriptive nature of the report, with no unsubstantiated causal claims, makes it a useful community resource for tracking progress on rip current detection and segmentation.

major comments (1)
  1. [Evaluation protocol] Evaluation protocol section: the composite score (F1[50] + F2[50] + F1[40:95] + F2[40:95]) is used to produce final rankings and underpins the interpretation of which methods succeed, yet the manuscript provides no justification, weighting rationale, or correlation analysis showing that this metric accurately reflects practical real-world performance for rip current detection in deployment scenarios.
minor comments (3)
  1. [Abstract and Dataset] Abstract and dataset description: the claim of sourcing from 'more than 10 countries' should be accompanied by the exact count and per-country distribution in the main text to support reproducibility and diversity claims.
  2. [Results] Results section: while the report notes that most solutions rely on pretrained models with augmentation and post-processing, a table or summary quantifying the performance gap versus non-pretrained baselines would strengthen the observational insight about benefits from general vision progress.
  3. [Participation] Participation details: the drop from 159 registered participants to 9 valid submissions is reported but not discussed; a short note on common failure modes or submission issues would aid future challenge organizers.
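A minimal version of the correlation analysis the major comment asks for could look like this. Every number below is invented for illustration, and `deployment_hits` stands in for a deployment-side metric (e.g. expert-judged detection rate at a held-out beach) that the organizers do not currently have:

```python
# Hypothetical check of whether the composite score ranks methods
# the same way a deployment metric would. All scores are invented.

def ranks(xs):
    """Rank values (1 = smallest); assumes no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Spearman rank correlation for tie-free data."""
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(xs), ranks(ys)))
    return 1 - 6 * d2 / (n * (n * n - 1))

composite_scores = [0.81, 0.78, 0.74, 0.69, 0.62]  # challenge leaderboard
deployment_hits  = [0.75, 0.77, 0.70, 0.61, 0.55]  # expert-judged, per method

print(spearman(composite_scores, deployment_hits))  # → 0.9
```

A high rank correlation on such data would support the composite score as a proxy for field performance; a low one would substantiate the referee's concern.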

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for the constructive feedback on the evaluation protocol. We agree that additional justification is warranted and will revise the manuscript accordingly.

read point-by-point responses
  1. Referee: Evaluation protocol section: the composite score (F1[50] + F2[50] + F1[40:95] + F2[40:95]) is used to produce final rankings and underpins the interpretation of which methods succeed, yet the manuscript provides no justification, weighting rationale, or correlation analysis showing that this metric accurately reflects practical real-world performance for rip current detection in deployment scenarios.

    Authors: We acknowledge the need for explicit justification. The composite score was chosen to balance precision-recall trade-offs (via F1 and F2) while incorporating both a standard IoU=0.5 threshold and the COCO-style averaged [0.4:0.95] range for robustness to localization quality. F2 weighting prioritizes recall, which aligns with the safety-critical nature of rip current detection where missing a hazard is far costlier than false alarms. This follows conventions from established benchmarks such as COCO and Pascal VOC. A direct correlation study with real-world deployment performance is not feasible here, as it would require operational data from beach safety systems that is unavailable to the challenge organizers. In the revision we will add a concise rationale paragraph in the Evaluation Protocol section, including the safety-motivated weighting and an explicit statement of this limitation. revision: yes
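The recall emphasis the authors appeal to is easy to verify numerically. The precision/recall values below are hypothetical, chosen so that both detectors have the same F1:

```python
# F-beta = (1 + b^2) * P * R / (b^2 * P + R); with b = 2, recall counts
# four times as much as precision. The detector numbers are hypothetical.

def f_beta(precision, recall, beta):
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Two detectors with identical F1 (= 0.72) but opposite trade-offs:
recall_heavy = f_beta(precision=0.60, recall=0.90, beta=2)     # few missed rips
precision_heavy = f_beta(precision=0.90, recall=0.60, beta=2)  # few false alarms

print(round(recall_heavy, 3), round(precision_heavy, 3))  # → 0.818 0.643
```

Under F2 the recall-heavy detector clearly wins, even though F1 cannot distinguish the two, which is exactly the safety-motivated behavior the rebuttal describes.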

Circularity Check

0 steps flagged

No significant circularity

full rationale

The paper is a descriptive challenge report summarizing dataset construction, evaluation protocol, participant submissions, and observed performance trends. It contains no mathematical derivations, fitted parameters, predictions, or load-bearing self-citations. All statements follow directly from reported external submissions and standard metrics without any reduction to internal definitions or ansatzes.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

No free parameters, axioms, or invented entities are involved as this is a descriptive challenge report without theoretical derivations or modeling.

pith-pipeline@v0.9.0 · 5710 in / 1100 out tokens · 51500 ms · 2026-05-10T06:28:49.153979+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

91 extracted references · 6 canonical work pages · 4 internal anchors

  1. [1]

    Ibn-driven rip current analysis using uavs for next-generation coastal surveillance.IEEE Internet of Things Journal, 2025

    Shehzad Ali, Muhammad Saqib, Abdul Khader Jilani Sauda- gar, Muhammad Sajjad, Mohammad Hijji, Yazeed Masaud Alkhrijah, Khan Muhammad, and Victor Hugo C De Al- buquerque. Ibn-driven rip current analysis using uavs for next-generation coastal surveillance.IEEE Internet of Things Journal, 2025. 1

  2. [2]

    NT-HAZE: A Benchmark Dataset for Re- alistic Night-time Image Dehazing

    Radu Ancuti, Codruta Ancuti, Radu Timofte, and Cos- min Ancuti. NT-HAZE: A Benchmark Dataset for Re- alistic Night-time Image Dehazing . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  3. [3]

    NTIRE 2026 Nighttime Image Dehazing Challenge Report

    Radu Ancuti, Alexandru Brateanu, Florin Vasluianu, Raul Balmez, Ciprian Orhei, Codruta Ancuti, Radu Timofte, Cos- min Ancuti, et al. NTIRE 2026 Nighttime Image Dehazing Challenge Report . InProceedings of the IEEE/CVF Confer- ence on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  4. [4]

    Brander, Dale Dominey-Howes, C

    R. Brander, Dale Dominey-Howes, C. Champion, O. Del Vecchio, and B. Brighton. Brief Communication: A new perspective on the Australian rip current hazard.Nat- ural Hazards and Earth System Sciences, 13(6):1687–1690,

  5. [5]

    Brander and A.D

    Robert W. Brander and A.D. Short. Morphodynamics of a large-scale rip current system at Muriwai Beach, New Zealand.Marine Geology, 165(1-4):27–39, 2000. 1

  6. [6]

    Chris Brewster, Richard E

    B. Chris Brewster, Richard E. Gould, and Robert W. Brander. Estimations of rip current rescues and drowning in the United States.Natural Hazards and Earth System Sciences, 19(2): 389–397, 2019. 1

  7. [7]

    NTIRE 2026 Challenge on Single Image Re- flection Removal in the Wild: Datasets, Results, and Meth- ods

    Jie Cai, Kangning Yang, Zhiyuan Li, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on Single Image Re- flection Removal in the Wild: Datasets, Results, and Meth- ods . InProceedings of the IEEE/CVF Conference on Com- puter Vision and Pattern Recognition (CVPR) Workshops,

  8. [8]

    Castelle, Tim Scott, R.W

    B. Castelle, Tim Scott, R.W. Brander, and R.J. McCarroll. Rip current types, circulation and hazard.Earth-Science Re- views, 163:1–21, 2016. 1

  9. [9]

    Oriented object detection for complex hydrodynamic features: A multi-platform rip current identification system.EGUsphere, 2026:1–29, 2026

    Albert Catal `a-Gonell, Jes ´us Soriano-Gonz ´alez, Elena S´anchez-Garc´ıa, Francisco Fabi ´an Criado-Sudau, Josep Oliver-Sans´o, Valentin Kozlov, Khadijeh Alibabaei, Jos´e Luis Lisani, and `Angels Fern ´andez-Mora. Oriented object detection for complex hydrodynamic features: A multi-platform rip current identification system.EGUsphere, 2026:1–29, 2026. 1

  10. [10]

    The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview

    Zheng Chen, Kai Liu, Jingkai Wang, Xianglong Yan, Jianze Li, Ziqing Zhang, Jue Gong, Jiatong Li, Lei Sun, Xi- aoyang Liu, Radu Timofte, Yulun Zhang, et al. The Fourth Challenge on Image Super-Resolution (×4) at NTIRE 2026: Benchmark Results and Method Overview . InProceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition (CVPR) W...

  11. [11]

    Masked-attention mask transformer for universal image segmentation

    Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexan- der Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. InProceed- ings of the IEEE/CVF conference on computer vision and pattern recognition, pages 1290–1299, 2022. 7

  12. [12]

    Explainable Rip Current Detection and Visualization with XAI EigenCAM

    Juno Choi, Muralidharan Rajendran, and Yong Cheol Suh. Explainable Rip Current Detection and Visualization with XAI EigenCAM. InProceedings of 26th International Con- ference on Advanced Communications Technology, pages 1– 6, 2024. 1

  13. [13]

    Box2rip: Instance segmentation of amorphous rip currents via box-supervised learning.IEEE Access, 2025

    Juno Choi, Muralidharan Rajendran, and Yong Cheol Suh. Box2rip: Instance segmentation of amorphous rip currents via box-supervised learning.IEEE Access, 2025. 1

  14. [14]

    Low Light Image Enhancement Challenge at NTIRE 2026

    George Ciubotariu, Sharif S M A, Abdur Rehman, Fayaz Ali Dharejo, Rizwan Ali Naqvi, Marcos Conde, Radu Tim- ofte, et al. Low Light Image Enhancement Challenge at NTIRE 2026 . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Work- shops, 2026. 3

  15. [15]

    High FPS Video Frame Interpolation Challenge at NTIRE 2026

    George Ciubotariu, Zhuyun Zhou, Yeying Jin, Zongwei Wu, Radu Timofte, et al. High FPS Video Frame Interpolation Challenge at NTIRE 2026 . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  16. [16]

    The cityscapes dataset for semantic urban scene understanding

    Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. InProceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3213–3223, 2016. 1

  17. [17]

    Randaugment: Practical automated data augmen- tation with a reduced search space

    Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmen- tation with a reduced search space. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops, pages 702–703, 2020. 5

  18. [18]

    A.H. Da F. Klein, G.G. Santana, F.L. Diehl, and J.T. De Menezes. Analysis of hazards associated with sea bathing: results of five years work in oceanic beaches of Santa Catarina state, southern Brazil.Journal of Coastal Re- search, pages 107–116, 2003. 1

  19. [19]

    Automated rip current detection with region based convolutional neural networks.Coastal Engineering, 166:103859, 2021

    Akila de Silva, Issei Mori, Gregory Dusek, James Davis, and Alex Pang. Automated rip current detection with region based convolutional neural networks.Coastal Engineering, 166:103859, 2021. 2, 3. 1

  20. [20]

    RipViz: Find- ing Rip Currents by Learning Pathline Behavior.IEEE Transactions on Visualization and Computer Graphics, 30 (7):3930–3944, 2024

    Akila de Silva, Mona Zhao, Donald Stewart, Fahim Hasan, Gregory Dusek, James Davis, and Alex Pang. RipViz: Find- ing Rip Currents by Learning Pathline Behavior.IEEE Transactions on Visualization and Computer Graphics, 30 (7):3930–3944, 2024

  21. [21]

    Rip Current Segmentation: A novel benchmark and YOLOv8 baseline results

    Andrei Dumitriu, Florin Tatui, Florin Miron, Radu Tudor Ionescu, and Radu Timofte. Rip Current Segmentation: A novel benchmark and YOLOv8 baseline results. InProceed- ings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pages 1261–1271, 2023

  22. [22]

    AIM 2025 rip current segmentation (ripseg) challenge report

    Andrei Dumitriu, Florin Miron, Florin Tatui, Radu Tudor Ionescu, Radu Timofte, Aakash Ralhan, Florin-Alexandru Vasluianu, et al. AIM 2025 rip current segmentation (ripseg) challenge report. InProceedings of the IEEE/CVF Interna- tional Conference on Computer Vision (ICCV) Workshops,

  23. [23]

    Ripvis: Rip currents video instance segmentation benchmark for beach monitor- ing and safety

    Andrei Dumitriu, Florin Tatui, Florin Miron, Aakash Ralhan, Radu Tudor Ionescu, and Radu Timofte. Ripvis: Rip currents video instance segmentation benchmark for beach monitor- ing and safety. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3427–3437, 2025. 2

  24. [24]

    NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Chal- lenge Report

    Andrei Dumitriu, Aakash Ralhan, Florin Miron, Florin Ta- tui, Radu Tudor Ionescu, Radu Timofte, et al. NTIRE 2026 Rip Current Detection and Segmentation (RipDetSeg) Chal- lenge Report . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Work- shops, 2026. 3

  25. [25]

    Conde, Zongwei Wu, Yeying Jin, Radu Timofte, et al

    Omar Elezabi, Marcos V . Conde, Zongwei Wu, Yeying Jin, Radu Timofte, et al. Photography Retouching Trans- fer, NTIRE 2026 Challenge: Report . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  26. [26]

    NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results

    Bochen Guan, Jinlong Li, Kangning Yang, Chuang Ke, Jie Cai, Florin Vasluianu, Radu Timofte, et al. NTIRE 2026 Challenge on End-to-End Financial Receipt Restoration and Reasoning from Degraded Images: Datasets, Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Work- shops, 2026. 3

  27. [27]

    NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3)

    Ya-nan Guan, Shaonan Zhang, Hang Guo, Yawen Wang, Xinying Fan, Jie Liang, Hui Zeng, Guanyi Qin, Lishen Qu, Tao Dai, Shu-Tao Xia, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: AI Flash Portrait (Track 3) . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  28. [28]

    NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild

    Aleksandr Gushchin, Khaled Abud, Ekaterina Shumitskaya, Artem Filippov, Georgii Bychkov, Sergey Lavrushkin, Mikhail Erofeev, Anastasia Antsiferova, Changsheng Chen, Shunquan Tan, Radu Timofte, Dmitriy Vatolin, et al. NTIRE 2026 Challenge on Robust AI-Generated Image Detection in the Wild . InProceedings of the IEEE/CVF Conference on Computer Vision and Pa...

  29. [29]

    Mask R-CNN

    Kaiming He, Georgia Gkioxari, Piotr Doll ´ar, and Ross Gir- shick. Mask R-CNN. InProceedings of the IEEE Interna- tional Conference on Computer Vision (ICCV), pages 2961– 2969, 2017. 1

  30. [30]

    Robust Deepfake De- tection, NTIRE 2026 Challenge: Report

    Benedikt Hopf, Radu Timofte, et al. Robust Deepfake De- tection, NTIRE 2026 Challenge: Report . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  31. [31]

    Ultralytics yolo11, 2024

    Glenn Jocher and Jing Qiu. Ultralytics yolo11, 2024. 5

  32. [32]

    Ultralytics yolov8, 2023

    Glenn Jocher, Ayush Chaurasia, and Jing Qiu. Ultralytics yolov8, 2023. 1, 6

  33. [33]

    NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge

    Aleksei Khalin, Egor Ershov, Artem Panshin, Sergey Ko- rchagin, Georgiy Lobarev, Arseniy Terekhin, Sofiia Doro- gova, Amir Shamsutdinov, Yasin Mamedov, Bakhtiyar Khalfin, Bogdan Sheludko, Emil Zilyaev, Nikola Bani ´c, Georgy Perevozchikov, Radu Timofte, et al. NTIRE 2026 Low-light Enhancement: Twilight Cowboy Challenge . In Proceedings of the IEEE/CVF Con...

  34. [34]

    RipFinder: Real-time rip current detection on mobile devices.Frontiers in Marine Science, 12:1549513, 2025

    Fahim Khan, Akila De Silva, Ashleigh Palinkas, Gregory Dusek, James Davis, and Alex Pang. RipFinder: Real-time rip current detection on mobile devices.Frontiers in Marine Science, 12:1549513, 2025. 2

  35. [35]

    Rip- scout: Realtime ml-assisted rip current detection and auto- mated data collection using uavs.IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing,

    Fahim Khan, Donald Stewart, Akila de Silva, Ashleigh Palinkas, Gregory Dusek, James Davis, and Alex Pang. Rip- scout: Realtime ml-assisted rip current detection and auto- mated data collection using uavs.IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing,

  36. [36]

    Adam: A Method for Stochastic Optimization

    Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization.arXiv preprint arXiv:1412.6980,

  37. [37]

    Segment anything

    Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer White- head, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. InProceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 4015–4026,

  38. [38]

    arXiv preprint arXiv:2506.17733 (2025)

    Mengqi Lei, Siqi Li, Yihong Wu, Han Hu, You Zhou, Xinhu Zheng, Guiguang Ding, Shaoyi Du, Zongze Wu, and Yue Gao. Yolov13: Real-time object detection with hypergraph-enhanced adaptive visual perception.arXiv preprint arXiv:2506.17733, 2025. 6

  39. [39]

    The First Challenge on Mobile Real-World Image Super- Resolution at NTIRE 2026: Benchmark Results and Method Overview

    Jiatong Li, Zheng Chen, Kai Liu, Jingkai Wang, Zihan Zhou, Xiaoyang Liu, Libo Zhu, Radu Timofte, Yulun Zhang, et al. The First Challenge on Mobile Real-World Image Super- Resolution at NTIRE 2026: Benchmark Results and Method Overview . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Work- shops, 2026. 3

  40. [40]

    Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection.Advances in neural information processing systems, 33:21002–21012, 2020

    Xiang Li, Wenhai Wang, Lijun Wu, Shuo Chen, Xiaolin Hu, Jun Li, Jinhui Tang, and Jian Yang. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection.Advances in neural information processing systems, 33:21002–21012, 2020. 8

  41. [41]

    NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Models: Datasets, Methods and Results

    Xin Li, Jiachao Gong, Xijun Wang, Shiyao Xiong, Bingchen Li, Suhang Yao, Chao Zhou, Zhibo Chen, Radu Timofte, et al. NTIRE 2026 Challenge on Short-form UGC Video Restoration in the Wild with Generative Models: Datasets, Methods and Results . InProceedings of the IEEE/CVF Con- ference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  42. [42]

    NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual-Focused Images: Methods and Results

    Xin Li, Yeying Jin, Suhang Yao, Beibei Lin, Zhaoxin Fan, Wending Yan, Xin Jin, Zongwei Wu, Bingchen Li, Peishu Shi, Yufei Yang, Yu Li, Zhibo Chen, Bihan Wen, Robby Tan, Radu Timofte, et al. NTIRE 2026 The Second Challenge on Day and Night Raindrop Removal for Dual-Focused Images: Methods and Results . InProceedings of the IEEE/CVF Con- ference on Computer...

  43. [43]

    Microsoft COCO: Common objects in context

    Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In Proceedings of 13th European conference on Computer Vi- sion (ECCV), pages 740–755. Springer, 2014. 1

  44. [44]

    The First Chal- lenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview

    Kai Liu, Haoyang Yue, Zeli Lin, Zheng Chen, Jingkai Wang, Jue Gong, Radu Timofte, Yulun Zhang, et al. The First Chal- lenge on Remote Sensing Infrared Image Super-Resolution at NTIRE 2026: Benchmark Results and Method Overview . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  45. [45]

    Conde, et al

    Shuhong Liu, Ziteng Cui, Chenyu Bao, Xuangeng Chu, Lin Gu, Bin Ren, Radu Timofte, Marcos V . Conde, et al. 3D Restoration and Reconstruction in Adverse Conditions: Re- alX3D Challenge Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

  46. [46]

    NTIRE 2026 X- AIGC Quality Assessment Challenge: Methods and Results

    Xiaohong Liu, Xiongkuo Min, Guangtao Zhai, Qiang Hu, Jiezhang Cao, Yu Zhou, Wei Sun, Farong Wen, Zitong Xu, Yingjie Zhou, Huiyu Duan, Lu Liu, Jiarui Wang, Siqi Luo, Chunyi Li, Li Xu, Zicheng Zhang, Yue Shi, Yubo Wang, Minghong Zhang, Chunchao Guo, Zhichao Hu, Mingtao Chen, Xiele Wu, Xin Ma, Zhaohe Lv, Yuanhao Xue, Jiaqi Wang, Xinxing Sha, Radu Timofte, et...

  47. [47]

    A deep learning-based pipeline for detecting rip currents from satellite imagery.Re- mote Sensing, 18(2):368, 2026

    Yuli Liu, Yifei Yang, Xiang Li, Fan Yang, Huarong Xie, Wei Wang, and Changming Dong. A deep learning-based pipeline for detecting rip currents from satellite imagery.Re- mote Sensing, 18(2):368, 2026. 2

  48. [48]

    Decoupled Weight Decay Regularization

    Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization.arXiv preprint arXiv:1711.05101, 2017. 5

  49. [49]

    James B. Lushine. A study of rip current drownings and related weather factors.National Weather Digest, 16(3):13– 19, 1991. 1

  50. [50]

    Machine learning appli- cations in detecting rip channels from images.Applied Soft Computing, 78:84–93, 2019

    Corey Maryan, Md Tamjidul Hoque, Christopher Michael, Elias Ioup, and Mahdi Abdelguerfi. Machine learning appli- cations in detecting rip channels from images.Applied Soft Computing, 78:84–93, 2019. 2

  51. [51]

    McGill and Jean T

    Sean P. McGill and Jean T. Ellis. Rip current and channel detection using surfcams and optical flow.Shore & Beach, 90(1):50, 2022

  52. [52]

    Flow-based rip current detection and visualiza- tion.IEEE Access, 10:6483–6495, 2022

    Issei Mori, Akila de Silva, Gregory Dusek, James Davis, and Alex Pang. Flow-based rip current detection and visualiza- tion.IEEE Access, 10:6483–6495, 2022. 2

  53. [53]

    NTIRE 2026 Challenge on Video Saliency Predic- tion: Methods and Results

    Andrey Moskalenko, Alexey Bryncev, Ivan Kosmynin, Kira Shilovskaya, Mikhail Erofeev, Dmitry Vatolin, Radu Timo- fte, et al. NTIRE 2026 Challenge on Video Saliency Predic- tion: Methods and Results . InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026. 3

54. [54] National Oceanic and Atmospheric Administration (NOAA). What is a rip current? https://oceanservice.noaa.gov/facts/ripcurrent.html, 2023. Accessed: March, 2023.

55. [55] Hyunhee Park, Eunpil Park, Sangmin Lee, Radu Timofte, et al. NTIRE 2026 Challenge on Efficient Burst HDR and Restoration: Datasets, Methods, and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

56. [56] Georgy Perevozchikov, Daniil Vladimirov, Radu Timofte, et al. NTIRE 2026 Challenge on Learned Smartphone ISP with Unpaired Data: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

57. [57] Shweta Philip and Alex Pang. Detecting and Visualizing Rip Current Using Optical Flow. In Proceedings of the Eurographics / IEEE VGTC Conference on Visualization: Short Papers, pages 19–23, 2016.

58. [58] Madina Hayva Putri, Umar Zaky, and Bayu Argadyanto Prabawa. Optimizing data augmentation parameters in YOLOv11 for enhanced rip current detection on small datasets from Depok-Parangtritis coastline. Jurnal Teknik Informatika (Jutif), 6(5):3938–3957, 2025.

59. [59] Shenyang Qian, Mitchell Harley, Imran Razzak, and Yang Song. RipGAN: A GAN-based rip current data augmentation method. In Proceedings of the IEEE International Conference on Robotics and Automation, 2025.

60. [60] Guanyi Qin, Jie Liang, Bingbing Zhang, Lishen Qu, Ya-nan Guan, Hui Zeng, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Professional Image Quality Assessment (Track 1). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

61. [61] Xingyu Qiu, Yuqian Fu, Jiawei Geng, Bin Ren, Jiancheng Pan, Zongwei Wu, Hao Tang, Yanwei Fu, Radu Timofte, Nicu Sebe, Mohamed Elhoseiny, et al. The Second Challenge on Cross-Domain Few-Shot Object Detection at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

62. [62] Lishen Qu, Yao Liu, Jie Liang, Hui Zeng, Wen Dai, Ya-nan Guan, Guanyi Qin, Shihao Zhou, Jufeng Yang, Lei Zhang, Radu Timofte, et al. NTIRE 2026 The 3rd Restore Any Image Model (RAIM) Challenge: Multi-Exposure Image Fusion in Dynamic Scenes (Track 2). In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

63. [63] Neelesh Rampal, Tom Shand, Adam Wooler, and Christo Rautenbach. Interpretable deep learning applied to rip current detection and localization. Remote Sensing, 14(23):6048, 2022.

64. [64] Ashraf Haroon Rashid, Imran Razzak, Muhammad Tanveer, and Antonio Robles-Kelly. RipNet: A lightweight one-class deep neural network for the identification of rip currents. In Proceedings of the 27th International Conference on Neural Information Processing, pages 172–179, 2020.

65. [65] Ashraf Haroon Rashid, Imran Razzak, Muhammad Tanveer, and Antonio Robles-Kelly. RipDet: A fast and lightweight deep neural network for rip currents detection. In Proceedings of the 2021 International Joint Conference on Neural Networks, pages 1–6, 2021.

66. [66] Ashraf Haroon Rashid, Imran Razzak, M. Tanveer, and Michael Hobbs. Reducing rip current drowning: An improved residual based lightweight deep architecture for rip detection. ISA Transactions, 132:199–207, 2023.

67. [67] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloe Rolland, Laura Gustafson, et al. SAM 2: Segment Anything in Images and Videos. arXiv preprint arXiv:2408.00714, 2024.

68. [68] Bin Ren, Hang Guo, Yan Shu, Jiaqi Ma, Ziteng Cui, Shuhong Liu, Guofeng Mei, Lei Sun, Zongwei Wu, Fahad Shahbaz Khan, Salman Khan, Radu Timofte, Yawei Li, et al. The Eleventh NTIRE 2026 Efficient Super-Resolution Challenge Report. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

69. [69] Tim Seizinger, Florin-Alexandru Vasluianu, Marcos V. Conde, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. The First Controllable Bokeh Rendering Challenge at NTIRE 2026. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

70. [70] Oriane Siméoni, Huy V. Vo, Maximilian Seitzer, Federico Baldassarre, Maxime Oquab, Cijo Jose, Vasil Khalidov, Marc Szafraniec, Seungeun Yi, Michaël Ramamonjisoa, et al. DINOv3. arXiv preprint arXiv:2508.10104, 2025.

71. [71] Roman Solovyev, Weimin Wang, and Tatiana Gabruseva. Weighted boxes fusion: Ensembling boxes from different object detection models. Image and Vision Computing, 107:104117, 2021.

72. [72] Anchen Sun and Kaiqi Yang. Rip current detection in nearshore areas through UAV video analysis with almost local-isometric embedding techniques on sphere. arXiv preprint arXiv:2304.11783, 2023.

73. [73] Lei Sun, Hang Guo, Bin Ren, Shaolin Su, Xian Wang, Danda Pani Paudel, Luc Van Gool, Radu Timofte, Yawei Li, et al. The Third Challenge on Image Denoising at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

74. [74] Lei Sun, Weilun Li, Xian Wang, Zhendong Li, Letian Shi, Dannong Xu, Deheng Zhang, Mengshun Hu, Shuang Guo, Shaolin Su, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. The Second Challenge on Event-Based Image Deblurring at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

75. [75] Lei Sun, Xiaolong Qian, Qi Jiang, Xian Wang, Yao Gao, Kailun Yang, Kaiwei Wang, Radu Timofte, Danda Pani Paudel, Luc Van Gool, et al. NTIRE 2026 The First Challenge on Blind Computational Aberration Correction: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

76. [76] Florin-Alexandru Vasluianu, Tim Seizinger, Jeffrey Chen, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Learning-Based Ambient Lighting Normalization: NTIRE 2026 Challenge Results and Findings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

77. [77] Florin-Alexandru Vasluianu, Tim Seizinger, Zhuyun Zhou, Zongwei Wu, Radu Timofte, et al. Advances in Single-Image Shadow Removal: Results from the NTIRE 2026 Challenge. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

78. [78] Meng Wan, Qi Su, Zhixin Xia, Kanglin Chen, Jue Wang, Tiantian Liu, Rongqiang Cao, Hui Cui, Peng Shi, Yangang Wang, et al. Ripalert: A future-frame-aware framework for rip current forecasting and early alerting. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 39368–39377, 2026.

79. [79] Jingkai Wang, Jue Gong, Zheng Chen, Kai Liu, Jiatong Li, Yulun Zhang, Radu Timofte, et al. The Second Challenge on Real-World Face Restoration at NTIRE 2026: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.

80. [80] Longguang Wang, Yulan Guo, Yingqian Wang, Juncheng Li, Sida Peng, Ye Zhang, Radu Timofte, Minglin Chen, Yi Wang, Qibin Hu, Wenjie Lei, et al. NTIRE 2026 Challenge on 3D Content Super-Resolution: Methods and Results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2026.
