CAMotion: A High-Quality Benchmark for Camouflaged Moving Object Detection in the Wild
Recognition: 2 Lean theorem links
Pith reviewed 2026-05-10 18:34 UTC · model grok-4.3
The pith
A new benchmark dataset for camouflaged moving object detection in video covers diverse species and challenging conditions.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We construct CAMotion, a high-quality benchmark that covers a wide range of species for camouflaged moving object detection in the wild. CAMotion comprises various sequences with multiple challenging attributes such as uncertain edge, occlusion, motion blur, and shape complexity. The sequence annotation details and statistical distribution are presented from various perspectives, allowing CAMotion to provide in-depth analyses of the camouflaged object's motion characteristics in different challenging scenarios. Additionally, we conduct a comprehensive evaluation of existing SOTA models on CAMotion and discuss the major challenges in the VCOD task.
What carries the argument
The CAMotion benchmark itself, a set of annotated video sequences designed to capture camouflaged moving objects under varied real-world conditions and to supply statistical views of their motion attributes.
If this is right
- Algorithms can be trained and tested on a larger and more varied collection of camouflaged video sequences than was previously available.
- Motion characteristics of camouflaged objects can be studied across multiple species and under specific challenges such as blur or occlusion.
- Weaknesses in current models become easier to identify through standardized evaluation on the supplied sequences and statistics.
- Further progress in video camouflaged object detection becomes possible once researchers have access to the released benchmark and its annotations.
Where Pith is reading between the lines
- The availability of species-level diversity may allow future work to examine whether detection difficulty varies systematically by animal type or habitat.
- The provided attribute annotations could support the creation of targeted test splits that isolate individual challenges such as motion blur.
- If the benchmark is widely adopted, it may reduce reliance on synthetic or narrowly scoped data in related video detection studies.
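One of the possibilities above, targeted test splits that isolate individual challenges, can be sketched directly. The annotation format below is assumed for illustration, not CAMotion's actual release format; only the attribute abbreviations (MB, OC, UE, SC) come from the paper's description.

```python
# Hypothetical sketch: building per-attribute evaluation splits from
# sequence-level challenge tags. The dict layout and sequence names here
# are illustrative assumptions, not CAMotion's published schema.
from collections import defaultdict

# Each sequence carries a set of challenge attributes, e.g. motion blur (MB),
# occlusion (OC), uncertain edge (UE), shape complexity (SC).
sequences = [
    {"name": "clownfish_01", "attributes": {"MB", "SC"}},
    {"name": "sengi_03", "attributes": {"OC", "UE"}},
    {"name": "warbler_02", "attributes": {"MB", "OC"}},
]

def split_by_attribute(seqs):
    """Group sequence names under every attribute they are tagged with."""
    splits = defaultdict(list)
    for seq in seqs:
        for attr in seq["attributes"]:
            splits[attr].append(seq["name"])
    return dict(splits)

splits = split_by_attribute(sequences)
# Sequences usable as a motion-blur-only test split:
print(sorted(splits["MB"]))  # → ['clownfish_01', 'warbler_02']
```

A sequence with several tags lands in several splits, so per-attribute scores on such splits are not fully independent; that caveat applies to any attribute-level breakdown built this way.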
Load-bearing premise
The new video sequences and their annotations possess enough quality and variety to support deeper analyses and more informative algorithm tests than earlier datasets provide.
What would settle it
A direct comparison showing that state-of-the-art models produce essentially the same accuracy rankings and error patterns on CAMotion as they do on prior smaller datasets would indicate that the new benchmark does not add the intended analytical power.
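The comparison described above can be made concrete with a rank-correlation check. The model scores below are invented placeholders, not reported results; a Spearman value near 1.0 would mean CAMotion merely reproduces the old ranking, while a low value would mean it discriminates differently.

```python
# Minimal sketch of the proposed check: compare how the same models rank on
# CAMotion versus a prior dataset. Scores are illustrative assumptions.
def spearman(xs, ys):
    """Spearman rank correlation for score lists without ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical S-alpha scores for the same four models on each dataset.
scores_prior = [0.80, 0.75, 0.72, 0.70]
scores_camotion = [0.66, 0.70, 0.58, 0.62]

print(round(spearman(scores_prior, scores_camotion), 3))  # → 0.6
```

With ties or many models, a library routine such as `scipy.stats.spearmanr` would be the practical choice; the hand-rolled version above only shows the quantity being computed.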
Original abstract
Discovering camouflaged objects is a challenging task in computer vision due to the high similarity between camouflaged objects and their surroundings. While the problem of camouflaged object detection over sequential video frames has received increasing attention, the scale and diversity of existing video camouflaged object detection (VCOD) datasets are greatly limited, which hinders deeper analysis and broader evaluation of recent deep learning-based algorithms with data-hungry training strategies. To break this bottleneck, in this paper we construct CAMotion, a high-quality benchmark that covers a wide range of species for camouflaged moving object detection in the wild. CAMotion comprises various sequences with multiple challenging attributes such as uncertain edge, occlusion, motion blur, and shape complexity. The sequence annotation details and statistical distribution are presented from various perspectives, allowing CAMotion to provide in-depth analyses of the camouflaged object's motion characteristics in different challenging scenarios. Additionally, we conduct a comprehensive evaluation of existing SOTA models on CAMotion and discuss the major challenges in the VCOD task. The benchmark is available at https://www.camotion.focuslab.net.cn; we hope that our CAMotion can lead to further advancements in the research community.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces CAMotion, a new benchmark dataset for video camouflaged object detection (VCOD) comprising diverse video sequences across a wide range of species in the wild. It incorporates multiple challenging attributes such as uncertain edges, occlusion, motion blur, and shape complexity, provides detailed sequence annotations and statistical distributions analyzed from various perspectives to support in-depth motion characteristic studies, and includes a comprehensive evaluation of existing state-of-the-art models along with discussion of major VCOD challenges. The benchmark is made publicly available.
Significance. If the claims regarding dataset quality, diversity, and evaluation hold, this contribution is significant because existing VCOD datasets are limited in scale and diversity, impeding deeper analyses and broader testing of data-intensive deep learning methods. The provision of attribute-annotated sequences, multi-perspective statistics on camouflaged object motion, and SOTA model benchmarks directly addresses this gap and can drive further progress in the field. The public release with a dedicated website is a clear strength that supports reproducibility and community use.
Minor comments (2)
- [Abstract] The description of the benchmark's scale (e.g., number of sequences, total frames, or species covered) is stated only qualitatively; adding one or two concrete figures here would immediately convey the improvement over prior VCOD datasets.
- [Experiments] The evaluation section would benefit from explicit cross-references between the attribute statistics (e.g., frequency of occlusion or motion blur) and the per-attribute performance breakdowns of the tested models.
Simulated Author's Rebuttal
We thank the referee for the positive summary of our work and the recommendation for minor revision. We are glad that the significance of the CAMotion benchmark for video camouflaged object detection is recognized, particularly regarding its scale, diversity, attribute annotations, and public availability.
Circularity Check
No significant circularity
Full rationale
The paper introduces CAMotion, a new video dataset for camouflaged moving object detection. Its central contribution is the external collection, annotation, and statistical characterization of real-world sequences, with no equations, derivations, fitted parameters, or predictive modeling present. No self-definitional steps, fitted-input predictions, or load-bearing self-citation chains exist; the benchmark's value rests on independent data curation rather than any internal reduction to its own inputs. This is a standard non-circular dataset release.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: existing VCOD datasets are greatly limited in scale and diversity, hindering deeper analysis of deep learning algorithms.
Lean theorems connected to this paper
- IndisputableMonolith/Foundation/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · tag: unclear
  The relation between the paper passage and the cited Recognition theorem is unclear. Passage: "We construct CAMotion, a high-quality benchmark that covers a wide range of species for camouflaged moving object detection in the wild. CAMotion comprises various sequences with multiple challenging attributes such as uncertain edge, occlusion, motion blur, and shape complexity."
- IndisputableMonolith/Foundation/AlexanderDuality.lean · alexander_duality_circle_linking · tag: unclear
  The relation between the paper passage and the cited Recognition theorem is unclear. Passage: "We conduct a comprehensive evaluation of existing SOTA models on CAMotion, and discuss the major challenges in the VCOD task."
What do these tags mean?
- matches · The paper's claim is directly supported by a theorem in the formal canon.
- supports · The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends · The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses · The paper appears to rely on the theorem as machinery.
- contradicts · The paper's claim conflicts with a theorem or certificate in the canon.
- unclear · Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] …explicitly handles motion cues via a frozen pre-trained optical flow foundation model. VSCode [51] and VSCode-v2 [52] propose generalist models for multimodal binary segmentation tasks, taking RGB image and optical flow as input to perform frame-by-frame camouflaged object discovery across videos. With the emergence of visual foundation models, severa…
- [2] …further leverages the strong generalizability in natural videos of SAM2 [57] to address the VCOD task. However, due to the limitation posed by the low diversity of MoCA-Mask, most VCOD methods require pre-training on image datasets, e.g., COD10K, and more importantly, this constraint impedes the further advancement of this task…
- [3] …introduce an appearance-based refinement method that leverages temporal consistency in video streams to correct inaccurate flow-based proposals. [65] leverages SAM to capture motion cues from optical flow, and uses the flow as input prompts. Besides, [66] takes as input the volume of consecutive optical flow fields, and delivers a volume of segments of…
- [4] Due to the variations of network architecture, input resolutions, modalities, as well as pre-processing techniques, we make the best effort to ensure a fair comparison on both datasets. Regarding CAMotion, we surprisingly observe that the image-level COD method HGINet [82] achieves SOTA on…
- [5] …and Depth Anything V2 [87] to estimate optical flow and depth map, respectively, with the results visualized in Fig. 7. In the cases of chequered sengi, clownfish and willow warbler, we observe that the optical flow provides informative partial camouflage cues in moving object scenarios, while the depth map can also reveal camouflaged object contours to some…
- [6] …and EMIP [50] in eight challenging attributes in terms of Sα and mIoU, see Fig. 8. Notably, we observe that the sequences involving small object (SO), uncertainty edge (UE), occlusion (OC) and multiple objects (MO) are significantly more difficult. In contrast, sequences characterized by shape complexity and motion blur tend to yield relatively better pe…
- [7] Trung-Nghia Le, Tam V. Nguyen, Zhongliang Nie, Minh-Triet Tran, and Akihiro Sugimoto. Anabranch network for camouflaged object segmentation. Computer Vision and Image Understanding, 184:45–56, 2019.
- [8] Przemysław Skurowski, Hassan Abdulameer, Jakub Błaszczyk, Tomasz Depta, Adam Kornacki, and Przemysław Kozieł. Animal camouflage analysis: Chameleon database. 2018.
- [9] Deng-Ping Fan, Ge-Peng Ji, Guolei Sun, Ming-Ming Cheng, Jianbing Shen, and Ling Shao. Camouflaged object detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2774–2784, 2020.
- [10] Yunqiu Lv, Jing Zhang, Yuchao Dai, Aixuan Li, Bowen Liu, Nick Barnes, and Deng-Ping Fan. Simultaneously localize, segment and rank the camouflaged objects. In IEEE Conference on Computer Vision and Pattern Recognition, pages 11591–11601, 2021.
- [11] Pia Bideau and Erik G. Learned-Miller. It's moving! A probabilistic model for causal motion segmentation in moving camera videos. In European Conference on Computer Vision, pages 433–449, 2016.
- [12] Xuelian Cheng, Huan Xiong, Deng-Ping Fan, Yiran Zhong, Mehrtash Harandi, Tom Drummond, and Zongyuan Ge. Implicit motion handling for video camouflaged object detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 13854–13863, 2022.
- [13] Hala Lamdouar, Charig Yang, Weidi Xie, and Andrew Zisserman. Betrayed by motion: Camouflaged object discovery via motion segmentation. In Asian Conference on Computer Vision, pages 488–503, 2020.
- [14] Trung-Nghia Le, Yubo Cao, Tan-Cong Nguyen, Minh-Quan Le, Khanh-Duy Nguyen, Thanh-Toan Do, Minh-Triet Tran, and Tam V. Nguyen. Camouflaged instance segmentation in-the-wild: Dataset, method, and benchmark suite. IEEE Transactions on Image Processing, 31:287–300, 2022.
- [15] Yujia Sun, Geng Chen, Tao Zhou, Yi Zhang, and Nian Liu. Context-aware cross-level fusion network for camouflaged object detection. In International Joint Conference on Artificial Intelligence, pages 1025–1031, 2021.
- [16] Miao Zhang, Shuang Xu, Yongri Piao, Dongxiang Shi, Shusen Lin, and Huchuan Lu. PreyNet: Preying on camouflaged objects. In Proceedings of the ACM International Conference on Multimedia, pages 5323–5332, 2022.
- [17] Youwei Pang, Xiaoqi Zhao, Tian-Zhu Xiang, Lihe Zhang, and Huchuan Lu. ZoomNeXt: A unified collaborative pyramid network for camouflaged object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 46:9205–9220, 2024.
- [18] Bowen Yin, Xuying Zhang, Li Liu, Ming-Ming Cheng, Yongxiang Liu, and Qibin Hou. Camouflaged object detection with adaptive partition and background retrieval. International Journal of Computer Vision, 133:4877–4893, 2025.
- [19] Youwei Pang, Xiaoqi Zhao, Tian-Zhu Xiang, Lihe Zhang, and Huchuan Lu. Zoom in and out: A mixed-scale triplet network for camouflaged object detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2150–2160, 2022.
- [20] Aixuan Li, Jing Zhang, Yunqiu Lv, Bowen Liu, Tong Zhang, and Yuchao Dai. Uncertainty-aware joint salient object and camouflaged object detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 10071–10081, 2021.
- [21] Qiang Zhai, Xin Li, Fan Yang, Chenglizhao Chen, Hong Cheng, and Deng-Ping Fan. Mutual graph learning for camouflaged object detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 12997–13007, 2021.
- [22] Peng Li, Xuefeng Yan, Hongwei Zhu, Mingqiang Wei, Xiao-Ping Zhang, and Jing Qin. FindNet: Can you find me? Boundary-and-texture enhancement network for camouflaged object detection. IEEE Transactions on Image Processing, 31:6396–6411, 2022.
- [23] Chunming He, Kai Li, Yachao Zhang, Longxiang Tang, Yulun Zhang, Zhenhua Guo, and Xiu Li. Camouflaged object detection with feature decomposition and edge reconstruction. In IEEE Conference on Computer Vision and Pattern Recognition, pages 22046–22055, 2023.
- [24] Chunming He, Kai Li, Yachao Zhang, Yulun Zhang, Zhenhua Guo, Xiu Li, Martin Danelljan, and Fisher Yu. Strategic preys make acute predators: Enhancing camouflaged object detectors by generating camouflaged objects. In International Conference on Learning Representations, 2024.
- [25] Wenda Zhao, Shigeng Xie, Fan Zhao, You He, and Huchuan Lu. Nowhere to disguise: Spot camouflaged objects via saliency attribute transfer. IEEE Transactions on Image Processing, 32:3108–3120, 2023.
- [26] Tao Zhou, Yi Zhou, Chen Gong, Jian Yang, and Yu Zhang. Feature aggregation and propagation network for camouflaged object detection. IEEE Transactions on Image Processing, 31:7036–7047, 2022.
- [27] Chao Hao, Zitong Yu, Xin Liu, Jun Xu, Huanjing Yue, and Jing-Yu Yang. A simple yet effective network based on vision transformer for camouflaged object and salient object detection. IEEE Transactions on Image Processing, 34:608–622, 2025.
- [28] Sheng Ye, Xin Chen, Yan Zhang, Xianming Lin, and Liujuan Cao. ESCNet: Edge-semantic collaborative network for camouflaged object detection. In IEEE International Conference on Computer Vision, pages 20053–20063, 2025.
- [29] Yi Zhang, Jing Zhang, Wassim Hamidouche, and Olivier Déforges. Predictive uncertainty estimation for camouflaged object detection. IEEE Transactions on Image Processing, 32:3580–3591, 2023.
- [30] Yijie Zhong, Bo Li, Lv Tang, Senyun Kuang, Shuang Wu, and Shouhong Ding. Detecting camouflaged object in frequency domain. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4494–4503, 2022.
- [31] Yanguang Sun, Chunyan Xu, Jian Yang, Hanyu Xuan, and Lei Luo. Frequency-spatial entanglement learning for camouflaged object detection. In European Conference on Computer Vision, pages 343–360, 2024.
- [32] Haolin Ji, Fengying Xie, Linpeng Pan, Yushan Zheng, and Zhenwei Shi. HuntNet: Homomorphic unified nexus topology for camouflaged object detection. IEEE Transactions on Image Processing, 34:6068–6082, 2025.
- [33] Zongwei Wu, Danda Pani Paudel, Deng-Ping Fan, Jingjing Wang, Shuo Wang, Cédric Demonceaux, Radu Timofte, and Luc Van Gool. Source-free depth for object pop-out. In IEEE International Conference on Computer Vision, pages 1032–1042, 2023.
- [34] Jiaming Liu, Linghe Kong, and Guihai Chen. Improving SAM for camouflaged object detection via dual stream adapters. In IEEE International Conference on Computer Vision, pages 21906–21916, 2025.
- [35] Jianwei Zhao, Xin Li, Fan Yang, Qiang Zhai, Ao Luo, Zicheng Jiao, and Hong Cheng. FocusDiffuser: Perceiving local disparities for camouflaged object detection. In European Conference on Computer Vision, pages 181–198, 2024.
- [36] Ke Sun, Zhongxi Chen, Xianming Lin, Xiaoshuai Sun, Hong Liu, and Rongrong Ji. Conditional diffusion models for camouflaged and salient object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 47:2833–2848, 2025.
- [37] Ze Song, Xudong Kang, Xiaohui Wei, Jinyang Liu, Zheng Lin, and Shutao Li. Continuous feature representation for camouflaged object detection. IEEE Transactions on Image Processing, 34:5672–5685, 2025.
- [38] Yanguang Sun, Jiawei Lian, Jian Yang, and Lei Luo. Controllable-LPMoE: Adapting to challenging object segmentation via dynamic local priors from mixture-of-experts. In IEEE International Conference on Computer Vision, pages 22327–22337, 2025.
- [39] Ruozhen He, Qihua Dong, Jiaying Lin, and Rynson W. H. Lau. Weakly-supervised camouflaged object detection with scribble annotations. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 781–789, 2023.
- [40] Chunming He, Kai Li, Yachao Zhang, Guoxia Xu, Longxiang Tang, Yulun Zhang, Zhenhua Guo, and Xiu Li. Weakly-supervised concealed object segmentation with SAM-based pseudo labeling and multi-scale feature grouping. In Advances in Neural Information Processing Systems, 2023.
- [41] Huafeng Chen, Pengxu Wei, Guangqian Guo, and Shan Gao. SAM-COD: SAM-guided unified framework for weakly-supervised camouflaged object detection. In European Conference on Computer Vision, pages 315–331, 2024.
- [42] Chunming He, Kai Li, Yachao Zhang, Ziyun Yang, Youwei Pang, Longxiang Tang, Chengyu Fang, Yulun Zhang, Linghe Kong, Xiu Li, and Sina Farsiu. Segment concealed objects with incomplete supervision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 47:7832–7851, 2025.
- [43] Huafeng Chen, Dian Shao, Guangqian Guo, and Shan Gao. Just a hint: Point-supervised camouflaged object detection. In European Conference on Computer Vision, pages 332–348, 2024.
- [44] Jin Zhang, Ruiheng Zhang, Yanjiao Shi, Zhe Cao, Nian Liu, and Fahad Shahbaz Khan. Learning camouflaged object detection from noisy pseudo label. In European Conference on Computer Vision, pages 158–174, 2024.
- [45] Xunfa Lai, Zhiyu Yang, Jie Hu, Shengchuan Zhang, Liujuan Cao, Guannan Jiang, Zhiyu Wang, Songan Zhang, and Rongrong Ji. CamoTeacher: Dual-rotation consistency learning for semi-supervised camouflaged object detection. In European Conference on Computer Vision, pages 438–455, 2024.
- [46] Weiqi Yan, Lvhai Chen, Huaijia Kou, Shengchuan Zhang, Yan Zhang, and Liujuan Cao. UCOD-DPL: Unsupervised camouflaged object detection via dynamic pseudo-label learning. In IEEE Conference on Computer Vision and Pattern Recognition, pages 30365–30375, 2025.
- [47] Ji Du, Fangwei Hao, Mingyang Yu, Desheng Kong, Jiesheng Wu, Bin Wang, Jing Xu, and Ping Li. Shift the lens: Environment-aware unsupervised camouflaged object detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 19271–19282, 2025.
- [48] Ji Du, Xin Wang, Fangwei Hao, Mingyang Yu, Chunyuan Chen, Jiesheng Wu, Bin Wang, Jing Xu, and Ping Li. Beyond single images: Retrieval self-augmented unsupervised camouflaged object detection. In IEEE International Conference on Computer Vision, pages 22131–22142, 2025.
- [49] Haoran Li, Chun-Mei Feng, Yong Xu, Tao Zhou, Lina Yao, and Xiaojun Chang. Zero-shot camouflaged object detection. IEEE Transactions on Image Processing, 32:5126–5137, 2023.
- [50] Ji Du, Jiesheng Wu, Desheng Kong, Weiyun Liang, Fangwei Hao, Jing Xu, Bin Wang, Guiling Wang, and Ping Li. UPGen: Unleashing potential of foundation models for training-free camouflage detection via generative models. IEEE Transactions on Image Processing, 34:5400–5413, 2025.
- [51] Cheng Lei, Jie Fan, Xinran Li, Tian-Zhu Xiang, Ao Li, Ce Zhu, and Le Zhang. Towards real zero-shot camouflaged object segmentation without camouflaged annotations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 47:11990–12004, 2025.
- [52] Charig Yang, Hala Lamdouar, Erika Lu, Andrew Zisserman, and Weidi Xie. Self-supervised video object segmentation by motion grouping. In IEEE International Conference on Computer Vision, pages 7157–7168, 2021.
- [53] Junyu Xie, Weidi Xie, and Andrew Zisserman. Segmenting moving objects via an object-centric layered representation. In Advances in Neural Information Processing Systems, 2022.
- [54] Etienne Meunier, Anaïs Badoual, and Patrick Bouthemy. EM-driven unsupervised learning for efficient motion segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45:4462–4473, 2023.
- [55] Pia Bideau, Erik G. Learned-Miller, Cordelia Schmid, and Karteek Alahari. The right spin: Learning object motion from rotation-compensated flow fields. International Journal of Computer Vision, 132:40–55, 2024.
- [56] Xin Zhang, Tao Xiao, Ge-Peng Ji, Xuan Wu, Keren Fu, and Qijun Zhao. Explicit motion handling and interactive prompting for video camouflaged object detection. IEEE Transactions on Image Processing, 34:2853–2866, 2025.
- [57] Ziyang Luo, Nian Liu, Wangbo Zhao, Xuguang Yang, Dingwen Zhang, Deng-Ping Fan, Fahad Khan, and Junwei Han. VSCode: General visual salient and camouflaged object detection with 2D prompt learning. In IEEE Conference on Computer Vision and Pattern Recognition, pages 17169–17180, 2024.
- [58] Ziyang Luo, Nian Liu, Xuguang Yang, Dingwen Zhang, Deng-Ping Fan, Fahad Shahbaz Khan, and Junwei Han. VSCode-v2: Dynamic prompt learning for general visual salient and camouflaged object detection with two-stage optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 48:3137…
- [59] Wenjun Hui, Zhenfeng Zhu, Shuai Zheng, and Yao Zhao. Endow SAM with keen eyes: Temporal-spatial prompt learning for video camouflaged object detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 19058–19067, 2024.
- [60] Muhammad Nawfal Meeran, Gokul Adethya T, and Bhanu Pratyush Mantha. SAM-PM: Enhancing video camouflaged object detection using spatio-temporal attention. In IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1857–1866, 2024.
- [61] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloé Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross B. Girshick. Segment anything. In IEEE International Conference on Computer Vision, pages 3992–4003, 2023.
- [62] Yuli Zhou, Yawei Li, Yuqian Fu, Luca Benini, Ender Konukoglu, and Guolei Sun. CamSAM2: Segment anything accurately in camouflaged videos. In Advances in Neural Information Processing Systems, 2025.
- [63] Nikhila Ravi, Valentin Gabeur, Yuan-Ting Hu, Ronghang Hu, Chaitanya Ryali, Tengyu Ma, Haitham Khedr, Roman Rädle, Chloé Rolland, Laura Gustafson, Eric Mintun, Junting Pan, Kalyan Vasudev Alwala, Nicolas Carion, Chao-Yuan Wu, Ross B. Girshick, Piotr Dollár, and Christoph Feichtenhofer. SAM 2: Segment anything in images and videos. In International Confer…, 2025.
- [64] Anestis Papazoglou and Vittorio Ferrari. Fast object segmentation in unconstrained video. In IEEE International Conference on Computer Vision, pages 1777–1784, 2013.
- [65] Ping Hu, Gang Wang, Xiangfei Kong, Jason Kuen, and Yap-Peng Tan. Motion-guided cascaded refinement network for video object segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42:1957–1967, 2020.
- [66] Muhammad Faisal, Ijaz Akhter, Mohsen Ali, and Richard I. Hartley. EpO-Net: Exploiting geometric constraints on dense trajectories for motion saliency. In IEEE Winter Conference on Applications of Computer Vision, pages 1873–1882, 2020.
- [67] Liyuan Pan, Yuchao Dai, Miaomiao Liu, Fatih Porikli, and Quan Pan. Joint stereo video deblurring, scene flow estimation and moving object segmentation. IEEE Transactions on Image Processing, 29:1748–1761, 2020.
- [68] Tao Zhuo, Zhiyong Cheng, Peng Zhang, Yongkang Wong, and Mohan S. Kankanhalli. Unsupervised online video object segmentation with motion property understanding. IEEE Transactions on Image Processing, 29:237–249, 2020.
- [69] Jianhua Yang, Yan Huang, Kai Niu, Linjiang Huang, Zhanyu Ma, and Liang Wang. Actor and action modular network for text-based video segmentation. IEEE Transactions on Image Processing, 31:4474–4489, 2022.
- [70] Junyu Xie, Weidi Xie, and Andrew Zisserman. Appearance-based refinement for object-centric motion segmentation. In European Conference on Computer Vision, pages 238–256, 2024.
- [71] Junyu Xie, Charig Yang, Weidi Xie, and Andrew Zisserman. Moving object segmentation: All you need is SAM (and flow). In Asian Conference on Computer Vision, pages 291–308, 2024.
- [72] Etienne Meunier and Patrick Bouthemy. Segmenting the motion components of a video: A long-term unsupervised model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 48:500–511, 2026.
- [73] Jingyu Yan and Marc Pollefeys. A general framework for motion segmentation: Independent, articulated, rigid, non-rigid, degenerate and non-degenerate. In European Conference on Computer Vision, pages 94–106, 2006.
- [74] Shankar R. Rao, Roberto Tron, René Vidal, and Yi Ma. Motion segmentation via robust subspace separation in the presence of outlying, incomplete, or corrupted trajectories. In IEEE Conference on Computer Vision and Pattern Recognition, 2008.
- [75] Laurynas Karazija, Iro Laina, Christian Rupprecht, and Andrea Vedaldi. Learning segmentation from point trajectories. In Advances in Neural Information Processing Systems, 2024.
- [76] Anil M. Cheriyadat and Richard J. Radke. Non-negative matrix factorization of partial track data for motion segmentation. In IEEE International Conference on Computer Vision, pages 865–872, 2009.
- [77] Margret Keuper, Bjoern Andres, and Thomas Brox. Motion trajectory segmentation via minimum cost multicuts. In IEEE International Conference on Computer Vision, pages 3271–3279, 2015.
- [78] Margret Keuper. Higher-order minimum cost lifted multicuts for motion segmentation. In IEEE International Conference on Computer Vision, pages 4252–4260, 2017.
- [79] Peter Ochs and Thomas Brox. Higher order motion models and spectral clustering. In IEEE Conference on Computer Vision and Pattern Recognition, pages 614–621, 2012.
- [80] Peter Ochs, Jitendra Malik, and Thomas Brox. Segmentation of moving objects by long term video analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 36:1187–1200, 2014.