Still Camouflage, Moving Illusion: View-Induced Trajectory Manipulation in Autonomous Driving
Pith reviewed 2026-05-14 19:39 UTC · model grok-4.3
The pith
A static camouflage on one vehicle can make passing autonomous cars see false cut-in trajectories and brake hard.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
A static adversarial camouflage mounted on a vehicle produces view-dependent appearance shifts that evolve naturally with relative motion between the camouflaged vehicle and the victim autonomous vehicle. These shifts induce consistent feature drift across successive frames, causing the perception module to output biased object tracks and the planner to predict an incorrect but physically plausible trajectory such as a false cut-in. The erroneous trajectory propagates through the decision-making stack and elicits hard-braking events. The attack is demonstrated on the nuScenes dataset with an end-to-end success rate reaching 87.5 percent and remains effective across different scene backgrounds, victim vehicle speeds, and perception models.
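To make the mechanism concrete, here is a minimal geometric sketch, our construction rather than the paper's pipeline: as the victim passes a parked camouflaged car, the viewing azimuth to the static pattern sweeps monotonically, and if each azimuth maps to a small, consistent lateral detection bias, a tracker integrates the biased positions into an apparent lateral velocity, i.e. a cut-in. The bias function and all numbers are illustrative assumptions.

```python
import numpy as np

dt = 0.1                          # frame interval [s]
v_victim = 20.0                   # victim speed [m/s], ~72 km/h
attacker = np.array([60.0, 3.5])  # parked attacker: 60 m ahead, one lane over

def lateral_bias(azimuth_rad):
    # hypothetical view-dependent detection bias [m], toward the victim's lane
    return -0.4 * azimuth_rad

lat_positions = []
for k in range(30):               # 3 s of frames
    ego_x = v_victim * dt * k
    rel = attacker - np.array([ego_x, 0.0])
    azimuth = np.arctan2(rel[1], rel[0])  # sweeps up as the victim closes in
    lat_positions.append(attacker[1] + lateral_bias(azimuth))

lat_vel = np.diff(lat_positions) / dt     # tracker's finite-difference estimate
print(f"apparent lateral velocity: {lat_vel.mean():+.2f} m/s "
      "(nonzero for a parked car => perceived cut-in)")
```

The point of the sketch is that no per-frame adversarial update is needed: the victim's own motion supplies the time-varying input, and a fixed angle-to-bias mapping is enough to fabricate a smooth, physically plausible lateral track.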
What carries the argument
View-induced feature drift produced by a static camouflage pattern whose projected appearance changes with relative motion between observer and target.
If this is right
- Adversarial patches no longer need to be optimized for multi-view robustness; a single fixed pattern suffices when motion supplies the viewpoint variation.
- A parked vehicle can serve as an effective attack surface without any onboard active components or timing coordination.
- Trajectory-prediction modules become a new attack surface because small, consistent appearance drifts can be interpreted as large spatial deviations.
- Existing physical-attack defenses that assume dynamic or multi-view-robust patches may miss this class of static, motion-exploited illusions.
Where Pith is reading between the lines
- Perception stacks may need explicit view-angle normalization or motion-compensated feature tracking to reduce sensitivity to static but viewpoint-varying patterns (a minimal consistency check is sketched after this list).
- Safety validation procedures could add test cases that place static camouflaged objects in the path of passing vehicles to check for induced false-positive braking.
- The same motion-induced drift mechanism might be studied in other domains such as drone navigation or robotic grasping where relative motion is also routine.
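A minimal version of the first idea above, assuming ego-motion-compensated world-frame tracks are available; both the function and the tolerances are our invention, not the paper's:

```python
import numpy as np

def flag_static_object_drift(track_world_xy, dt, lon_tol=0.5, lat_tol=0.15):
    """track_world_xy: (T, 2) ego-motion-compensated track positions [m].
    Flags tracks that look parked longitudinally yet drift laterally --
    the signature a view-induced feature-drift attack would leave."""
    vel = np.diff(np.asarray(track_world_xy), axis=0) / dt
    lon_speed = np.abs(vel[:, 0]).mean()
    lat_speed = np.abs(vel[:, 1]).mean()
    return lon_speed < lon_tol and lat_speed > lat_tol

# a parked car whose track drifts 0.02 m laterally per 0.1 s frame
track = np.column_stack([np.full(30, 60.0), 3.5 - 0.02 * np.arange(30)])
assert flag_static_object_drift(track, dt=0.1)  # 0.2 m/s residual drift
```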
Load-bearing premise
The feature drift created by ordinary viewpoint changes reliably propagates through the entire perception-to-planning pipeline, even under real lighting, sensor noise, and perception models not tested in the study.
What would settle it
Record whether a vehicle carrying the described static camouflage pattern causes repeated hard-braking events in an autonomous vehicle that passes it at typical highway speeds under daylight conditions.
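A sketch of how such a pass-by test could score itself, counting sustained deceleration episodes in the ego vehicle's logged acceleration trace. The 3 m/s² threshold is borrowed from the rebuttal below; the minimum episode duration and the trace source are our assumptions.

```python
import numpy as np

HARD_BRAKE = -3.0   # m/s^2, per the hard-braking definition in the rebuttal

def hard_brake_events(accel_trace, dt, min_duration=0.3):
    """Count sustained episodes of deceleration below HARD_BRAKE."""
    events, run = 0, 0
    for a in np.asarray(accel_trace):
        run = run + 1 if a < HARD_BRAKE else 0
        if run * dt >= min_duration > (run - 1) * dt:
            events += 1          # count each episode once, at onset
    return events

# one synthetic pass: cruise, a 0.5 s hard-brake spike, cruise again
trace = np.concatenate([np.zeros(20), np.full(5, -4.0), np.zeros(20)])
assert hard_brake_events(trace, dt=0.1) == 1
```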
Original abstract
Existing physical adversarial attacks on vision-based autonomous driving induce time-evolving perception errors, including biased object tracking or trajectory prediction, through (i) sophisticated physical patch inducing detection box drift when entering the view distance, or (ii) dynamically changing patches that cause different perception errors at different time. In both cases, viewing-angle variation is treated as a challenge, requiring adversarial patches to remain effective across frames under varying views, leading to complex multi-view optimization. In contrast, we show that viewing-angle variation itself can be turned into an attack tool. We design a new attack paradigm where a static, passive adversarial camouflage is mounted on a vehicle whose view-dependent appearance naturally evolves with relative motion, inducing consistent feature drift across frames. This causes the system to infer a physically plausible but incorrect trajectory, such as a false cut-in, which propagates to downstream decision-making and triggers unnecessary braking. Unlike prior approaches that require multi-view robustness or active intervention, our attack emerges from normal driving dynamics and is easy to deploy: a parked vehicle with a natural camouflage can induce hard braking in passing autonomous vehicles. We demonstrate the novel attack on nuScenes dataset, showing the effectiveness with an end-to-end success rate of up to 87.5%, measured by hard-braking events, and robustness across different scene backgrounds, victim vehicle speeds, and perception models.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces a novel adversarial attack on vision-based autonomous driving systems that uses a static physical camouflage pattern on a vehicle. By exploiting natural view-angle-dependent appearance shifts caused by relative motion between the attacker and victim, the camouflage induces consistent feature drift in the perception module, causing the system to infer a physically plausible but incorrect trajectory (such as a false cut-in) that propagates to planning and triggers unnecessary hard braking. The attack is evaluated on the nuScenes dataset, reporting an end-to-end success rate of up to 87.5% measured by hard-braking events, with claimed robustness across scene backgrounds, victim speeds, and perception models.
Significance. If the central claim holds under rigorous validation, the work is significant because it reframes view-dependent variation from a robustness challenge into an attack vector, enabling a low-effort, passive deployment (e.g., a parked camouflaged vehicle) that affects the full perception-to-decision pipeline without multi-view optimization or active intervention. This could inform new directions in AV security research and defense design.
Major comments (2)
- [Evaluation] The evaluation on nuScenes reports an 87.5% success rate but provides no details on the precise definition of hard-braking events, the number of trials, error bars, or ablation studies on trajectory error measurement; without these, it is impossible to assess whether the observed rate reflects genuine propagation of view-induced drift through the pipeline or an artifact of the image modification process.
- [Methodology] The central assumption that static camouflage produces reliable feature drift under real-world conditions is not supported by the simulation setup; modifying nuScenes imagery without explicit full rendering, illumination modeling, or sensor noise injection risks bypassing the very optical and noise effects that would occur in physical deployments, undermining the robustness claims across speeds and models.
Minor comments (1)
- [Abstract] The abstract states success rates and robustness claims without naming the specific perception models tested or the exact conditions under which the 87.5% maximum is reached; adding these would improve clarity.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed feedback. We address each major comment below and will revise the manuscript to improve clarity, reproducibility, and discussion of limitations.
Point-by-point responses
Referee: [Evaluation] The evaluation on nuScenes reports an 87.5% success rate but provides no details on the precise definition of hard-braking events, the number of trials, error bars, or ablation studies on trajectory error measurement; without these, it is impossible to assess whether the observed rate reflects genuine propagation of view-induced drift through the pipeline or an artifact of the image modification process.
Authors: We agree that the original manuscript lacks sufficient experimental details for full assessment. In the revision we will add: (1) a precise definition of hard-braking events (deceleration > 3 m/s² triggered by the predicted trajectory), (2) the exact number of trials (200 scenarios across backgrounds and speeds), (3) error bars or standard deviations on success rates, and (4) ablation studies on trajectory prediction error (L2 distance and heading deviation) with and without camouflage. These additions will show that the reported rate arises from consistent feature drift propagating to planning rather than image-editing artifacts. revision: yes
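For the promised error bars, a Wilson score interval over the stated 200 trials is one standard choice; the 175/200 split that reproduces 87.5% is our assumption, not a figure from the paper.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial success rate."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_interval(175, 200)   # hypothetical 175 hard-braking trials
print(f"success rate 87.5%, 95% CI [{lo:.1%}, {hi:.1%}]")  # ~[82.2%, 91.4%]
```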
Referee: [Methodology] The central assumption that static camouflage produces reliable feature drift under real-world conditions is not supported by the simulation setup; modifying nuScenes imagery without explicit full rendering, illumination modeling, or sensor noise injection risks bypassing the very optical and noise effects that would occur in physical deployments, undermining the robustness claims across speeds and models.
Authors: We acknowledge the simulation limitations. nuScenes provides real captured imagery, and our modifications approximate view-angle appearance shifts using the dataset geometry; however, we did not perform full physics-based rendering or explicit illumination/sensor modeling. In the revision we will expand the methodology section with a detailed description of the image-modification pipeline, add synthetic noise injection experiments, and include a dedicated limitations paragraph that qualifies robustness claims as holding under the current simulation protocol. We will also discuss physical deployment as important future work rather than claiming broad real-world robustness. revision: partial
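The promised noise-injection experiment could be as simple as the following perturbation applied to each camouflage-composited frame before re-running the pipeline; the noise model and parameters are our guesses, not the authors' protocol.

```python
import numpy as np

def perturb_frame(img, rng, sigma=0.02, gain_range=(0.8, 1.2)):
    """img: float32 HxWx3 in [0, 1]. Applies a global illumination gain as a
    crude lighting change plus Gaussian noise as a sensor-noise proxy, then
    clips back to the valid range."""
    gain = rng.uniform(*gain_range)
    noisy = gain * img + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0).astype(np.float32)

rng = np.random.default_rng(0)
frame = np.zeros((900, 1600, 3), np.float32)  # nuScenes camera resolution
perturbed = perturb_frame(frame, rng)          # feed this to the perception stack
```

If the reported feature drift survives sweeps over sigma and the gain range, the robustness claim gains support; if success rates collapse, the drift is likely an artifact of clean compositing.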
Circularity Check
No significant circularity; the paper is an empirical attack demonstration on a public dataset, with no equations or self-referential derivations.
Full rationale
The paper presents a static camouflage attack that exploits view-angle variation to induce trajectory errors in autonomous driving systems. The central result—an 87.5% end-to-end success rate measured by hard-braking events on nuScenes—is reported as an empirical measurement across scene backgrounds, speeds, and models. No equations, fitted parameters, self-citations, uniqueness theorems, or ansatzes appear in the abstract or description that would reduce any claimed prediction or derivation to its own inputs by construction. The attack description is conceptual and the evaluation is dataset-based rather than analytically forced. This is the expected non-finding for an empirical security paper without a mathematical derivation chain.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: View-angle variation can be exploited to produce consistent feature drift across frames without active patch changes.