Recognition: unknown
UniAda: Universal Adaptive Multi-objective Adversarial Attack for End-to-End Autonomous Driving Systems
Pith reviewed 2026-05-08 07:48 UTC · model grok-4.3
The pith
UniAda crafts image-agnostic perturbations that simultaneously disrupt steering and speed in end-to-end autonomous driving systems using adaptive multi-objective optimization.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
By designing a multi-objective optimization function with an Adaptive Weighting Scheme, UniAda generates a universal adversarial perturbation that concurrently influences steering and speed decisions in DL-based end-to-end autonomous driving systems, achieving greater deviations than prior benchmark attacks on both simulated and real driving data.
What carries the argument
The Adaptive Weighting Scheme in the multi-objective optimization function, which enables concurrent optimization of steering and speed attack objectives to produce image-agnostic perturbations.
Load-bearing premise
That the Adaptive Weighting Scheme can effectively balance steering and speed objectives in the optimization without scene-specific tuning or violating the imperceptibility of the perturbations.
What would settle it
A replication on the same simulated and real-world datasets in which UniAda failed to produce larger average steering and speed deviations than the five benchmark methods would refute the claim.
Original abstract
Adversarial attacks play a pivotal role in testing and improving the reliability of deep learning (DL) systems. Existing literature has demonstrated that subtle perturbations to the input can elicit erroneous outcomes, thereby substantially compromising the security of DL systems. This has emerged as a critical concern in the development of DL-based safety-critical systems like Autonomous Driving Systems (ADSs). The focus of existing adversarial attack methods on End-to-End (E2E) ADSs has predominantly centered on misbehaviors of steering angle, which overlooks speed-related controls or imperceptible perturbations. To address these challenges, we introduce UniAda, a multi-objective white-box attack technique with a core function that revolves around crafting an image-agnostic adversarial perturbation capable of simultaneously influencing both steering and speed controls. UniAda capitalizes on an intricately designed multi-objective optimization function with the Adaptive Weighting Scheme (AWS), enabling the concurrent optimization of diverse objectives. Validated with both simulated and real-world driving data, UniAda outperforms five benchmarks across two metrics, inducing steering and speed deviations from 3.54 degrees to 29 degrees and 11 km per hour to 22 km per hour on average. This systematic approach establishes UniAda as a proven technique for adversarial attacks on modern DL-based E2E ADSs.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces UniAda, a white-box multi-objective adversarial attack for end-to-end autonomous driving systems. It proposes an Adaptive Weighting Scheme (AWS) within a multi-objective optimization to generate a single image-agnostic perturbation that simultaneously induces errors in both steering-angle and speed predictions. The approach is evaluated on simulated and real-world driving data and is reported to outperform five existing benchmarks, producing average steering deviations in the range 3.54–29° and speed deviations in the range 11–22 km/h.
Significance. If the AWS formulation, experimental protocol, and statistical reporting are made rigorous, the work would be a useful contribution to adversarial robustness testing of safety-critical ADS. Simultaneous multi-objective attacks and truly universal (image-agnostic) perturbations remain under-explored relative to single-objective steering attacks; a well-documented method that transfers across scenes while respecting imperceptibility constraints would be of practical value for red-teaming E2E controllers.
major comments (3)
- [Abstract and §3] Abstract and §3 (Method): the multi-objective loss and the explicit adaptive weighting rule of the AWS are not stated. Without the joint objective (e.g., L = w_s·L_steer + w_v·L_speed) and the dynamic rule that sets w_s, w_v at each optimization step, it is impossible to verify that neither objective collapses or that the resulting perturbation remains image-agnostic and imperceptible across unseen scenes.
- [§4] §4 (Experiments): the headline performance numbers (steering 3.54–29°, speed 11–22 km/h) are given only as ranges or averages with no per-scene success rates, perturbation-norm statistics (‖δ‖_p), error bars, or statistical significance tests. This leaves open the possibility that inconsistent balancing on some scenes explains the wide spread rather than systematic superiority of the AWS.
- [§4] §4 and Table X: no ablation isolating the adaptive component from a fixed-weight baseline is reported. Without this comparison it cannot be established that the claimed outperformance is attributable to the AWS rather than to standard multi-objective optimization or to differences in attack budget.
minor comments (3)
- [Abstract] Abstract: the phrasing “inducing … deviations from 3.54 degrees to 29 degrees … on average” is ambiguous; clarify whether the reported figures are means across scenes or the observed min–max range.
- [§2] §2 (Related Work): the five benchmark methods should be summarized with their original loss formulations and imperceptibility constraints so that the claimed improvements can be directly compared.
- [§4.2] Figure 3 or §4.2: add the L_p-norm values of the generated perturbations alongside the deviation metrics to allow direct assessment of imperceptibility relative to the baselines.
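As a concrete illustration of what could be reported alongside the deviation metrics, the two standard imperceptibility proxies take a few lines to compute (a sketch; the helper name `perturbation_norms` is ours, not the paper's):

```python
import numpy as np

def perturbation_norms(delta):
    # L2 and L-infinity norms of a perturbation array, the two
    # imperceptibility proxies most commonly reported for attacks.
    flat = np.asarray(delta, dtype=float).ravel()
    return {"l2": float(np.linalg.norm(flat)),
            "linf": float(np.max(np.abs(flat)))}
```

Reporting both per scene (mean and spread) would let readers compare the attack budget directly against the baselines.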
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed comments on our manuscript. The feedback identifies important gaps in the presentation of the method and the rigor of the experimental reporting. We will revise the manuscript to address all points raised, adding the missing formulations, expanded statistical details, and the requested ablation study. Our point-by-point responses follow.
Point-by-point responses
-
Referee: [Abstract and §3] Abstract and §3 (Method): the multi-objective loss and the explicit adaptive weighting rule of the AWS are not stated. Without the joint objective (e.g., L = w_s·L_steer + w_v·L_speed) and the dynamic rule that sets w_s, w_v at each optimization step, it is impossible to verify that neither objective collapses or that the resulting perturbation remains image-agnostic and imperceptible across unseen scenes.
Authors: We agree that the explicit joint objective and the precise adaptive weighting rule were not stated with sufficient mathematical detail in the abstract and Section 3. In the revised manuscript we will insert the formulation L = w_s · L_steer + w_v · L_speed together with the exact dynamic rule used by AWS (weight adjustment proportional to the inverse of the current per-objective loss magnitudes at each gradient step). We will also add a short analysis confirming that the resulting perturbation stays image-agnostic and satisfies the imperceptibility constraint on held-out scenes. These additions will make the method fully verifiable. revision: yes
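The weighting rule described in this response can be sketched as follows. This is an illustrative reading of "weight adjustment proportional to the inverse of the current per-objective loss magnitudes," not the authors' code; the names `aws_weights` and `joint_loss` are hypothetical.

```python
import numpy as np

def aws_weights(losses, eps=1e-8):
    # Weight each objective by the inverse of its current loss
    # magnitude, normalized to sum to 1. Each weighted term w_i * L_i
    # then contributes on the same scale, so neither objective's
    # gradient dominates the update of the shared perturbation.
    inv = 1.0 / (np.abs(np.asarray(losses, dtype=float)) + eps)
    return inv / inv.sum()

def joint_loss(loss_steer, loss_speed):
    # Joint objective L = w_s * L_steer + w_v * L_speed, with weights
    # recomputed from the current loss values at every gradient step.
    w_s, w_v = aws_weights([loss_steer, loss_speed])
    return w_s * loss_steer + w_v * loss_speed
```

Under this reading, losses of 2.0 and 0.5 yield weights of 0.2 and 0.8, so both weighted terms equal 0.4; whether the paper applies exactly this normalization is what the requested revision should make explicit.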
-
Referee: [§4] §4 (Experiments): the headline performance numbers (steering 3.54–29°, speed 11–22 km/h) are given only as ranges or averages with no per-scene success rates, perturbation-norm statistics (‖δ‖_p), error bars, or statistical significance tests. This leaves open the possibility that inconsistent balancing on some scenes explains the wide spread rather than systematic superiority of the AWS.
Authors: We acknowledge that the current experimental section reports only aggregate ranges and averages. In the revision we will augment Section 4 and the associated tables with per-scene success rates, L_p-norm statistics of the perturbations, error bars (standard deviation across scenes), and statistical significance tests (paired t-tests or Wilcoxon signed-rank tests) against each baseline. These additions will demonstrate that the observed deviations are consistent rather than the result of scene-specific imbalance. revision: yes
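The requested significance testing is simple to sketch; a minimal paired t-statistic over per-scene deviation pairs follows (the helper name `paired_t` is ours; `scipy.stats.ttest_rel` or `scipy.stats.wilcoxon` would serve equally):

```python
import math
from statistics import mean, stdev

def paired_t(attack_a, attack_b):
    # Paired t-statistic on per-scene deviation pairs (same scenes,
    # two attacks). Compare |t| against the t distribution with
    # n - 1 degrees of freedom to obtain a p-value.
    diffs = [a - b for a, b in zip(attack_a, attack_b)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))
```

For example, hypothetical per-scene steering deviations of [2, 4, 6, 8] versus a baseline's [1, 2, 3, 4] give t ≈ 3.87 with 3 degrees of freedom; the Wilcoxon signed-rank variant drops the normality assumption on the differences.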
-
Referee: [§4] §4 and Table X: no ablation isolating the adaptive component from a fixed-weight baseline is reported. Without this comparison it cannot be established that the claimed outperformance is attributable to the AWS rather than to standard multi-objective optimization or to differences in attack budget.
Authors: We concur that an explicit ablation is required to isolate the benefit of adaptivity. We will add a new subsection and table that compares UniAda (AWS) against an otherwise identical fixed-weight multi-objective attack using the same attack budget and optimization settings. The results will quantify the additional gain attributable to the adaptive weighting scheme. revision: yes
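Such an ablation can reuse a single update loop in which only the weighting differs. A sketch of one signed-gradient step for a universal perturbation under an L-infinity budget (an assumed standard formulation; the paper's exact update rule, step size, and budget are not stated here):

```python
import numpy as np

def universal_step(delta, grad_steer, grad_speed, weights,
                   step=0.01, eps=0.03):
    # One update of the shared (image-agnostic) perturbation: combine
    # the per-objective gradients with the given weights (constant for
    # the fixed-weight baseline, recomputed each step for AWS), take a
    # signed-gradient step, and project back onto the L-infinity ball
    # of radius eps so the attack budget is identical in both arms.
    w_s, w_v = weights
    g = w_s * grad_steer + w_v * grad_speed
    return np.clip(delta + step * np.sign(g), -eps, eps)
```

Holding `step`, `eps`, and the iteration count fixed across the two arms is what isolates the contribution of the adaptive weights.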
Circularity Check
No significant circularity in derivation chain
Full rationale
The paper presents an empirical proposal for a multi-objective adversarial attack method (UniAda) relying on an Adaptive Weighting Scheme within standard optimization. No equations, derivations, or mathematical chains are described that reduce claimed results (e.g., deviation ranges or outperformance) to the method's own inputs by construction. Validation uses external simulated and real-world driving data against five benchmarks, which constitutes independent evidence rather than self-referential fitting or renaming. No load-bearing self-citations, uniqueness theorems, or ansatzes are invoked. This matches the default expectation for non-circular empirical ML papers.
Axiom & Free-Parameter Ledger
Reference graph
Works this paper leans on
-
[1]
Watch just a few self-driving cars stop traffic jams,
M. Hutson, “Watch just a few self-driving cars stop traffic jams,” 2018
2018
-
[2]
A Survey of Autonomous Driving: Common Practices and Emerging Technologies,
E. Yurtsever, J. Lambert, A. Carballo, and K. Takeda, “A Survey of Autonomous Driving: Common Practices and Emerging Technologies,” IEEE access, vol. 8, pp. 58443–58469, 2020
2020
-
[3]
A Survey of End-to-End Driving: Architectures and Training Methods,
A. Tampuu, T. Matiisen, M. Semikin, D. Fishman, and N. Muhammad, “A Survey of End-to-End Driving: Architectures and Training Methods,” IEEE Transactions on Neural Networks and Learning Systems, vol. 33, no. 4, pp. 1364–1384, 2020
2020
-
[4]
The Development of Machine Vision for Road Vehicles in the Last Decade,
E. D. Dickmanns, “The Development of Machine Vision for Road Vehicles in the Last Decade,” in Intelligent Vehicle Symposium, 2002, vol. 1, pp. 268–281, IEEE, 2002
2002
-
[5]
A Perception-Driven Autonomous Urban Vehicle,
J. Leonard, J. How, S. Teller, M. Berger, S. Campbell, G. Fiore, L. Fletcher, E. Frazzoli, A. Huang, S. Karaman, et al., “A Perception-Driven Autonomous Urban Vehicle,” Journal of Field Robotics, vol. 25, no. 10, pp. 727–774, 2008
2008
-
[6]
End to End Learning for Self-Driving Cars
M. Bojarski, D. Del Testa, D. Dworakowski, B. Firner, B. Flepp, P. Goyal, L. D. Jackel, M. Monfort, U. Muller, J. Zhang, et al., “End to End Learning for Self-Driving Cars,” arXiv preprint arXiv:1604.07316, 2016
2016
-
[7]
End-to-end Driving via Conditional Imitation Learning,
F. Codevilla, M. Müller, A. López, V. Koltun, and A. Dosovitskiy, “End-to-end Driving via Conditional Imitation Learning,” in 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 4693–4700, IEEE, 2018
2018
-
[8]
Exploring the Limitations of Behavior Cloning for Autonomous Driving,
F. Codevilla, E. Santana, A. M. López, and A. Gaidon, “Exploring the Limitations of Behavior Cloning for Autonomous Driving,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9329–9338, 2019
2019
-
[9]
Using Deep Learning to Predict Steering Angles.,
Udacity, “Using Deep Learning to Predict Steering Angles,” 2016. https://medium.com/udacity/challenge-2-using-deep-learning-to-predict-steering-angles-f42004a36ff3
2016
-
[10]
DeepBillboard: Systematic Physical-World Testing of Autonomous Driving Systems,
H. Zhou, W. Li, Z. Kong, J. Guo, Y. Zhang, B. Yu, L. Zhang, and C. Liu, “DeepBillboard: Systematic Physical-World Testing of Autonomous Driving Systems,” in 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE), pp. 347–358, IEEE, 2020
2020
-
[11]
DeepRoad: GAN-based Metamorphic Testing and Input Validation Framework for Autonomous Driving Systems,
M. Zhang, Y. Zhang, L. Zhang, C. Liu, and S. Khurshid, “DeepRoad: GAN-based Metamorphic Testing and Input Validation Framework for Autonomous Driving Systems,” in 2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 132–142, IEEE, 2018
2018
-
[12]
PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving,
Z. Kong, J. Guo, A. Li, and C. Liu, “PhysGAN: Generating Physical-World-Resilient Adversarial Examples for Autonomous Driving,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14254–14263, 2020
2020
-
[13]
DeepXplore: Automated White-box Testing of Deep Learning Systems,
K. Pei, Y. Cao, J. Yang, and S. Jana, “DeepXplore: Automated White-box Testing of Deep Learning Systems,” in Proceedings of the 26th Symposium on Operating Systems Principles, pp. 1–18, 2017
2017
-
[14]
Intriguing properties of neural networks
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing Properties of Neural Networks,” arXiv preprint arXiv:1312.6199, 2013
2013
-
[15]
Explaining and Harnessing Adversarial Examples
I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and Harnessing Adversarial Examples,” arXiv preprint arXiv:1412.6572, 2014
2014
-
[16]
Adversarial Machine Learning at Scale,
A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial Machine Learning at Scale,” arXiv preprint arXiv:1611.01236, 2016
-
[17]
Adversarial Examples in the Physical World,
A. Kurakin, I. J. Goodfellow, and S. Bengio, “Adversarial Examples in the Physical World,” in Artificial Intelligence Safety and Security, pp. 99–112, Chapman and Hall/CRC, 2018
2018
-
[18]
Ensemble Adversarial Training: Attacks and Defenses,
F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, “Ensemble Adversarial Training: Attacks and Defenses,” arXiv preprint arXiv:1705.07204, 2017
-
[19]
Testing DNN-based Autonomous Driving Systems under Critical Environmental Conditions,
Z. Li, M. Pan, T. Zhang, and X. Li, “Testing DNN-based Autonomous Driving Systems under Critical Environmental Conditions,” in International Conference on Machine Learning, pp. 6471–6482, PMLR, 2021
2021
-
[20]
DeepTest: Automated Testing of Deep-Neural-Network-Driven Autonomous Cars,
Y. Tian, K. Pei, S. Jana, and B. Ray, “DeepTest: Automated Testing of Deep-Neural-Network-Driven Autonomous Cars,” in Proceedings of the 40th International Conference on Software Engineering, pp. 303–314, 2018
2018
-
[21]
DeepManeuver: Adversarial Test Generation for Trajectory Manipulation of Autonomous Vehicles,
M. von Stein, D. Shriver, and S. Elbaum, “DeepManeuver: Adversarial Test Generation for Trajectory Manipulation of Autonomous Vehicles,” IEEE Transactions on Software Engineering, 2023
2023
-
[22]
Misbehaviour Prediction for Autonomous Driving Systems,
A. Stocco, M. Weiss, M. Calzana, and P. Tonella, “Misbehaviour Prediction for Autonomous Driving Systems,” in Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering, pp. 359–371, 2020
2020
-
[23]
A Survey on Universal Adversarial Attack,
C. Zhang, P. Benz, C. Lin, A. Karjauv, J. Wu, and I. S. Kweon, “A Survey on Universal Adversarial Attack,” arXiv preprint arXiv:2103.01498, 2021
-
[24]
Universal Adversarial Perturbations,
S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, “Universal Adversarial Perturbations,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1765–1773, 2017
2017
-
[25]
GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks,
Z. Chen, V. Badrinarayanan, C.-Y. Lee, and A. Rabinovich, “GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks,” in International Conference on Machine Learning, pp. 794–803, PMLR, 2018
2018
-
[26]
Vision meets Robotics: The KITTI Dataset,
A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, “Vision meets Robotics: The KITTI Dataset,” The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013
2013
-
[27]
Autopilot-Tensorflow,
Sully-Chen, “Autopilot-Tensorflow,” 2016. https://github.com/SullyChen/Autopilot-TensorFlow
2016
-
[28]
Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles,
J. Zhang, Y. Lou, J. Wang, K. Wu, K. Lu, and X. Jia, “Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles,” IEEE Internet of Things Journal, vol. 9, no. 5, pp. 3443–3456, 2021
2021
-
[29]
A Survey of Deep Learning Applications to Autonomous Vehicle Control,
S. Kuutti, R. Bowden, Y. Jin, P. Barber, and S. Fallah, “A Survey of Deep Learning Applications to Autonomous Vehicle Control,” IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 2, pp. 712–733, 2020
2020
-
[30]
Transfuser: Imitation with Transformer-based Sensor Fusion for Autonomous Driving,
K. Chitta, A. Prakash, B. Jaeger, Z. Yu, K. Renz, and A. Geiger, “Transfuser: Imitation with Transformer-based Sensor Fusion for Autonomous Driving,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022
2022
-
[31]
MP3: A Unified Model to Map, Perceive, Predict and Plan,
S. Casas, A. Sadat, and R. Urtasun, “MP3: A Unified Model to Map, Perceive, Predict and Plan,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14403–14412, 2021
2021
-
[32]
ALVINN: An Autonomous Land Vehicle in a Neural Network,
D. A. Pomerleau, “ALVINN: An Autonomous Land Vehicle in a Neural Network,” Advances in Neural Information Processing Systems, vol. 1, 1988
1988
-
[33]
Deep Learning-based Autonomous Driving Systems: A Survey of Attacks and Defenses,
Y. Deng, T. Zhang, G. Lou, X. Zheng, J. Jin, and Q.-L. Han, “Deep Learning-based Autonomous Driving Systems: A Survey of Attacks and Defenses,” IEEE Transactions on Industrial Informatics, vol. 17, no. 12, pp. 7897–7912, 2021
2021
-
[34]
Practical Black-Box Attacks against Machine Learning,
N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical Black-Box Attacks against Machine Learning,” in Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506–519, 2017
2017
-
[35]
Attacking Spectrum Sensing with Adversarial Deep Learning in Cognitive Radio-Enabled Internet of Things,
M. Liu, H. Zhang, Z. Liu, and N. Zhao, “Attacking Spectrum Sensing with Adversarial Deep Learning in Cognitive Radio-Enabled Internet of Things,” IEEE Transactions on Reliability, 2022
2022
-
[36]
Adversarial Attacks in Modulation Recognition with Convolutional Neural Networks,
Y. Lin, H. Zhao, X. Ma, Y. Tu, and M. Wang, “Adversarial Attacks in Modulation Recognition with Convolutional Neural Networks,” IEEE Transactions on Reliability, vol. 70, no. 1, pp. 389–401, 2020
2020
-
[37]
Detection Tolerant Black-Box Adversarial Attack Against Automatic Modulation Classification with Deep Learning,
P. Qi, T. Jiang, L. Wang, X. Yuan, and Z. Li, “Detection Tolerant Black-Box Adversarial Attack Against Automatic Modulation Classification with Deep Learning,” IEEE Transactions on Reliability, vol. 71, no. 2, pp. 674–686, 2022
2022
-
[38]
Semantic Image Fuzzing of AI Perception Systems,
T. Woodlief, S. Elbaum, and K. Sullivan, “Semantic Image Fuzzing of AI Perception Systems,” in Proceedings of the 44th International Conference on Software Engineering, pp. 1958–1969, 2022
2022
-
[39]
Feasibility and Suppression of Adversarial Patch Attacks on End-to-End Vehicle Control,
S. Pavlitskaya, S. Ünver, and J. M. Zöllner, “Feasibility and Suppression of Adversarial Patch Attacks on End-to-End Vehicle Control,” in 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), pp. 1–8, IEEE, 2020
2020
-
[40]
Adversarial Examples: Attacks and Defenses for Deep Learning,
X. Yuan, P. He, Q. Zhu, and X. Li, “Adversarial Examples: Attacks and Defenses for Deep Learning,” IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 9, pp. 2805–2824, 2019
2019
-
[41]
On Offline Evaluation of Vision-based Driving Models,
F. Codevilla, A. M. Lopez, V. Koltun, and A. Dosovitskiy, “On Offline Evaluation of Vision-based Driving Models,” in Proceedings of the European Conference on Computer Vision (ECCV), pp. 236–251, 2018
2018
-
[42]
Comparing Offline and Online Testing of Deep Neural Networks: An Autonomous Car Case Study,
F. U. Haq, D. Shin, S. Nejati, and L. C. Briand, “Comparing Offline and Online Testing of Deep Neural Networks: An Autonomous Car Case Study,” in 2020 IEEE 13th International Conference on Software Testing, Validation and Verification (ICST), pp. 85–95, IEEE, 2020
2020
-
[43]
Covering code behavior on input validation in functional testing,
H. Liu and H. B. K. Tan, “Covering code behavior on input validation in functional testing,” Information and Software Technology, vol. 51, no. 2, pp. 546–553, 2009
2009
-
[44]
Black Box and White Box Testing Techniques-A Literature Review,
S. Nidhra and J. Dondeti, “Black Box and White Box Testing Techniques-A Literature Review,” International Journal of Embedded Systems and Applications (IJESA), vol. 2, no. 2, pp. 29–50, 2012
2012
-
[45]
Adversarial Driving: Attacking End-to-End Autonomous Driving,
H. Wu, S. Yunas, S. Rowlands, W. Ruan, and J. Wahlström, “Adversarial Driving: Attacking End-to-End Autonomous Driving,” in 2023 IEEE Intelligent Vehicles Symposium (IV), pp. 1–7, IEEE, 2023
2023
-
[46]
An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models,
Y. Deng, X. Zheng, T. Zhang, C. Chen, G. Lou, and M. Kim, “An Analysis of Adversarial Attacks and Defenses on Autonomous Driving Models,” in 2020 IEEE International Conference on Pervasive Computing and Communications (PerCom), pp. 1–10, IEEE, 2020
2020
-
[47]
A Few Useful Things to Know About Machine Learning,
P. Domingos, “A Few Useful Things to Know About Machine Learning,” Communications of the ACM, vol. 55, no. 10, pp. 78–87, 2012
2012
-
[48]
How Does Learning Rate Decay Help Modern Neural Networks?,
K. You, M. Long, J. Wang, and M. I. Jordan, “How Does Learning Rate Decay Help Modern Neural Networks?,” arXiv preprint arXiv:1908.01878, 2019
-
[49]
Guiding Deep Learning System Testing using Surprise Adequacy,
J. Kim, R. Feldt, and S. Yoo, “Guiding Deep Learning System Testing using Surprise Adequacy,” in 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), pp. 1039–1049, IEEE, 2019
2019
-
[50]
Self-Driving Car Steering Angle Prediction: Let Transformer Be a Car Again,
C. Oinar and E. Kim, “Self-Driving Car Steering Angle Prediction: Let Transformer Be a Car Again,” arXiv preprint arXiv:2204.12748, 2022
-
[51]
The Probable Error of a Mean,
W. S. Gosset, “The Probable Error of a Mean,” Biometrika, pp. 1–25, 1908
1908
-
[52]
Enhancing the Transferability of Adversarial Attacks through Variance Tuning,
X. Wang and K. He, “Enhancing the Transferability of Adversarial Attacks through Variance Tuning,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1924–1933, 2021
2021
-
[53]
Boosting Adversarial Attacks with Momentum,
Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, “Boosting Adversarial Attacks with Momentum,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9185–9193, 2018
2018
-
[54]
Improving Transferability of Adversarial Examples with Input Diversity,
C. Xie, Z. Zhang, Y. Zhou, S. Bai, J. Wang, Z. Ren, and A. L. Yuille, “Improving Transferability of Adversarial Examples with Input Diversity,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2730–2739, 2019
2019