Recognition: 2 theorem links
CAAP: Capture-Aware Adversarial Patch Attacks on Palmprint Recognition Models
Pith reviewed 2026-05-10 17:52 UTC · model grok-4.3
The pith
CAAP shows that capture-aware adversarial patches can reliably compromise palmprint recognition models even after adversarial training.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
CAAP learns a universal cross-shaped adversarial patch that remains effective under realistic acquisition variation by combining input-conditioned rendering, stochastic capture simulation, and feature-level identity disruption. Experiments on the Tongji, IITD, and AISEC datasets show strong untargeted and targeted attack performance against both generic CNNs and palmprint-specific models, with favorable cross-model and cross-dataset transferability. Adversarial training lowers attack success rates but leaves substantial residual vulnerability, indicating that deep palmprint systems are not robust against physically realizable capture-aware patches.
What carries the argument
The CAAP framework, built around a cross-shaped patch topology that covers more texture area under a fixed pixel budget, plus ASIT for input-conditioned rendering, RaS for stochastic capture-aware simulation of distortions, and MS-DIFE for feature-level identity-disruptive guidance.
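The geometric intuition behind the cross shape can be sketched numerically: at a comparable pixel budget, a plus-shaped mask spans a much larger bounding box than a square one, so it intersects more long-range texture. The mask construction below is an illustrative sketch, not the paper's actual topology or parameters.

```python
import numpy as np

def cross_mask(size: int, arm: int) -> np.ndarray:
    """Binary mask for a plus/cross-shaped patch: a horizontal and a
    vertical bar of width `arm`, centered in a size x size canvas."""
    m = np.zeros((size, size), dtype=bool)
    lo, hi = size // 2 - arm // 2, size // 2 + (arm + 1) // 2
    m[lo:hi, :] = True   # horizontal bar
    m[:, lo:hi] = True   # vertical bar
    return m

def square_mask(size: int, side: int) -> np.ndarray:
    """Binary mask for a centered square patch with the given side length."""
    m = np.zeros((size, size), dtype=bool)
    lo, hi = size // 2 - side // 2, size // 2 + (side + 1) // 2
    m[lo:hi, lo:hi] = True
    return m

def extent(mask: np.ndarray) -> tuple:
    """Height and width of the mask's bounding box."""
    rows, cols = np.where(mask)
    return (int(rows.max() - rows.min() + 1), int(cols.max() - cols.min() + 1))

size = 64
cross = cross_mask(size, arm=8)
# Choose a square with (roughly) the same pixel budget as the cross.
side = int(round(np.sqrt(cross.sum())))
square = square_mask(size, side)

print("pixels:", int(cross.sum()), int(square.sum()))  # similar budgets
print("cross extent:", extent(cross))                  # spans the full canvas
print("square extent:", extent(square))                # much smaller span
```

At the same budget of roughly 960 pixels, the cross reaches across the entire 64-pixel canvas while the square stays inside a 31-pixel box, which is the coverage argument the framework makes.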
Load-bearing premise
The stochastic capture-aware simulation and input-conditioned rendering accurately represent the distortions and variations that occur during actual physical palmprint image acquisition.
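To make this premise concrete, a RaS-style capture simulator would draw random photometric and optical distortions at each optimization step. The sketch below is a minimal stand-in under assumed distortion families (gamma shift, box blur, Gaussian sensor noise) with illustrative parameter ranges; the paper's actual RaS model may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_capture(img: np.ndarray, rng) -> np.ndarray:
    """One random draw from a toy capture model: gamma/brightness shift,
    box blur, and additive sensor noise. Ranges are illustrative only."""
    out = img.astype(np.float64)
    # Random gamma (illumination nonlinearity) and brightness gain.
    gamma = rng.uniform(0.8, 1.25)
    gain = rng.uniform(0.9, 1.1)
    out = gain * np.clip(out, 0.0, 1.0) ** gamma
    # Separable box blur of random kernel size (defocus proxy).
    k = int(rng.choice([1, 3, 5]))
    if k > 1:
        kernel = np.ones(k) / k
        out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, out)
        out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    # Additive Gaussian sensor noise.
    out += rng.normal(0.0, 0.01, size=out.shape)
    return np.clip(out, 0.0, 1.0)

palm = rng.uniform(0.2, 0.8, size=(32, 32))   # stand-in for a palmprint ROI
distorted = simulate_capture(palm, rng)
print(distorted.shape, float(distorted.min()), float(distorted.max()))
```

Whether draws like these cover the true distribution of scanner distortions is exactly the unverified premise at issue.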
What would settle it
Printing the generated patches on physical media, placing them on real palms, and capturing images with actual palmprint scanners to check whether the observed attack success rates match the simulated ones.
Original abstract
Palmprint recognition is deployed in security-critical applications, including access control and palm-based payment, due to its contactless acquisition and highly discriminative ridge-and-crease textures. However, the robustness of deep palmprint recognition systems against physically realizable attacks remains insufficiently understood. Existing studies are largely confined to the digital setting and do not adequately account for the texture-dominant nature of palmprint recognition or the distortions introduced during physical acquisition. To address this gap, we propose CAAP, a capture-aware adversarial patch framework for palmprint recognition. CAAP learns a universal patch that can be reused across inputs while remaining effective under realistic acquisition variation. To match the structural characteristics of palmprints, the framework adopts a cross-shaped patch topology, which enlarges spatial coverage under a fixed pixel budget and more effectively disrupts long-range texture continuity. CAAP further integrates three modules: ASIT for input-conditioned patch rendering, RaS for stochastic capture-aware simulation, and MS-DIFE for feature-level identity-disruptive guidance. We evaluate CAAP on the Tongji, IITD, and AISEC datasets against generic CNN backbones and palmprint-specific recognition models. Experiments show that CAAP achieves strong untargeted and targeted attack performance with favorable cross-model and cross-dataset transferability. The results further show that, although adversarial training can partially reduce the attack success rate, substantial residual vulnerability remains. These findings indicate that deep palmprint recognition systems remain vulnerable to physically realizable, capture-aware adversarial patch attacks, underscoring the need for more effective defenses in practice. Code available at https://github.com/ryliu68/CAAP.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes CAAP, a capture-aware adversarial patch framework targeting palmprint recognition models. It employs a cross-shaped patch topology to disrupt long-range texture features under a fixed pixel budget and integrates three modules: ASIT for input-conditioned patch rendering, RaS for stochastic simulation of capture variations, and MS-DIFE for feature-level identity disruption. Experiments on the Tongji, IITD, and AISEC datasets against generic CNNs and palmprint-specific models report strong untargeted and targeted attack success rates, favorable cross-model and cross-dataset transferability, and that adversarial training only partially reduces vulnerability.
Significance. If the RaS and ASIT modules faithfully model real acquisition distortions, the work would be significant for biometric security: it demonstrates residual vulnerabilities in deployed palmprint systems used for access control and payments, extends adversarial patch research to a texture-dominant modality, and motivates improved defenses.
major comments (3)
- [Abstract and Experimental Evaluation] The central claim that CAAP produces 'physically realizable' attacks with 'strong' performance and 'substantial residual vulnerability' after adversarial training is supported solely by digital experiments applying RaS stochastic simulations and ASIT rendering to images from the Tongji, IITD, and AISEC datasets. No physical experiments (printed patches, real palm captures under varied lighting/angles/sensors) are described. This is load-bearing for the physical-attack conclusions, as any mismatch between simulated and actual distortions (e.g., reflectance, blur, sensor noise) would prevent the reported ASR and transferability from translating to real-world settings.
- [Method and Experimental Evaluation] The RaS, ASIT, and MS-DIFE modules are presented as key innovations for capture-aware attacks, yet the manuscript provides no ablation studies, sensitivity analysis, or comparisons to standard adversarial patch baselines (e.g., without the cross-shaped topology or stochastic simulation). This makes it unclear whether the claimed performance gains are attributable to these components or to generic optimization, undermining assessment of the framework's novelty and necessity.
- [Abstract] Abstract: The abstract states that 'experiments show that CAAP achieves strong untargeted and targeted attack performance' and 'favorable cross-model and cross-dataset transferability' but reports no specific quantitative metrics (ASR values, baselines, error bars, or dataset-specific results). This absence hinders evaluation of the magnitude and consistency of the findings across the three datasets and multiple models.
minor comments (3)
- [Abstract] The abstract would be strengthened by including at least one or two key numerical results (e.g., average ASR percentages) to convey the scale of the reported attacks.
- [Method] Ensure all acronyms (ASIT, RaS, MS-DIFE) are expanded at first use and that module hyperparameters and implementation details are provided in sufficient detail for reproducibility.
- [Experimental Evaluation] Consider adding error bars or standard deviations to reported attack success rates and transferability metrics to indicate run-to-run or dataset variability.
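Such variability could be reported as a mean with a sample standard deviation over repeated runs. The per-run ASR values below are invented purely to illustrate the reporting format the comment asks for.

```python
import statistics

# Hypothetical per-run attack success rates (fractions) for one
# model/dataset pair; the values are illustrative, not from the paper.
runs = [0.91, 0.88, 0.93, 0.90, 0.89]

mean_asr = statistics.mean(runs)
std_asr = statistics.stdev(runs)   # sample standard deviation over runs

print(f"ASR = {100 * mean_asr:.1f}% ± {100 * std_asr:.1f}%")
# → ASR = 90.2% ± 1.9%
```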
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed comments. These have highlighted important aspects of our evaluation and presentation that we will address in the revision. Below we respond point-by-point to the major comments.
- Referee [Abstract and Experimental Evaluation]: The central claim that CAAP produces 'physically realizable' attacks with 'strong' performance and 'substantial residual vulnerability' after adversarial training is supported solely by digital experiments applying RaS stochastic simulations and ASIT rendering to images from the Tongji, IITD, and AISEC datasets. No physical experiments (printed patches, real palm captures under varied lighting/angles/sensors) are described. This is load-bearing for the physical-attack conclusions, as any mismatch between simulated and actual distortions (e.g., reflectance, blur, sensor noise) would prevent the reported ASR and transferability from translating to real-world settings.
Authors: We acknowledge that the manuscript relies exclusively on digital experiments with the RaS module to simulate capture variations rather than physical printing and recapture trials. The RaS component was designed to incorporate stochastic modeling of realistic acquisition factors (lighting, angle, sensor effects) drawn from the palmprint literature, but we agree this does not fully substitute for physical validation. In the revised manuscript we will (1) explicitly qualify all claims of physical realizability to reflect the simulation-based setting, (2) add a dedicated limitations subsection discussing potential gaps between simulated and real distortions, and (3) frame the work as demonstrating capture-aware digital attacks that serve as a necessary precursor to physical studies. Full physical experiments remain an important avenue for future work.
Revision: partial
- Referee [Method and Experimental Evaluation]: The RaS, ASIT, and MS-DIFE modules are presented as key innovations for capture-aware attacks, yet the manuscript provides no ablation studies, sensitivity analysis, or comparisons to standard adversarial patch baselines (e.g., without the cross-shaped topology or stochastic simulation). This makes it unclear whether the claimed performance gains are attributable to these components or to generic optimization, undermining assessment of the framework's novelty and necessity.
Authors: We agree that the absence of ablations and baseline comparisons weakens the ability to attribute performance gains to the proposed components. In the revised manuscript we will add (1) ablation studies removing or replacing each of ASIT, RaS, and MS-DIFE individually, (2) sensitivity analysis on key hyperparameters of RaS and the cross-shaped topology, and (3) direct comparisons against standard adversarial patch baselines (e.g., rectangular patches optimized without stochastic simulation or cross-shaped constraints). These additions will clarify the necessity of the proposed modules.
Revision: yes
- Referee [Abstract]: The abstract states that 'experiments show that CAAP achieves strong untargeted and targeted attack performance' and 'favorable cross-model and cross-dataset transferability' but reports no specific quantitative metrics (ASR values, baselines, error bars, or dataset-specific results). This absence hinders evaluation of the magnitude and consistency of the findings across the three datasets and multiple models.
Authors: We will revise the abstract to report concrete quantitative results, including untargeted and targeted attack success rates on the Tongji, IITD, and AISEC datasets, comparisons against relevant baselines, and summary statistics on cross-model and cross-dataset transferability. Error bars or standard deviations will be noted where applicable to convey consistency.
Revision: yes
- The complete absence of physical experiments (printed patches and real palm captures) cannot be remedied within a standard revision without new data collection; we can only qualify the claims and discuss limitations.
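The ablations promised in the responses above could be organized by enumerating every on/off combination of the three modules. The sketch below assumes only the module names from the paper; the configuration naming and selection logic are illustrative.

```python
from itertools import product

modules = ["ASIT", "RaS", "MS-DIFE"]

# Enumerate every on/off combination of the three modules, from the
# full framework down to a plain-patch baseline with all modules off.
configs = []
for flags in product([True, False], repeat=len(modules)):
    enabled = [m for m, on in zip(modules, flags) if on]
    configs.append("+".join(enabled) if enabled else "baseline")

print(configs)

# Leave-one-out ablations: exactly one module disabled (two joined by "+").
leave_one_out = [c for c in configs if c.count("+") == 1]
print(leave_one_out)
```

Running each configuration under the same attack budget would isolate the contribution of each module, which is what the referee's second major comment asks for.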
Circularity Check
No circularity detected in the proposed framework or experimental claims.
full rationale
The paper is an empirical contribution that introduces CAAP (with ASIT, RaS, and MS-DIFE modules) and reports attack success rates, transferability, and residual vulnerability after adversarial training on Tongji/IITD/AISEC datasets. No derivation chain, first-principles result, or prediction is claimed; the central assertions rest on standard adversarial optimization and evaluation protocols applied to the proposed modules. The physical-attack conclusion depends on the (unverified) fidelity of the capture simulations rather than any reduction of outputs to fitted inputs or self-citations by construction. No self-definitional, fitted-input, or uniqueness-imported steps appear in the abstract or described methodology.
Axiom & Free-Parameter Ledger
free parameters (1)
- patch topology and pixel budget
axioms (1)
- domain assumption: physically printed patches can approximate the digital adversarial perturbations under real capture conditions
invented entities (3)
- ASIT module: no independent evidence
- RaS module: no independent evidence
- MS-DIFE module: no independent evidence
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel (tag: unclear)
Relation between the paper passage and the cited Recognition theorem is unclear.
Linked passage: "CAAP integrates ASIT for input-conditioned patch rendering, RaS for stochastic capture-aware simulation, and MS-DIFE for feature-level identity-disruptive guidance... cross-shaped patch topology"
- IndisputableMonolith/Foundation/ArrowOfTime.lean · forward_accumulates (tag: unclear)
Relation between the paper passage and the cited Recognition theorem is unclear.
Linked passage: "We optimize P under an expectation-over-transformation (EoT) formulation... stochastic capture model parameterized by ξ"
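For context, the EoT formulation the quoted passage describes can be sketched on a toy model: the expected loss over a random capture parameter ξ (here a simple brightness gain, standing in for the paper's capture model) is ascended with Monte Carlo gradient estimates. The toy recognizer, the sampling ranges, and the perturbation budget below are all assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
w = rng.normal(size=d)                                # toy recognizer weights
x = rng.normal(size=d) * 0.1 + w / np.linalg.norm(w)  # input scored as genuine

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x_adv):
    """Negative log-likelihood of the genuine label under the toy model."""
    return -np.log(sigmoid(w @ x_adv) + 1e-12)

delta = np.zeros(d)            # the adversarial perturbation ("patch")
lr, n_samples = 0.05, 32
loss_before = np.mean([loss(x + rng.uniform(0.5, 1.5) * delta) for _ in range(256)])

for _ in range(100):
    grad = np.zeros(d)
    for _ in range(n_samples):               # Monte Carlo expectation over xi
        xi = rng.uniform(0.5, 1.5)           # random capture gain, stand-in for t_xi
        p = sigmoid(w @ (x + xi * delta))
        grad += xi * (p - 1.0) * w           # dL/d(delta) for this draw
    delta += lr * grad / n_samples           # ascend the expected (untargeted) loss
    delta = np.clip(delta, -0.5, 0.5)        # crude perturbation budget

loss_after = np.mean([loss(x + rng.uniform(0.5, 1.5) * delta) for _ in range(256)])
print(f"expected loss before: {loss_before:.3f}, after: {loss_after:.3f}")
```

The key EoT property is that the inner loop averages gradients over fresh draws of ξ, so the final perturbation stays effective across the whole sampled transformation distribution rather than at a single fixed capture condition.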
What do these tags mean?
- matches: the paper's claim is directly supported by a theorem in the formal canon.
- supports: the theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: the paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: the paper appears to rely on the theorem as machinery.
- contradicts: the paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.