pith. machine review for the scientific record. sign in

arxiv: 2604.06987 · v1 · submitted 2026-04-08 · 💻 cs.CV · cs.AI· cs.CR

Recognition: 2 theorem links

· Lean Theorem

CAAP: Capture-Aware Adversarial Patch Attacks on Palmprint Recognition Models

Authors on Pith no claims yet

Pith reviewed 2026-05-10 17:52 UTC · model grok-4.3

classification 💻 cs.CV cs.AIcs.CR
keywords adversarial patch attackspalmprint recognitionphysical attacksadversarial robustnessbiometric securitytransferabilitycapture simulation
0
0 comments X

The pith

CAAP shows that capture-aware adversarial patches can reliably compromise palmprint recognition models even after adversarial training.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops CAAP to create reusable adversarial patches that attack palmprint recognition while accounting for variations in physical image capture. It uses a cross-shaped patch design to disrupt texture patterns and includes modules that render patches based on input images and simulate real acquisition distortions. Tests on three datasets against multiple models demonstrate high success for both untargeted and targeted attacks, plus transferability across models and datasets. The results indicate that adversarial training reduces but does not remove the vulnerability, leaving palmprint systems open to physical attacks in security settings.

Core claim

CAAP learns a universal cross-shaped adversarial patch that remains effective under realistic acquisition variation by combining input-conditioned rendering, stochastic capture simulation, and feature-level identity disruption. Experiments on the Tongji, IITD, and AISEC datasets show strong untargeted and targeted attack performance against both generic CNNs and palmprint-specific models, with favorable cross-model and cross-dataset transferability. Adversarial training lowers attack success rates but leaves substantial residual vulnerability, indicating that deep palmprint systems are not robust against physically realizable capture-aware patches.

What carries the argument

The CAAP framework, built around a cross-shaped patch topology that covers more texture area under a fixed pixel budget, plus ASIT for input-conditioned rendering, RaS for stochastic capture-aware simulation of distortions, and MS-DIFE for feature-level identity-disruptive guidance.

Load-bearing premise

The stochastic capture-aware simulation and input-conditioned rendering accurately represent the distortions and variations that occur during actual physical palmprint image acquisition.

What would settle it

Printing the generated patches on physical media, placing them on real palms, and capturing images with actual palmprint scanners to check whether the observed attack success rates match the simulated ones.

Figures

Figures reproduced from arXiv: 2604.06987 by Cong Wu, Jiale Li, Jie Zhang, Kwok-Yan Lam, Renyang Liu, See-kiong Ng, Shuxin Li, Wei Zhou, Xiaojun Jia.

Figure 1
Figure 1. Figure 1: Training framework of CAAP. A universal cross-shaped patch, specified by a fixed mask [PITH_FULL_IMAGE:figures/full_fig_p005_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Attacking phase of CAAP. After training, the patch texture and [PITH_FULL_IMAGE:figures/full_fig_p006_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: Pairwise cross-model transferability. Each heatmap reports ASR (%) [PITH_FULL_IMAGE:figures/full_fig_p009_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: ASR of CAAP on three palmprint-specific recognizers before and [PITH_FULL_IMAGE:figures/full_fig_p010_4.png] view at source ↗
Figure 5
Figure 5. Figure 5: Identity-level ASR of physical untargeted and targeted attacks by [PITH_FULL_IMAGE:figures/full_fig_p010_5.png] view at source ↗
Figure 8
Figure 8. Figure 8: Effect of patch placement position. ASR (%) is reported for attention [PITH_FULL_IMAGE:figures/full_fig_p011_8.png] view at source ↗
Figure 7
Figure 7. Figure 7: Ablation on patch shape. ASR (%) is reported for four patch [PITH_FULL_IMAGE:figures/full_fig_p011_7.png] view at source ↗
Figure 9
Figure 9. Figure 9: Hyper-parameter sensitivity on Tongji. Untargeted ASR (%) is reported for CCNet, CompNet, and CO3Net, together with their mean, under one-at [PITH_FULL_IMAGE:figures/full_fig_p012_9.png] view at source ↗
read the original abstract

Palmprint recognition is deployed in security-critical applications, including access control and palm-based payment, due to its contactless acquisition and highly discriminative ridge-and-crease textures. However, the robustness of deep palmprint recognition systems against physically realizable attacks remains insufficiently understood. Existing studies are largely confined to the digital setting and do not adequately account for the texture-dominant nature of palmprint recognition or the distortions introduced during physical acquisition. To address this gap, we propose CAAP, a capture-aware adversarial patch framework for palmprint recognition. CAAP learns a universal patch that can be reused across inputs while remaining effective under realistic acquisition variation. To match the structural characteristics of palmprints, the framework adopts a cross-shaped patch topology, which enlarges spatial coverage under a fixed pixel budget and more effectively disrupts long-range texture continuity. CAAP further integrates three modules: ASIT for input-conditioned patch rendering, RaS for stochastic capture-aware simulation, and MS-DIFE for feature-level identity-disruptive guidance. We evaluate CAAP on the Tongji, IITD, and AISEC datasets against generic CNN backbones and palmprint-specific recognition models. Experiments show that CAAP achieves strong untargeted and targeted attack performance with favorable cross-model and cross-dataset transferability. The results further show that, although adversarial training can partially reduce the attack success rate, substantial residual vulnerability remains. These findings indicate that deep palmprint recognition systems remain vulnerable to physically realizable, capture-aware adversarial patch attacks, underscoring the need for more effective defenses in practice. Code available at https://github.com/ryliu68/CAAP.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 3 minor

Summary. The paper proposes CAAP, a capture-aware adversarial patch framework targeting palmprint recognition models. It employs a cross-shaped patch topology to disrupt long-range texture features under a fixed pixel budget and integrates three modules: ASIT for input-conditioned patch rendering, RaS for stochastic simulation of capture variations, and MS-DIFE for feature-level identity disruption. Experiments on the Tongji, IITD, and AISEC datasets against generic CNNs and palmprint-specific models report strong untargeted and targeted attack success rates, favorable cross-model and cross-dataset transferability, and that adversarial training only partially reduces vulnerability.

Significance. If the RaS and ASIT modules prove to faithfully model real acquisition distortions, the work would be significant for biometric security, as it demonstrates residual vulnerabilities in deployed palmprint systems for access control and payments, extending adversarial patch research to texture-dominant modalities and motivating improved defenses.

major comments (3)
  1. [Abstract and Experimental Evaluation] Abstract and Experimental Evaluation: The central claim that CAAP produces 'physically realizable' attacks with 'strong' performance and 'substantial residual vulnerability' after adversarial training is supported solely by digital experiments applying RaS stochastic simulations and ASIT rendering to images from the Tongji, IITD, and AISEC datasets. No physical experiments (printed patches, real palm captures under varied lighting/angles/sensors) are described. This is load-bearing for the physical-attack conclusions, as any mismatch between simulated and actual distortions (e.g., reflectance, blur, sensor noise) would prevent the reported ASR and transferability from translating to real-world settings.
  2. [Method and Experimental Evaluation] Method and Experimental Evaluation: The RaS, ASIT, and MS-DIFE modules are presented as key innovations for capture-aware attacks, yet the manuscript provides no ablation studies, sensitivity analysis, or comparisons to standard adversarial patch baselines (e.g., without the cross-shaped topology or stochastic simulation). This makes it unclear whether the claimed performance gains are attributable to these components or to generic optimization, undermining assessment of the framework's novelty and necessity.
  3. [Abstract] Abstract: The abstract states that 'experiments show that CAAP achieves strong untargeted and targeted attack performance' and 'favorable cross-model and cross-dataset transferability' but reports no specific quantitative metrics (ASR values, baselines, error bars, or dataset-specific results). This absence hinders evaluation of the magnitude and consistency of the findings across the three datasets and multiple models.
minor comments (3)
  1. [Abstract] The abstract would be strengthened by including at least one or two key numerical results (e.g., average ASR percentages) to convey the scale of the reported attacks.
  2. [Method] Ensure all acronyms (ASIT, RaS, MS-DIFE) are expanded at first use and that module hyperparameters and implementation details are provided in sufficient detail for reproducibility.
  3. [Experimental Evaluation] Consider adding error bars or standard deviations to reported attack success rates and transferability metrics to indicate run-to-run or dataset variability.

Simulated Author's Rebuttal

3 responses · 1 unresolved

We thank the referee for the constructive and detailed comments. These have highlighted important aspects of our evaluation and presentation that we will address in the revision. Below we respond point-by-point to the major comments.

read point-by-point responses
  1. Referee: [Abstract and Experimental Evaluation] The central claim that CAAP produces 'physically realizable' attacks with 'strong' performance and 'substantial residual vulnerability' after adversarial training is supported solely by digital experiments applying RaS stochastic simulations and ASIT rendering to images from the Tongji, IITD, and AISEC datasets. No physical experiments (printed patches, real palm captures under varied lighting/angles/sensors) are described. This is load-bearing for the physical-attack conclusions, as any mismatch between simulated and actual distortions (e.g., reflectance, blur, sensor noise) would prevent the reported ASR and transferability from translating to real-world settings.

    Authors: We acknowledge that the manuscript relies exclusively on digital experiments with the RaS module to simulate capture variations rather than physical printing and recapture trials. The RaS component was designed to incorporate stochastic modeling of realistic acquisition factors (lighting, angle, sensor effects) drawn from palmprint literature, but we agree this does not fully substitute for physical validation. In the revised manuscript we will (1) explicitly qualify all claims of physical realizability to reflect the simulation-based setting, (2) add a dedicated limitations subsection discussing potential gaps between simulated and real distortions, and (3) frame the work as demonstrating capture-aware digital attacks that serve as a necessary precursor to physical studies. Full physical experiments remain an important avenue for future work. revision: partial

  2. Referee: [Method and Experimental Evaluation] The RaS, ASIT, and MS-DIFE modules are presented as key innovations for capture-aware attacks, yet the manuscript provides no ablation studies, sensitivity analysis, or comparisons to standard adversarial patch baselines (e.g., without the cross-shaped topology or stochastic simulation). This makes it unclear whether the claimed performance gains are attributable to these components or to generic optimization, undermining assessment of the framework's novelty and necessity.

    Authors: We agree that the absence of ablations and baseline comparisons weakens the ability to attribute performance gains to the proposed components. In the revised manuscript we will add (1) ablation studies removing or replacing each of ASIT, RaS, and MS-DIFE individually, (2) sensitivity analysis on key hyperparameters of RaS and the cross-shaped topology, and (3) direct comparisons against standard adversarial patch baselines (e.g., rectangular patches optimized without stochastic simulation or cross-shaped constraints). These additions will clarify the necessity of the proposed modules. revision: yes

  3. Referee: [Abstract] The abstract states that 'experiments show that CAAP achieves strong untargeted and targeted attack performance' and 'favorable cross-model and cross-dataset transferability' but reports no specific quantitative metrics (ASR values, baselines, error bars, or dataset-specific results). This absence hinders evaluation of the magnitude and consistency of the findings across the three datasets and multiple models.

    Authors: We will revise the abstract to report concrete quantitative results, including untargeted and targeted attack success rates on the Tongji, IITD, and AISEC datasets, comparisons against relevant baselines, and summary statistics on cross-model and cross-dataset transferability. Error bars or standard deviations will be noted where applicable to convey consistency. revision: yes

standing simulated objections not resolved
  • The complete absence of physical experiments (printed patches and real palm captures) cannot be remedied within a standard revision without new data collection; we can only qualify the claims and discuss limitations.

Circularity Check

0 steps flagged

No circularity in proposed framework or experimental claims

full rationale

The paper is an empirical contribution that introduces CAAP (with ASIT, RaS, and MS-DIFE modules) and reports attack success rates, transferability, and residual vulnerability after adversarial training on Tongji/IITD/AISEC datasets. No derivation chain, first-principles result, or prediction is claimed; the central assertions rest on standard adversarial optimization and evaluation protocols applied to the proposed modules. The physical-attack conclusion depends on the (unverified) fidelity of the capture simulations rather than any reduction of outputs to fitted inputs or self-citations by construction. No self-definitional, fitted-input, or uniqueness-imported steps appear in the abstract or described methodology.

Axiom & Free-Parameter Ledger

1 free parameters · 1 axioms · 3 invented entities

The central claim rests on the effectiveness of newly introduced simulation and rendering modules plus the assumption that printed patches can realize the digital perturbations; no explicit free parameters are named in the abstract but typical adversarial training hyperparameters are implied.

free parameters (1)
  • patch topology and pixel budget
    Cross-shaped design chosen to enlarge coverage under fixed pixel constraint for palmprint textures.
axioms (1)
  • domain assumption Physically printed patches can approximate the digital adversarial perturbations under real capture conditions
    Required for the physical realizability claim in the abstract.
invented entities (3)
  • ASIT module no independent evidence
    purpose: Input-conditioned patch rendering
    New component for adapting the patch to each input image.
  • RaS module no independent evidence
    purpose: Stochastic capture-aware simulation
    New component to model acquisition variations during training.
  • MS-DIFE module no independent evidence
    purpose: Feature-level identity-disruptive guidance
    New component for guiding the attack at the feature level.

pith-pipeline@v0.9.0 · 5624 in / 1466 out tokens · 44384 ms · 2026-05-10T17:52:58.962407+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

53 extracted references

  1. [1]

    Contactless palmprint identification using deeply learned residual features,

    C. Liu and A. Kumar, “Contactless palmprint identification using deeply learned residual features,”IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 2, no. 2, pp. 172–181, 2020

  2. [2]

    A survey of palmprint recognition,

    A. Kong, D. Zhang, and M. Kamel, “A survey of palmprint recognition,” pattern recognition, pp. 1408–1418, 2009

  3. [3]

    Alipay launches contactless palm print payment system in china,

    ID Tech Wire, “Alipay launches contactless palm print payment system in china,” https://idtechwire.com/ alipay-launches-contactless-palm-print-payment-system-in-china/, Apr. 2025, accessed: 2026-03-06

  4. [4]

    Tencent partners with visa to bring palm payment to singapore,

    Visa, “Tencent partners with visa to bring palm payment to singapore,” https://www.visa.com.sg/about-visa/newsroom/press-releases/ tencent-partners-with-visa-to-bring-palm-payment-to-singapore.html, Nov. 2024, accessed: 2026-03-06

  5. [5]

    Scan your palm instead of swiping a card to pay at whole foods checkout,

    K. George, “Scan your palm instead of swiping a card to pay at whole foods checkout,” https://www.investopedia.com/ amazon-launches-palm-scanning-payments-at-all-whole-foods-7563543, Jul. 2023, accessed: 2026-03-06

  6. [6]

    Good now: Pay with your palm print at nus’ first unmanned store,

    Zaobao, “Good now: Pay with your palm print at nus’ first unmanned store,” https://www.zaobao.com.sg/znews/singapore/ story20190817-981511, Aug. 2019, accessed: 2026-03-06

  7. [7]

    Palmnet: Gabor- pca convolutional networks for touchless palmprint recognition,

    A. Genovese, V . Piuri, K. N. Plataniotis, and F. Scotti, “Palmnet: Gabor- pca convolutional networks for touchless palmprint recognition,”IEEE Transactions on Information Forensics and Security, pp. 3160–3174, 2019

  8. [8]

    Comprehensive competition mechanism in palmprint recognition,

    Z. Yang, H. Huangfu, L. Leng, B. Zhang, A. B. J. Teoh, and Y . Zhang, “Comprehensive competition mechanism in palmprint recognition,” IEEE Transactions on Information Forensics and Security, vol. 18, pp. 5160–5170, 2023

  9. [9]

    Co3net: Coordinate-aware contrastive competitive neural network for palmprint recognition,

    Z. Yang, W. Xia, Y . Qiao, Z. Lu, B. Zhang, L. Leng, and Y . Zhang, “Co3net: Coordinate-aware contrastive competitive neural network for palmprint recognition,”IEEE Transactions on Instrumentation and Mea- surement, pp. 1–14, 2023

  10. [10]

    Boosting black-box attack to deep neural networks with conditional diffusion models,

    R. Liu, W. Zhou, T. Zhang, K. Chen, J. Zhao, and K. Lam, “Boosting black-box attack to deep neural networks with conditional diffusion models,”IEEE Transactions on Information Forensics and Security, pp. 5207–5219, 2024

  11. [11]

    Towards deep learning models resistant to adversarial attacks,

    A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” inICLR, 2018

  12. [12]

    Multi-spectral palmprints joint attack and defense with adversarial examples learning,

    Q. Zhu, Y . Zhou, L. Fei, D. Zhang, and D. Zhang, “Multi-spectral palmprints joint attack and defense with adversarial examples learning,” IEEE Transactions on Information Forensics and Security, vol. 18, pp. 1789–1799, 2023

  13. [13]

    Presentation attacks in palmprint recognition systems,

    Y . Sun and C. Wang, “Presentation attacks in palmprint recognition systems,”Journal of Multimedia Information System, pp. 103–112, 2022

  14. [14]

    Ordinal palmprint representation for personal identification,

    Z. Sun, T. Tan, Y . Wang, and S. Z. Li, “Ordinal palmprint representation for personal identification,” inCVPR, 2005, pp. 279–284

  15. [15]

    Palmprint verification based on robust line orientation code,

    W. Jia, D.-S. Huang, and D. Zhang, “Palmprint verification based on robust line orientation code,”Pattern Recognition, pp. 1504–1513, 2008

  16. [16]

    Adversarial patch,

    T. B. Brown, D. Man ´e, A. Roy, M. Abadi, and J. Gilmer, “Adversarial patch,”arXiv, 2017

  17. [17]

    Online palmprint identification,

    D. Zhang, W.-K. Kong, J. You, and M. Wong, “Online palmprint identification,”IEEE Transactions on pattern analysis and machine intelligence, pp. 1041–1050, 2003

  18. [18]

    Competitive coding scheme for palmprint verification,

    A.-K. Kong and D. Zhang, “Competitive coding scheme for palmprint verification,” inICPR, 2004, pp. 520–523

  19. [19]

    Palmprint identification using feature-level fusion,

    A. Kong, D. Zhang, and M. Kamel, “Palmprint identification using feature-level fusion,”Pattern Recognition, pp. 478–487, 2006

  20. [20]

    Contactless palmprint identification using deeply learned residual features,

    Y . Liu and A. Kumar, “Contactless palmprint identification using deeply learned residual features,”IEEE Transactions on Biometrics, Behavior, and Identity Science, pp. 172–181, 2020

  21. [21]

    Compnet: Competitive neural network for palmprint recognition using learnable gabor kernels,

    X. Liang, J. Yang, G. Lu, and D. Zhang, “Compnet: Competitive neural network for palmprint recognition using learnable gabor kernels,”IEEE Signal Processing Letters, pp. 1739–1743, 2021

  22. [22]

    Eepnet: An efficient and effective convolutional neural network for palmprint recognition,

    W. Jia, Q. Ren, Y . Zhao, S. Li, H. Min, and Y . Chen, “Eepnet: An efficient and effective convolutional neural network for palmprint recognition,”Pattern Recognition Letters, pp. 140–149, 2022

  23. [23]

    Towards open-set touchless palmprint recog- nition via weight-based meta metric learning,

    H. Shao and D. Zhong, “Towards open-set touchless palmprint recog- nition via weight-based meta metric learning,”Pattern Recognition, p. 108247, 2022

  24. [24]

    Mobile contactless palmprint recognition: Use of multiscale, multimodel embeddings,

    S. A. Grosz, A. Godbole, and A. K. Jain, “Mobile contactless palmprint recognition: Use of multiscale, multimodel embeddings,”IEEE Trans- actions on Information Forensics and Security, pp. 8428–8440, 2024

  25. [25]

    Contactless palmprint image recognition across smartphones with self- paced cyclegan,

    Q. Zhu, G. Xin, L. Fei, D. Liang, Z. Zhang, D. Zhang, and D. Zhang, “Contactless palmprint image recognition across smartphones with self- paced cyclegan,”IEEE Transactions on Information Forensics and Security, vol. 18, pp. 4944–4954, 2023

  26. [26]

    Contactless palmprint biometrics using deepnet with dedicated assistant layers,

    T. Chai, S. Prasad, J. Yan, and Z. Zhang, “Contactless palmprint biometrics using deepnet with dedicated assistant layers,”The Visual Computer, pp. 4029–4047, 2023

  27. [27]

    Structure suture learning-based robust multiview palmprint recognition,

    S. Zhao, L. Fei, J. Wen, B. Zhang, P. Zhao, and S. Li, “Structure suture learning-based robust multiview palmprint recognition,”IEEE Transactions on Neural Networks and Learning Systems, pp. 8401–8413, 2022

  28. [28]

    STBA: towards evaluating the robustness of dnns for query-limited black-box scenario,

    R. Liu, K. Lam, W. Zhou, S. Wu, J. Zhao, D. Hu, and M. Gong, “STBA: towards evaluating the robustness of dnns for query-limited black-box scenario,”IEEE Transactions on Multimedia, pp. 2666–2681, 2025

  29. [29]

    An enhanced palmprint adversarial attack against visible and invisible features,

    J. Cui, Q. Zhang, Z. Wang, J. Wang, and Q. Zhu, “An enhanced palmprint adversarial attack against visible and invisible features,” in ICME. IEEE, 2025, pp. 1–6

  30. [30]

    Palmprint anti-spoofing based on domain-adversarial training and online triplet mining,

    D. Yao, H. Shao, and D. Zhong, “Palmprint anti-spoofing based on domain-adversarial training and online triplet mining,” inICIP, 2023

  31. [31]

    A review on palmprint image-level attacks,

    Q. Zhang, K. Zheng, J. Xu, Y . Xu, and J. Cui, “A review on palmprint image-level attacks,” inCCBR, 2025, pp. 122–130

  32. [32]

    Adversarial examples in the physical world,

    A. Kurakin, I. J. Goodfellow, and S. Bengio, “Adversarial examples in the physical world,” inICLRW, 2018, pp. 99–112

  33. [33]

    Univer- sal adversarial perturbations,

    S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, “Univer- sal adversarial perturbations,” inCVPR, 2017, pp. 1765–1773

  34. [34]

    Synthesizing robust adversarial examples,

    A. Athalye, L. Engstrom, A. Ilyas, and K. Kwok, “Synthesizing robust adversarial examples,” inICML, 2018, pp. 284–293

  35. [35]

    Robust physical-world attacks on deep learning visual classification,

    K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song, “Robust physical-world attacks on deep learning visual classification,” inCVPR, 2018, pp. 1625–1634

  36. [36]

    Dpatch: An adversarial patch attack on object detectors,

    X. Liu, H. Yang, Z. Liu, L. Song, H. Li, and Y . Chen, “Dpatch: An adversarial patch attack on object detectors,”SafeAI@AAAI, 2019

  37. [37]

    Fooling automated surveil- lance cameras: adversarial patches to attack person detection,

    S. Thys, W. Van Ranst, and T. Goedem ´e, “Fooling automated surveil- lance cameras: adversarial patches to attack person detection,” in CVPRW, 2019, pp. 49–55

  38. [38]

    Perceptual- sensitive gan for generating adversarial patches,

    A. Liu, X. Liu, J. Fan, Y . Ma, A. Zhang, H. Xie, and D. Tao, “Perceptual- sensitive gan for generating adversarial patches,” inAAAI, 2019, pp. 1028–1035

  39. [39]

    Adversarial camouflage: Hiding physical-world attacks with natural styles,

    R. Duan, X. Ma, Y . Wang, J. Bailey, A. K. Qin, and Y . Yang, “Adversarial camouflage: Hiding physical-world attacks with natural styles,” inCVPR, 2020, pp. 1000–1008

  40. [40]

    Naturalistic physical adversarial patch for object detectors,

    Y .-C.-T. Hu, B.-H. Kung, D. S. Tan, J.-C. Chen, K.-L. Hua, and W.-H. Cheng, “Naturalistic physical adversarial patch for object detectors,” in ICCV, 2021, pp. 7848–7857

  41. [41]

    Cross-shaped adversarial patch attack,

    Y . Ran, W. Wang, M. Li, L.-C. Li, Y .-G. Wang, and J. Li, “Cross-shaped adversarial patch attack,”IEEE Transactions on Circuits and Systems for Video Technology, pp. 2289–2303, 2024

  42. [42]

    Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition,

    M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter, “Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition,” inCCS, 2016, pp. 1528–1540

  43. [43]

    Advhat: Real-world adversarial attack on arcface face ID system,

    S. Komkov and A. Petiushko, “Advhat: Real-world adversarial attack on arcface face ID system,” inICPR, 2020, pp. 819–826

  44. [44]

    Deep learning in palmprint recognition: A comprehensive survey,

    C. Gao, Z. Yang, W. Jia, L. Leng, B. Zhang, and A. B. J. Teoh, “Deep learning in palmprint recognition: A comprehensive survey,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2026

  45. [45]

    Towards contactless palmprint recognition: A novel device, a new benchmark, and a collabo- rative representation based identification approach,

    L. Zhang, L. Li, A. Yang, Y . Shen, and M. Yang, “Towards contactless palmprint recognition: A novel device, a new benchmark, and a collabo- rative representation based identification approach,”Pattern Recognition, pp. 199–212, 2017

  46. [46]

    Incorporating cohort information for reliable palmprint authentication,

    A. Kumar, “Incorporating cohort information for reliable palmprint authentication,” inICVGIP, 2008, pp. 583–590

  47. [47]

    Mobilenetv2: Inverted residuals and linear bottlenecks,

    M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “Mobilenetv2: Inverted residuals and linear bottlenecks,” inCVPR, 2018

  48. [48]

    Very deep convolutional networks for large-scale image recognition,

    K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” inICLR, 2015

  49. [49]

    Deep residual learning for image recognition,

    K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” inCVPR, 2016, pp. 770–778

  50. [50]

    Shufflenet v2: Practical guidelines for efficient cnn architecture design,

    N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, “Shufflenet v2: Practical guidelines for efficient cnn architecture design,” inECCV, 2018

  51. [51]

    Boosting adversarial attacks with momentum,

    Y . Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, “Boosting adversarial attacks with momentum,” inCVPR, 2018, pp. 9185–9193

  52. [52]

    Benchmarking adversarial patch against aerial detection,

    J. Lian, S. Mei, S. Zhang, and M. Ma, “Benchmarking adversarial patch against aerial detection,”IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–16, 2022

  53. [53]

    Advlogo: Adversarial patch attack against object detectors based on diffusion models,

    B. Miao, C. Li, Y . Zhu, W. Sun, Z. Wang, X. Wang, and C. Xie, “Advlogo: Adversarial patch attack against object detectors based on diffusion models,”arXiv, 2024