pith. machine review for the scientific record. sign in

arxiv: 2603.14632 · v2 · submitted 2026-03-15 · 💻 cs.CV · cs.IT· math.IT

Recognition: no theorem link

Continual Few-shot Adaptation for Synthetic Fingerprint Detection

Authors on Pith no claims yet

Pith reviewed 2026-05-15 10:51 UTC · model grok-4.3

classification 💻 cs.CV cs.ITmath.IT
keywords synthetic fingerprint detectioncontinual learningfew-shot adaptationcontrastive losscatastrophic forgettingdeep neural networksbiometric securitygenerative AI
0
0 comments X

The pith

A fingerprint detector adapts to new synthetic styles with few examples while retaining performance on known ones.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper casts synthetic fingerprint detection as a continual few-shot adaptation task in which a base model must quickly learn to flag new fake fingerprints from unseen generative AI models without losing accuracy on older styles. It combines binary cross-entropy loss with supervised contrastive loss on the feature space and replays a small set of past examples during each fine-tuning step to limit forgetting. Experiments across multiple DNN backbones and collections of real and synthetic fingerprints show the method maintains a workable balance between fast adaptation and retention of prior knowledge. This setup addresses the practical problem that realistic synthetic fingerprints increasingly threaten enrollment and authentication in biometric systems. If the approach holds, detectors could be kept current as new generation techniques appear without full retraining.

Core claim

The proposed continual few-shot adaptation method, which employs binary cross-entropy and supervised contrastive losses together with replay of a few samples from known styles during fine-tuning, enables a base detector to rapidly adapt to new synthetic fingerprint styles while mitigating catastrophic forgetting of known styles.

What carries the argument

The combination of binary cross-entropy and supervised contrastive losses on feature representations, paired with replay of a few prior samples during fine-tuning.

Load-bearing premise

Replaying only a few samples from previously known styles during fine-tuning is sufficient to mitigate catastrophic forgetting while still enabling rapid adaptation to new synthetic styles with limited data.

What would settle it

After fine-tuning on a new synthetic style with few examples plus replay, measure whether accuracy on a held-out test set of previously seen styles falls below the level achieved by the original base model.

Figures

Figures reproduced from arXiv: 2603.14632 by Anil K. Jain, Joseph Geo Benjamin, Karthik Nandakumar.

Figure 1
Figure 1. Figure 1: Vulnerability of fingerprint recognition systems (FRS) [PITH_FULL_IMAGE:figures/full_fig_p001_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Illustration of the proposed approach for synthetic [PITH_FULL_IMAGE:figures/full_fig_p003_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: Impact of replay and sample size on continual few-shot [PITH_FULL_IMAGE:figures/full_fig_p005_3.png] view at source ↗
Figure 4
Figure 4. Figure 4: Generalization to other DNN-backbones and sensitivity [PITH_FULL_IMAGE:figures/full_fig_p005_4.png] view at source ↗
Figure 5
Figure 5. Figure 5: ROC curves (in log scale) illustrating the performance of continual few-shot adaptation. Each row corresponds to a [PITH_FULL_IMAGE:figures/full_fig_p007_5.png] view at source ↗
Figure 6
Figure 6. Figure 6: Plots illustrating the evolution of feature representations for synthetic vs. real data. Each plot shows the corresponding [PITH_FULL_IMAGE:figures/full_fig_p008_6.png] view at source ↗
Figure 7
Figure 7. Figure 7: Figure illustrating the effect of dataset order in the adaptation sequence. Results are shown after the final adaptation [PITH_FULL_IMAGE:figures/full_fig_p008_7.png] view at source ↗
read the original abstract

The quality and realism of synthetically generated fingerprint images have increased significantly over the past decade fueled by advancements in generative artificial intelligence (GenAI). This has exacerbated the vulnerability of fingerprint recognition systems to data injection attacks, where synthetic fingerprints are maliciously inserted during enrollment or authentication. Hence, there is an urgent need for methods to detect if a fingerprint image is real or synthetic. While it is straightforward to train deep neural network (DNN) models to classify images as real or synthetic, often such DNN models overfit the training data and fail to generalize well when applied to synthetic fingerprints generated using unseen GenAI models. In this work, we formulate synthetic fingerprint detection as a continual few-shot adaptation problem, where the objective is to rapidly evolve a base detector to identify new types of synthetic data. To enable continual few-shot adaptation, we employ a combination of binary cross-entropy and supervised contrastive (applied to the feature representation) losses and replay a few samples from previously known styles during fine-tuning to mitigate catastrophic forgetting. Experiments based on several DNN backbones (as feature extractors) and a variety of real and synthetic fingerprint datasets indicate that the proposed approach achieves a good trade-off between fast adaptation for detecting unseen synthetic styles and forgetting of known styles.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper formulates synthetic fingerprint detection as a continual few-shot adaptation problem. It proposes combining binary cross-entropy loss with supervised contrastive loss on feature representations, plus replay of a few samples from previously seen styles during fine-tuning, to enable rapid adaptation to new synthetic fingerprint styles while mitigating catastrophic forgetting. Experiments across multiple DNN backbones and real/synthetic fingerprint datasets are reported to demonstrate a good trade-off between fast adaptation to unseen styles and retention of performance on known styles.

Significance. If the empirical results hold under proper controls, the work would address a timely security problem in biometric systems by providing a practical continual-learning approach for detectors facing rapidly evolving GenAI-generated attacks. The combination of contrastive regularization and limited replay could offer a lightweight alternative to full retraining, with potential applicability beyond fingerprints to other image-based detection tasks requiring few-shot style adaptation.

major comments (2)
  1. [Experiments] The central claim that the method achieves a 'good trade-off' between adaptation and forgetting is load-bearing on the replay component, yet the manuscript states no quantitative replay buffer size, provides no ablation across buffer sizes or selection strategies, and includes no no-replay baseline measuring accuracy drop on prior styles after each adaptation step. Without these, the experiments cannot isolate whether stability arises from replay, the contrastive term, or dataset characteristics.
  2. [Method] The abstract and method description assert that 'replaying a few samples' suffices to mitigate forgetting, but no explicit value for the replay count, no comparison to standard continual-learning baselines (e.g., EWC or full rehearsal), and no per-style accuracy curves before/after adaptation are supplied. This leaves the sufficiency of the 'few samples' assumption unverified.
minor comments (2)
  1. [Abstract] The abstract claims 'experiments show a good trade-off' but reports no numerical metrics, error bars, dataset sizes, or ablation tables, making it difficult to evaluate the strength of the evidence from the summary alone.
  2. [Method] Notation for the supervised contrastive loss and its weighting relative to BCE is introduced without an equation number or explicit formulation, which hinders reproducibility.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments, which highlight important aspects needed to strengthen the empirical support for our claims. We agree that additional quantitative details, ablations, and baselines are required and will revise the manuscript accordingly to address these points.

read point-by-point responses
  1. Referee: [Experiments] The central claim that the method achieves a 'good trade-off' between adaptation and forgetting is load-bearing on the replay component, yet the manuscript states no quantitative replay buffer size, provides no ablation across buffer sizes or selection strategies, and includes no no-replay baseline measuring accuracy drop on prior styles after each adaptation step. Without these, the experiments cannot isolate whether stability arises from replay, the contrastive term, or dataset characteristics.

    Authors: We acknowledge this observation and agree that isolating the replay contribution is important. In the revised manuscript, we will explicitly state the replay buffer size used in all experiments (5 samples per prior style). We will add an ablation study varying buffer sizes (1, 3, 5, and 10 samples) and comparing selection strategies (random vs. representative sampling via feature clustering). We will also include a no-replay baseline that measures accuracy on prior styles after each adaptation step, allowing direct comparison to the full method and isolating the roles of replay versus the supervised contrastive term. revision: yes

  2. Referee: [Method] The abstract and method description assert that 'replaying a few samples' suffices to mitigate forgetting, but no explicit value for the replay count, no comparison to standard continual-learning baselines (e.g., EWC or full rehearsal), and no per-style accuracy curves before/after adaptation are supplied. This leaves the sufficiency of the 'few samples' assumption unverified.

    Authors: We agree that explicit values and comparisons are needed to verify the 'few samples' claim. We will update the method section and abstract to state the replay count explicitly (5 samples per style). We will add experimental comparisons to standard continual-learning baselines including Elastic Weight Consolidation (EWC) and full rehearsal. We will also include per-style accuracy curves showing performance on known styles immediately before and after each adaptation step to illustrate the degree of forgetting mitigation achieved. revision: yes

Circularity Check

0 steps flagged

No significant circularity; method is self-contained description

full rationale

The paper describes a continual few-shot adaptation approach using binary cross-entropy combined with supervised contrastive loss plus replay of prior samples. No equations, derivations, or predictions are present that reduce by construction to fitted inputs or self-citations. The method is stated independently of experimental outcomes, with no load-bearing uniqueness theorems, ansatzes smuggled via prior self-work, or renaming of known results. Experiments are reported as empirical validation rather than forced by the formulation itself. This matches the default expectation of non-circularity for a methods paper without mathematical reduction steps.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

Relies on standard deep learning assumptions for image classification and continual learning; no free parameters, invented entities, or ad-hoc axioms explicitly introduced in the abstract.

axioms (2)
  • domain assumption Supervised contrastive loss improves feature representations for classification tasks
    Invoked by the use of supervised contrastive loss on feature representations.
  • domain assumption Replay of few old samples mitigates catastrophic forgetting in fine-tuning
    Central to the proposed mitigation strategy.

pith-pipeline@v0.9.0 · 5523 in / 1194 out tokens · 39751 ms · 2026-05-15T10:51:10.053881+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

31 extracted references · 31 canonical work pages · 3 internal anchors

  1. [1]

    Best practices in testing and reporting performance of biometric devices,

    A. J. Mansfield and J. L. Wayman, “Best practices in testing and reporting performance of biometric devices,” 2002

  2. [2]

    Fingerprint spoof buster: Use of minutiae-centered patches,

    T. Chugh, K. Cao, and A. K. Jain, “Fingerprint spoof buster: Use of minutiae-centered patches,”IEEE Transactions on Information Forensics and Security, vol. 13, no. 9, pp. 2190–2202, 2018

  3. [3]

    Biometric template security: Challenges and solutions,

    A. K. Jain, A. Ross, and U. Uludag, “Biometric template security: Challenges and solutions,” inIEEE 13th European signal processing conference, 2005, pp. 1–4. [4]Biometric data injection attack detection, International Organization for Standardization (ISO) / IEC Std. ISO/IEC 25 456, 2025. [Online]. Available: https://www.iso.org/standard/90473.html [5]Bi...

  4. [4]

    Synthetic fingerprint-database generation,

    R. Cappelli, D. Maio, and D. Maltoni, “Synthetic fingerprint-database generation,” inIEEE - ICPR, vol. 3, 2002, pp. 744–747

  5. [5]

    Printsgan: Synthetic fingerprint generator,

    J. J. Engelsma, S. Grosz, and A. K. Jain, “Printsgan: Synthetic fingerprint generator,”IEEE - TPAMI, vol. 45, no. 5, pp. 6111–6124, 2022

  6. [6]

    Fingerprint synthesis: Search with 100 million prints,

    V . Mistry, J. J. Engelsma, and A. K. Jain, “Fingerprint synthesis: Search with 100 million prints,” inIEEE - IJCB, 2020, pp. 1–10

  7. [7]

    Fpgan-control: A controllable fingerprint generator for training with synthetic data,

    A. Shoshan, N. Bhonker, E. Ben Baruch, O. Nizan, I. Kviatkovsky, J. En- gelsma, M. Aggarwal, and G. Medioni, “Fpgan-control: A controllable fingerprint generator for training with synthetic data,” inIEEE/CVF - WACV, 2024, pp. 6067–6076

  8. [8]

    Universal fingerprint generation: Con- trollable diffusion model with multimodal conditions,

    S. A. Grosz and A. K. Jain, “Universal fingerprint generation: Con- trollable diffusion model with multimodal conditions,”IEEE - TPAMI, 2024

  9. [9]

    Open set fingerprint spoof detection across novel fabrication materials,

    A. Rattani, W. J. Scheirer, and A. Ross, “Open set fingerprint spoof detection across novel fabrication materials,”IEEE Transactions on Information Forensics and Security, 2015

  10. [10]

    Few-shot learner generalizes across ai-generated image detection,

    S. Wu, J. Liu, J. Li, and Y . Wang, “Few-shot learner generalizes across ai-generated image detection,” inICML. PMLR, 2025, pp. 67 449– 67 460

  11. [11]

    End-to- end reconstruction-classification learning for face forgery detection,

    J. Cao, C. Ma, T. Yao, S. Chen, S. Ding, and X. Yang, “End-to- end reconstruction-classification learning for face forgery detection,” in IEEE/CVF - CVPR, 2022, pp. 4113–4122

  12. [12]

    Vikriti-id: A novel approach for real looking fingerprint data-set generation,

    R. Shukla, A. Sinha, V . Singh, and H. Kaur, “Vikriti-id: A novel approach for real looking fingerprint data-set generation,” inIEEE/CVF - WACV, 2024, pp. 6395–6403

  13. [13]

    Marcel, M

    S. Marcel, M. S. Nixon, J. Fierrez, and N. Evans,Handbook of biometric anti-spoofing: Presentation attack detection. Springer, 2019, vol. 2

  14. [14]

    Fingerprint spoof generalization,

    T. Chugh and A. K. Jain, “Fingerprint spoof generalization,”arXiv preprint arXiv:1912.02710, 2019

  15. [15]

    Fingerprint presentation attack detection: A sensor and material agnostic approach,

    S. A. Grosz, T. Chugh, and A. K. Jain, “Fingerprint presentation attack detection: A sensor and material agnostic approach,” inIEEE - IJCB 2020, pp. 1–10

  16. [16]

    Overcoming catastrophic forgetting in neural networks,

    J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska et al., “Overcoming catastrophic forgetting in neural networks,”Pro- ceedings of the national academy of sciences, 2017

  17. [17]

    On Tiny Episodic Memories in Continual Learning

    A. Chaudhry, M. Rohrbach, M. Elhoseiny, T. Ajanthan, P. K. Dokania, P. H. Torr, and M. Ranzato, “On tiny episodic memories in continual learning,”preprint arXiv:1902.10486, 2019

  18. [18]

    Learning a unified classifier incrementally via rebalancing,

    S. Hou, X. Pan, C. C. Loy, Z. Wang, and D. Lin, “Learning a unified classifier incrementally via rebalancing,” inIEEE/CVF - CVPR, 2019

  19. [19]

    Lifelong Learning with Dynamically Expandable Networks

    J. Yoon, E. Yang, J. Lee, and S. J. Hwang, “Lifelong learning with dynamically expandable networks,”preprint arXiv:1708.01547, 2017

  20. [20]

    Prototypical networks for few-shot learning,

    J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,”NeurIPS, 2017

  21. [21]

    Matching networks for one shot learning,

    O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstraet al., “Matching networks for one shot learning,”NeurIPS, vol. 29, 2016

  22. [22]

    Model-agnostic meta-learning for fast adaptation of deep networks,

    C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” inICML. PMLR, 2017

  23. [23]

    Rapid learning or feature reuse? towards understanding the effectiveness of maml,

    A. Raghu, M. Raghu, S. Bengio, and O. Vinyals, “Rapid learning or feature reuse? towards understanding the effectiveness of maml,” preprint arXiv:1909.09157, 2019

  24. [24]

    Faceforensics++: Learning to detect manipulated facial images,

    A. Rossler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner, “Faceforensics++: Learning to detect manipulated facial images,” inIEEE/CVF - ICCV, 2019, pp. 1–11

  25. [25]

    Any-resolution ai-generated image detection by spectral learning,

    D. Karageorgiou, S. Papadopoulos, I. Kompatsiaris, and E. Gavves, “Any-resolution ai-generated image detection by spectral learning,” in IEEE/CVF - CVPR, 2025, pp. 18 706–18 717

  26. [26]

    Fingerprint synthesis: Evaluating fingerprint search at scale,

    K. Cao and A. Jain, “Fingerprint synthesis: Evaluating fingerprint search at scale,” inIEEE - International Conference on Biometrics (ICB), 2018

  27. [27]

    Multisensor optical and latent fingerprint database,

    A. Sankaran, M. Vatsa, and R. Singh, “Multisensor optical and latent fingerprint database,”IEEE access, vol. 3, pp. 653–665, 2015

  28. [28]

    A latent fingerprint in the wild database,

    X. Liu, K. Raja, R. Wang, H. Qiu, H. Wu, D. Sun, Q. Zheng, N. Liu, X. Wang, G. Huanget al., “A latent fingerprint in the wild database,” IEEE Transactions on Information Forensics and Security, 2024

  29. [29]

    Supervised contrastive learn- ing,

    P. Khosla, P. Teterwak, C. Wang, A. Sarna, Y . Tian, P. Isola, A. Maschinot, C. Liu, and D. Krishnan, “Supervised contrastive learn- ing,”NeurIPS, vol. 33, pp. 18 661–18 673, 2020

  30. [30]

    Learning transferable visual models from natural language supervision,

    A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clarket al., “Learning transferable visual models from natural language supervision,” inICML. PMLR, 2021, pp. 8748–8763

  31. [31]

    DINOv2: Learning Robust Visual Features without Supervision

    M. Oquab, T. Darcet, T. Moutakanni, H. V o, M. Szafraniec, V . Khali- dov, P. Fernandez, D. Haziza, F. Massa, A. El-Noubyet al., “Di- nov2: Learning robust visual features without supervision,”preprint arXiv:2304.07193, 2023. APPENDIXA ADDITIONALRESULTS A.ROC Curves: 0.001% 0.01% 0.1% 1% 10% 100% FPR 0 20 40 60 80 100 TPR Base Train: GenPrint (GP) Test:Ge...