Recognition: no theorem link
Continual Few-shot Adaptation for Synthetic Fingerprint Detection
Pith reviewed 2026-05-15 10:51 UTC · model grok-4.3
The pith
A fingerprint detector adapts to new synthetic styles with few examples while retaining performance on known ones.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The proposed continual few-shot adaptation method, which employs binary cross-entropy and supervised contrastive losses together with replay of a few samples from known styles during fine-tuning, enables a base detector to rapidly adapt to new synthetic fingerprint styles while mitigating catastrophic forgetting of known styles.
What carries the argument
The combination of binary cross-entropy and supervised contrastive losses on feature representations, paired with replay of a few prior samples during fine-tuning.
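The abstract does not state the objective in equation form; the combination it describes can be sketched roughly as below. The supervised contrastive term follows its standard form (Khosla et al., ref [29]); the weighting `lam` and the temperature are assumed hyperparameters, not values taken from the paper.

```python
import numpy as np

def bce_loss(probs, labels):
    # Binary cross-entropy over real (0) vs. synthetic (1) labels.
    eps = 1e-12
    return -np.mean(labels * np.log(probs + eps)
                    + (1 - labels) * np.log(1 - probs + eps))

def supcon_loss(features, labels, temperature=0.1):
    # Supervised contrastive loss on L2-normalised feature embeddings:
    # pulls same-label embeddings together, pushes different labels apart.
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    total = 0.0
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue  # no positives for this anchor in the batch
        others = [j for j in range(n) if j != i]
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        total += -np.mean([sim[i, j] - log_denom for j in pos])
    return total / n

def total_loss(probs, features, labels, lam=1.0):
    # lam (the BCE/SupCon trade-off) is an assumed hyperparameter.
    return bce_loss(probs, labels) + lam * supcon_loss(features, labels)
```

In this sketch the BCE term drives the real/synthetic decision while the contrastive term shapes the feature space so that styles stay separable, which is what the replay samples then anchor.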
Load-bearing premise
Replaying only a few samples from previously known styles during fine-tuning is sufficient to mitigate catastrophic forgetting while still enabling rapid adaptation to new synthetic styles with limited data.
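The premise can be made concrete with a mixing sketch for the fine-tuning batches; the `replay_frac` ratio is our assumption, since the abstract only says "a few samples" are replayed.

```python
import random

def make_finetune_batches(new_samples, replay_buffer, batch_size=8, replay_frac=0.25):
    # Mix few-shot samples of the new synthetic style with a handful of
    # replayed samples from previously known styles. replay_frac is an
    # assumed ratio, not a value reported by the paper.
    n_replay = max(1, int(batch_size * replay_frac)) if replay_buffer else 0
    batch = random.sample(new_samples, min(batch_size - n_replay, len(new_samples)))
    if n_replay:
        batch += random.sample(replay_buffer, min(n_replay, len(replay_buffer)))
    random.shuffle(batch)
    return batch
```

With no replay buffer the batch degenerates to pure few-shot fine-tuning, which is exactly the no-replay baseline the referee asks for below.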
What would settle it
After fine-tuning on a new synthetic style with few examples plus replay, measure whether accuracy on a held-out test set of previously seen styles falls below the level achieved by the original base model.
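The settling test reduces to a per-style accuracy comparison; a one-function sketch (style names and the function itself are illustrative, not from the paper):

```python
def retention_drop(base_acc, adapted_acc):
    # Per-style accuracy drop on held-out sets of previously seen styles.
    # A positive drop means the adapted model fell below the original base
    # detector on that style (forgetting); <= 0 means retention held.
    return {style: base_acc[style] - adapted_acc[style] for style in base_acc}
```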
Original abstract
The quality and realism of synthetically generated fingerprint images have increased significantly over the past decade fueled by advancements in generative artificial intelligence (GenAI). This has exacerbated the vulnerability of fingerprint recognition systems to data injection attacks, where synthetic fingerprints are maliciously inserted during enrollment or authentication. Hence, there is an urgent need for methods to detect if a fingerprint image is real or synthetic. While it is straightforward to train deep neural network (DNN) models to classify images as real or synthetic, often such DNN models overfit the training data and fail to generalize well when applied to synthetic fingerprints generated using unseen GenAI models. In this work, we formulate synthetic fingerprint detection as a continual few-shot adaptation problem, where the objective is to rapidly evolve a base detector to identify new types of synthetic data. To enable continual few-shot adaptation, we employ a combination of binary cross-entropy and supervised contrastive (applied to the feature representation) losses and replay a few samples from previously known styles during fine-tuning to mitigate catastrophic forgetting. Experiments based on several DNN backbones (as feature extractors) and a variety of real and synthetic fingerprint datasets indicate that the proposed approach achieves a good trade-off between fast adaptation for detecting unseen synthetic styles and forgetting of known styles.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper formulates synthetic fingerprint detection as a continual few-shot adaptation problem. It proposes combining binary cross-entropy loss with supervised contrastive loss on feature representations, plus replay of a few samples from previously seen styles during fine-tuning, to enable rapid adaptation to new synthetic fingerprint styles while mitigating catastrophic forgetting. Experiments across multiple DNN backbones and real/synthetic fingerprint datasets are reported to demonstrate a good trade-off between fast adaptation to unseen styles and retention of performance on known styles.
Significance. If the empirical results hold under proper controls, the work would address a timely security problem in biometric systems by providing a practical continual-learning approach for detectors facing rapidly evolving GenAI-generated attacks. The combination of contrastive regularization and limited replay could offer a lightweight alternative to full retraining, with potential applicability beyond fingerprints to other image-based detection tasks requiring few-shot style adaptation.
Major comments (2)
- [Experiments] The central claim that the method achieves a 'good trade-off' between adaptation and forgetting is load-bearing on the replay component, yet the manuscript states no quantitative replay buffer size, provides no ablation across buffer sizes or selection strategies, and includes no no-replay baseline measuring accuracy drop on prior styles after each adaptation step. Without these, the experiments cannot isolate whether stability arises from replay, the contrastive term, or dataset characteristics.
- [Method] The abstract and method description assert that 'replaying a few samples' suffices to mitigate forgetting, but no explicit value for the replay count, no comparison to standard continual-learning baselines (e.g., EWC or full rehearsal), and no per-style accuracy curves before/after adaptation are supplied. This leaves the sufficiency of the 'few samples' assumption unverified.
Minor comments (2)
- [Abstract] The abstract claims 'experiments show a good trade-off' but reports no numerical metrics, error bars, dataset sizes, or ablation tables, making it difficult to evaluate the strength of the evidence from the summary alone.
- [Method] Notation for the supervised contrastive loss and its weighting relative to BCE is introduced without an equation number or explicit formulation, which hinders reproducibility.
Simulated Author's Rebuttal
We thank the referee for the constructive comments, which highlight important aspects needed to strengthen the empirical support for our claims. We agree that additional quantitative details, ablations, and baselines are required and will revise the manuscript accordingly to address these points.
Point-by-point responses
Referee: [Experiments] The central claim that the method achieves a 'good trade-off' between adaptation and forgetting is load-bearing on the replay component, yet the manuscript states no quantitative replay buffer size, provides no ablation across buffer sizes or selection strategies, and includes no no-replay baseline measuring accuracy drop on prior styles after each adaptation step. Without these, the experiments cannot isolate whether stability arises from replay, the contrastive term, or dataset characteristics.
Authors: We acknowledge this observation and agree that isolating the replay contribution is important. In the revised manuscript, we will explicitly state the replay buffer size used in all experiments (5 samples per prior style). We will add an ablation study varying buffer sizes (1, 3, 5, and 10 samples) and comparing selection strategies (random vs. representative sampling via feature clustering). We will also include a no-replay baseline that measures accuracy on prior styles after each adaptation step, allowing direct comparison to the full method and isolating the roles of replay versus the supervised contrastive term.
Revision: yes
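The "representative sampling via feature clustering" the authors promise is not specified further; one plausible reading is a tiny k-means over feature embeddings of a known style, keeping the sample nearest each centroid as a replay exemplar. The function name and clustering details below are our assumption, not the paper's method.

```python
import numpy as np

def representative_replay(features, k=5, n_iter=10, seed=0):
    # Hypothetical exemplar selection: cluster feature embeddings of one
    # known style with a small k-means, then return the index of the
    # sample closest to each centroid as the replay exemplar.
    rng = np.random.default_rng(seed)
    feats = np.asarray(features, dtype=float)
    centroids = feats[rng.choice(len(feats), size=k, replace=False)].copy()
    for _ in range(n_iter):
        dists = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for c in range(k):
            members = feats[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    dists = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
    return sorted(set(int(i) for i in dists.argmin(axis=0)))
```

Random sampling, the other strategy named in the rebuttal, is simply `rng.choice(len(features), size=k, replace=False)`.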
Referee: [Method] The abstract and method description assert that 'replaying a few samples' suffices to mitigate forgetting, but no explicit value for the replay count, no comparison to standard continual-learning baselines (e.g., EWC or full rehearsal), and no per-style accuracy curves before/after adaptation are supplied. This leaves the sufficiency of the 'few samples' assumption unverified.
Authors: We agree that explicit values and comparisons are needed to verify the 'few samples' claim. We will update the method section and abstract to state the replay count explicitly (5 samples per style). We will add experimental comparisons to standard continual-learning baselines including Elastic Weight Consolidation (EWC) and full rehearsal. We will also include per-style accuracy curves showing performance on known styles immediately before and after each adaptation step to illustrate the degree of forgetting mitigation achieved.
Revision: yes
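EWC (Kirkpatrick et al., ref [16]), the baseline promised here, replaces replay with a quadratic penalty anchoring parameters that were important for old styles. A minimal sketch of that penalty; the strength `lam` is an assumed hyperparameter.

```python
import numpy as np

def ewc_penalty(params, anchor_params, fisher_diag, lam=100.0):
    # EWC regulariser: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2,
    # where theta* are the parameters after training on old styles and
    # F is a diagonal Fisher information estimate of their importance.
    diff = np.asarray(params) - np.asarray(anchor_params)
    return 0.5 * lam * float(np.sum(np.asarray(fisher_diag) * diff ** 2))
```

The penalty is added to the adaptation loss, so parameters with high Fisher weight resist drifting away from their old-style optimum.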
Circularity Check
No significant circularity detected; the method is stated as a self-contained description.
Full rationale
The paper describes a continual few-shot adaptation approach using binary cross-entropy combined with supervised contrastive loss plus replay of prior samples. No equations, derivations, or predictions are present that reduce by construction to fitted inputs or self-citations. The method is stated independently of experimental outcomes, with no load-bearing uniqueness theorems, ansatzes smuggled via prior self-work, or renaming of known results. Experiments are reported as empirical validation rather than forced by the formulation itself. This matches the default expectation of non-circularity for a methods paper without mathematical reduction steps.
Axiom & Free-Parameter Ledger
Axioms (2)
- Domain assumption: Supervised contrastive loss improves feature representations for classification tasks.
- Domain assumption: Replaying a few old samples mitigates catastrophic forgetting during fine-tuning.
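For reference, the first assumption refers to the supervised contrastive loss of Khosla et al. (ref [29]); in its standard form, with normalised embeddings $z_i$, temperature $\tau$, positives $P(i)$ sharing anchor $i$'s label, and $A(i)$ all other samples in the batch:

```latex
\mathcal{L}_{\mathrm{supcon}}
  = \sum_{i \in I} \frac{-1}{|P(i)|} \sum_{p \in P(i)}
    \log \frac{\exp(z_i \cdot z_p / \tau)}
              {\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)}
```

Whether the paper uses exactly this formulation is not stated in the abstract; this is the canonical form the citation points to.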
Reference graph
Works this paper leans on
- [1] A. J. Mansfield and J. L. Wayman, “Best practices in testing and reporting performance of biometric devices,” 2002.
- [2] T. Chugh, K. Cao, and A. K. Jain, “Fingerprint spoof buster: Use of minutiae-centered patches,” IEEE Transactions on Information Forensics and Security, vol. 13, no. 9, pp. 2190–2202, 2018.
- [3] A. K. Jain, A. Ross, and U. Uludag, “Biometric template security: Challenges and solutions,” in IEEE 13th European Signal Processing Conference, 2005, pp. 1–4.
- [4] R. Cappelli, D. Maio, and D. Maltoni, “Synthetic fingerprint-database generation,” in IEEE ICPR, vol. 3, 2002, pp. 744–747.
- [5] J. J. Engelsma, S. Grosz, and A. K. Jain, “PrintsGAN: Synthetic fingerprint generator,” IEEE TPAMI, vol. 45, no. 5, pp. 6111–6124, 2022.
- [6] V. Mistry, J. J. Engelsma, and A. K. Jain, “Fingerprint synthesis: Search with 100 million prints,” in IEEE IJCB, 2020, pp. 1–10.
- [7] A. Shoshan, N. Bhonker, E. Ben Baruch, O. Nizan, I. Kviatkovsky, J. Engelsma, M. Aggarwal, and G. Medioni, “FPGAN-Control: A controllable fingerprint generator for training with synthetic data,” in IEEE/CVF WACV, 2024, pp. 6067–6076.
- [8] S. A. Grosz and A. K. Jain, “Universal fingerprint generation: Controllable diffusion model with multimodal conditions,” IEEE TPAMI, 2024.
- [9] A. Rattani, W. J. Scheirer, and A. Ross, “Open set fingerprint spoof detection across novel fabrication materials,” IEEE Transactions on Information Forensics and Security, 2015.
- [10] S. Wu, J. Liu, J. Li, and Y. Wang, “Few-shot learner generalizes across AI-generated image detection,” in ICML, PMLR, 2025, pp. 67449–67460.
- [11] J. Cao, C. Ma, T. Yao, S. Chen, S. Ding, and X. Yang, “End-to-end reconstruction-classification learning for face forgery detection,” in IEEE/CVF CVPR, 2022, pp. 4113–4122.
- [12] R. Shukla, A. Sinha, V. Singh, and H. Kaur, “Vikriti-ID: A novel approach for real looking fingerprint data-set generation,” in IEEE/CVF WACV, 2024, pp. 6395–6403.
- [13]
- [14] T. Chugh and A. K. Jain, “Fingerprint spoof generalization,” arXiv preprint arXiv:1912.02710, 2019.
- [15] S. A. Grosz, T. Chugh, and A. K. Jain, “Fingerprint presentation attack detection: A sensor and material agnostic approach,” in IEEE IJCB, 2020, pp. 1–10.
- [16] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska et al., “Overcoming catastrophic forgetting in neural networks,” Proceedings of the National Academy of Sciences, 2017.
- [17] A. Chaudhry, M. Rohrbach, M. Elhoseiny, T. Ajanthan, P. K. Dokania, P. H. Torr, and M. Ranzato, “On tiny episodic memories in continual learning,” arXiv preprint arXiv:1902.10486, 2019.
- [18] S. Hou, X. Pan, C. C. Loy, Z. Wang, and D. Lin, “Learning a unified classifier incrementally via rebalancing,” in IEEE/CVF CVPR, 2019.
- [19] J. Yoon, E. Yang, J. Lee, and S. J. Hwang, “Lifelong learning with dynamically expandable networks,” arXiv preprint arXiv:1708.01547, 2017.
- [20] J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,” NeurIPS, 2017.
- [21] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra et al., “Matching networks for one shot learning,” NeurIPS, vol. 29, 2016.
- [22] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” in ICML, PMLR, 2017.
- [23] A. Raghu, M. Raghu, S. Bengio, and O. Vinyals, “Rapid learning or feature reuse? Towards understanding the effectiveness of MAML,” arXiv preprint arXiv:1909.09157, 2019.
- [24] A. Rossler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner, “FaceForensics++: Learning to detect manipulated facial images,” in IEEE/CVF ICCV, 2019, pp. 1–11.
- [25] D. Karageorgiou, S. Papadopoulos, I. Kompatsiaris, and E. Gavves, “Any-resolution AI-generated image detection by spectral learning,” in IEEE/CVF CVPR, 2025, pp. 18706–18717.
- [26] K. Cao and A. Jain, “Fingerprint synthesis: Evaluating fingerprint search at scale,” in IEEE International Conference on Biometrics (ICB), 2018.
- [27] A. Sankaran, M. Vatsa, and R. Singh, “Multisensor optical and latent fingerprint database,” IEEE Access, vol. 3, pp. 653–665, 2015.
- [28] X. Liu, K. Raja, R. Wang, H. Qiu, H. Wu, D. Sun, Q. Zheng, N. Liu, X. Wang, G. Huang et al., “A latent fingerprint in the wild database,” IEEE Transactions on Information Forensics and Security, 2024.
- [29] P. Khosla, P. Teterwak, C. Wang, A. Sarna, Y. Tian, P. Isola, A. Maschinot, C. Liu, and D. Krishnan, “Supervised contrastive learning,” NeurIPS, vol. 33, pp. 18661–18673, 2020.
- [30] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark et al., “Learning transferable visual models from natural language supervision,” in ICML, PMLR, 2021, pp. 8748–8763.
- [31] M. Oquab, T. Darcet, T. Moutakanni, H. Vo, M. Szafraniec, V. Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby et al., “DINOv2: Learning robust visual features without supervision,” arXiv preprint arXiv:2304.07193, 2023.
Discussion (0)