pith. machine review for the scientific record. sign in

arxiv: 2604.17961 · v1 · submitted 2026-04-20 · 💻 cs.CV

Recognition: unknown

DifFoundMAD: Foundation Models meet Differential Morphing Attack Detection

Authors on Pith no claims yet

Pith reviewed 2026-05-10 05:27 UTC · model grok-4.3

classification 💻 cs.CV
keywords morphing attack detectiondifferential MADvision foundation modelsface biometricslightweight fine-tuningcross-database evaluationbiometric security
0
0 comments X

The pith

DifFoundMAD adapts vision foundation model embeddings with lightweight fine-tuning to improve differential morphing attack detection over prior methods.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces DifFoundMAD as a parameter-efficient framework for differential morphing attack detection that replaces standard face recognition embeddings with those from general vision foundation models. It performs lightweight fine-tuning under class-balanced optimization to update only a small subset of parameters while retaining the models' broad priors. This yields consistent gains on cross-database benchmarks, especially at the low error rates demanded by high-security settings such as border control. Sympathetic readers would care because morphing attacks threaten the reliability of facial biometric systems, and the approach offers a practical way to strengthen detection without full model retraining.

Core claim

DifFoundMAD follows the standard differential paradigm but substitutes the representation space with embeddings from vision foundation models, achieving error-rate reductions from 6.16% to 2.17% at high-security thresholds through lightweight fine-tuning and class-balanced optimisation that preserves rich representational priors.

What carries the argument

The central mechanism is the substitution of conventional face recognition or handcrafted features with embeddings from vision foundation models inside the differential morphing attack detection pipeline, enabled by parameter-efficient fine-tuning.

Load-bearing premise

That embeddings from general vision foundation models contain the subtle discrepancies needed to distinguish morphs from live captures and that lightweight fine-tuning can reliably extract them across databases and capture conditions.

What would settle it

A fresh cross-database evaluation in which DifFoundMAD produces higher or equal error rates to existing state-of-the-art systems at the same high-security thresholds would falsify the claimed improvement.

Figures

Figures reproduced from arXiv: 2604.17961 by Andr\'e D\"orsch, Christian Rathgeb, Christoph Busch, Lazaro J. Gonzalez-Soler.

Figure 1
Figure 1. Figure 1: Overview of DifFoundMAD. Morphing artefacts are de [PITH_FULL_IMAGE:figures/full_fig_p003_1.png] view at source ↗
Figure 2
Figure 2. Figure 2: Example of BSs for two subjects contributing to different MAs for the FERET and FRGC databases. [PITH_FULL_IMAGE:figures/full_fig_p005_2.png] view at source ↗
Figure 3
Figure 3. Figure 3: DET curves comparing DifFoundMAD and the corre [PITH_FULL_IMAGE:figures/full_fig_p007_3.png] view at source ↗
read the original abstract

In this work, we introduce DifFoundMAD, a parameter-efficient D-MAD framework that exploits the generalisation capabilities of vision foundation models (FM) to capture discrepancies between suspected morphs and live capture images. In contrast to conventional D-MAD systems that rely on face recognition embeddings or handcrafted feature differences, DifFoundMAD follows the standard differential paradigm while replacing the underlying representation space with embeddings extracted from FMs. By combining lightweight finetuning with class-balanced optimisation, the proposed method updates only a small subset of parameters while preserving the rich representational priors of the underlying FMs. Extensive cross-database evaluations on standard D-MAD benchmarks demonstrate that DifFoundMAD achieves consistent improvements over state-of-the-art systems, particularly at the strict security levels required in operational deployments such as border control: The error rates reported in the current state-of-the-art were reduced from 6.16% to 2.17% for high-security levels using DifFoundMAD.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript presents DifFoundMAD, a parameter-efficient differential morphing attack detection (D-MAD) framework that replaces conventional face recognition embeddings with those from vision foundation models (FMs). It applies lightweight fine-tuning combined with class-balanced optimization to update only a small parameter subset, and reports consistent improvements over state-of-the-art systems on cross-database benchmarks, including a reduction in error rates from 6.16% to 2.17% at high-security operating points relevant to border control.

Significance. If the cross-database performance gains hold under detailed scrutiny, the work would be significant for showing how foundation models can be adapted efficiently to fine-grained biometric forensic tasks. The emphasis on parameter efficiency and generalization across capture conditions addresses practical constraints in operational security systems.

major comments (2)
  1. [§4 (Experiments)] §4 (Experiments): The central performance claim of reducing error rates from 6.16% to 2.17% at high-security levels is load-bearing, yet the manuscript provides no definition of the precise operating point (e.g., fixed BPCER threshold for APCER measurement), no error bars, no mention of the number of independent runs, and no statistical significance tests. This prevents assessment of whether the reported improvement is robust or driven by particular database splits.
  2. [§3 (Method)] §3 (Method): The key assumption that lightweight fine-tuning of general FM embeddings reliably encodes subtle morph-specific low-level cues (texture blending, landmark shifts) across databases is not supported by any ablation (e.g., frozen vs. tuned backbone performance) or feature-level analysis. Without such evidence, it remains possible that the gains arise from high-level semantic adaptation rather than the required morph artifacts, undermining generalization at strict security thresholds.
minor comments (2)
  1. [Abstract] Abstract: The acronym D-MAD is introduced without expansion on first use; expand at first occurrence for clarity.
  2. [Introduction] Introduction: The comparison to prior SOTA error rates would benefit from explicit citation of the exact papers and tables being referenced for the 6.16% baseline.

Simulated Author's Rebuttal

2 responses · 1 unresolved

We thank the referee for their insightful comments, which help improve the clarity and rigor of our work. We address the major comments point by point below. Where revisions are needed, we will incorporate them in the revised manuscript.

read point-by-point responses
  1. Referee: [§4 (Experiments)] The central performance claim of reducing error rates from 6.16% to 2.17% at high-security levels is load-bearing, yet the manuscript provides no definition of the precise operating point (e.g., fixed BPCER threshold for APCER measurement), no error bars, no mention of the number of independent runs, and no statistical significance tests. This prevents assessment of whether the reported improvement is robust or driven by particular database splits.

    Authors: We agree that explicitly defining the operating point is essential for reproducibility and assessment. In the revised version, we will clearly state that the high-security operating point refers to the APCER at a fixed BPCER of 0.1%, consistent with ISO/IEC standards for biometric performance evaluation in high-security scenarios. However, the original experiments were performed using a single training run per configuration due to the high computational cost of fine-tuning foundation models. Therefore, we cannot provide error bars or statistical significance tests without conducting additional independent runs, which we note as a limitation. We will emphasize the consistency of the observed improvements across all cross-database evaluations to support the robustness of the gains. revision: partial

  2. Referee: [§3 (Method)] The key assumption that lightweight fine-tuning of general FM embeddings reliably encodes subtle morph-specific low-level cues (texture blending, landmark shifts) across databases is not supported by any ablation (e.g., frozen vs. tuned backbone performance) or feature-level analysis. Without such evidence, it remains possible that the gains arise from high-level semantic adaptation rather than the required morph artifacts, undermining generalization at strict security thresholds.

    Authors: We appreciate this point and acknowledge that including an ablation study would provide stronger evidence for the role of fine-tuning in capturing morph-specific artifacts. In the revised manuscript, we will add an ablation comparing the performance of the frozen foundation model embeddings versus the lightly fine-tuned ones on the D-MAD task. This will demonstrate the contribution of the class-balanced fine-tuning to encoding the subtle discrepancies. Additionally, we will include a brief feature analysis, such as visualizing the difference maps or activation differences, to illustrate the focus on low-level cues like texture inconsistencies. revision: yes

standing simulated objections not resolved
  • The lack of error bars, number of independent runs, and statistical significance tests for the reported performance metrics, as these were not computed in the original experimental setup.

Circularity Check

0 steps flagged

No circularity: empirical cross-database results rest on independent benchmarks

full rationale

The paper presents an empirical framework that replaces face-recognition embeddings with vision-foundation-model embeddings, applies lightweight fine-tuning under class-balanced optimisation, and reports measured error-rate reductions on standard D-MAD cross-database splits. No derivation, equation, or claim reduces by construction to its own inputs; performance figures are obtained from held-out test sets rather than from any fitted quantity defined in terms of the target metric. No self-citation is invoked as a load-bearing uniqueness theorem or ansatz, and the method does not rename a known result. The central claim therefore remains externally falsifiable against the cited benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review provides no equations, derivations, or explicit modeling choices; therefore no free parameters, axioms, or invented entities can be extracted.

pith-pipeline@v0.9.0 · 5472 in / 1118 out tokens · 32502 ms · 2026-05-10T05:27:29.000866+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

69 extracted references · 8 canonical work pages · 3 internal anchors

  1. [1]

    Accessed: Jan

    Bologna Online Evaluation Platform (BOEP).https:// biolab.csr.unibo.it/fvcongoing/UI/Form/ boep.aspx, 2026. Accessed: Jan. 6, 2026

  2. [2]

    Awais, M

    M. Awais, M. Naseer, S. Khan, R. Anwer, H. Cholakkal, M. Shah, M. Yang, and F. Khan. Foundation models defining a new era in vision: a survey and outlook.IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 2025

  3. [3]

    Blasingame and C

    Z. Blasingame and C. Liu. Fast-DiM: Towards fast diffusion morphs.IEEE Security & Privacy, 22(4):103–114, 2024

  4. [4]

    Blasingame and C

    Z. Blasingame and C. Liu. Greedy-DiM: Greedy algorithms for unreasonably effective face morphs. InProc. Intl. Joint Conference on Biometrics (IJCB), pages 1–11, 2024

  5. [5]

    Blasingame and C

    Z. Blasingame and C. Liu. Leveraging diffusion for strong and high quality face morphing attacks.IEEE Trans. on Biometrics, Behavior, and Identity Science (T-BIOM), 6(1):118–131, 2024

  6. [6]

    Borghi, A

    G. Borghi, A. Franco, N. D. Domenico, M. Ferrara, and D. Maltoni. V-MAD: Video-based morphing attack detec- tion in operational scenarios. InProc. Intl. Joint Conf. on Biometrics (IJCB), pages 1–10, 2024

  7. [7]

    Caldeira, G

    E. Caldeira, G. Guray, T. Chettaoui, M. Ivanovska, P. Peer, F. Boutros, V . Struc, and N. Damer. MADation: Face mor- phing attack detection with foundation models, 2025

  8. [8]

    Damer, V

    N. Damer, V . Boller, Y . Wainakh, F. Boutros, P. Terh ¨orst, A. Braun, and A. Kuijper. Detecting face morphing at- tacks by analyzing the directed distances of facial landmarks shifts. In T. Brox, A. Bruhn, and M. Fritz, editors,Pattern Recognition, pages 518–534, Cham, Switzerland, 2019

  9. [9]

    Damer, S

    N. Damer, S. Zienert, Y . Wainakh, A. Mosegu ´ı-Saladi´e, F. Kirchbuchner, and A. Kuijper. A multi-detector solution towards an accurate and generalized detection of face mor- phing attacks. InProc. Intl. Conf. Information Fusion (FU- SION), pages 1–8, 2019

  10. [10]

    Dargaud, M

    L. Dargaud, M. Ibsen, J. Tapia, and C. Busch. A princi- pal component analysis-based approach for single morphing attack detection. InProc. Winter Conf. on Applications of Computer Vision (WCACV), pages 683–692, 2023

  11. [11]

    Debiasi, U

    L. Debiasi, U. Scherhag, C. Rathgeb, A. Uhl, and C. Busch. PRNU variance analysis for morphed face image detection. InProc. of 9th Intl. Conf. on Biometrics: Theory, Applica- tions and Systems (BTAS 2018), 2018

  12. [12]

    Domenico, G

    N. Domenico, G. Borghi, A. Franco, and D. Maltoni. Com- bining identity features and artifact analysis for differential morphing attack detection. InProc. Intl. Conf. on Image Analysis and Processing (ICIAP), pages 100–111, 2023

  13. [13]

    N. D. Domenico, G. Borghi, A. Franco, and D. Maltoni. Im- proving accomplice detection in the morphing attack.Ma- chine Intelligence Research, pages 1–15, 2025

  14. [14]

    Scalable pre-training of large autoregressive image models.arXiv preprint arXiv:2401.08541, 2024

    A. El-Nouby, M. Klein, S. Zhai, M. Bautista, A. Toshev, V . Shankar, J. Susskind, and A. Joulin. Scalable pre- training of large autoregressive image models.arXiv preprint arXiv:2401.08541, 2024

  15. [15]

    European Union. Regulation (eu) 2019/1157 of the european parliament and of the council of 20 june 2019 on strength- ening the security of identity cards of union citizens and of residence documents issued to union citizens and their fam- ily members exercising their right of free movement. Official Journal of the European Union, L 188, pp. 67-78, 2019. A...

  16. [16]

    Ferrara, A

    M. Ferrara, A. Franco, and D. Maltoni. Face demorphing. IEEE Trans. on Information Forensics and Security (TIFS), 13(4):1008–1017, 2018

  17. [17]

    E. Fini, M. Shukor, X. Li, P. Dufter, M. Klein, D. Haldimann, S. Aitharaju, V . da Costa, L. B´ethune, Z. Gan, et al. Multi- modal autoregressive pre-training of large vision encoders. InProc. Intl. Conf. on Computer Vision and Pattern Recog- nition (CVPR), pages 9641–9654, 2025

  18. [18]

    L. J. Gonzalez-Soler, J. Tapia, and C. Busch. Are foundation models all you need for zero-shot face presentation attack de- tection? InIEEE Intl. Conf. on Automatic Face and Gesture Recognition (FG), pages 1–10, 2025

  19. [19]

    Grimmer and C

    M. Grimmer and C. Busch. LADIMO: face morph genera- tion through biometric template inversion with latent diffu- sion. InProc. Intl. Joint Conf. on Biometrics (IJCB), pages 1–7, 2024

  20. [20]

    E. Hu, Y . Shen, P. Wallis, Z. Allen-Zhu, Y . Li, S. Wang, L. Wang, W. Chen, et al. LoRa: Low-rank adaptation of large language models.Proc. Intl. Conf. on Learning Repre- sentations (ICLR), 1(2):3, 2022

  21. [21]

    Ibsen, L

    M. Ibsen, L. J. Gonzalez-Soler, C. Rathgeb, and C. Busch. TetraLoss: Improving the robustness of face recognition against morphing attacks. InIEEE Intl. Conf. on Automatic Face and Gesture Recognition (FG), pages 1–9, 2024

  22. [22]

    Ibsen, L

    M. Ibsen, L. J. Gonzalez-Soler, C. Rathgeb, P. Drozdowski, M. Gomez-Barrero, and C. Busch. Differential anomaly de- tection for facial images. InIEEE Intl. Workshop on Infor- mation Forensics and Security (WIFS), pages 1–6, 2021

  23. [23]

    In- formation Technology – Methodologies to evaluate the resis- tance of biometric recognition systems to morphing attacks

    ISO/IEC JTC1 SC37 Biometrics.ISO/IEC 20059:2025. In- formation Technology – Methodologies to evaluate the resis- tance of biometric recognition systems to morphing attacks. International Organization for Standardization, 2025

  24. [24]

    Selfmad: Enhancing generalization and robustness in mor- phing attack detection via self-supervised learning,

    M. Ivanovska, L. Todorov, N. Damer, D. Jain, P. Peer, and V .ˇStruc. SelfMAD: Enhancing generalization and robust- ness in morphing attack detection via self-supervised learn- ing.arXiv preprint arXiv:2504.05504, 2025

  25. [25]

    C. Jia, Y . Yang, Y . Xia, Y . Chen, Z. Parekh, H. Pham, Q. Le, Y . Sung, Z. Li, and T. Duerig. Scaling up visual and vision- language representation learning with noisy text supervision. InProc. Intl. Conf. on Machine Learning (ICML), pages 4904–4916, 2021

  26. [26]

    Joshi, M

    I. Joshi, M. Grimmer, C. Rathgeb, C. Busch, F. Bremond, and A. Dantcheva. Synthetic data in human analysis: A sur- vey.IEEE Trans. on Pattern Analysis and Machine Intelli- gence (PAMI), 46(7):4957–4976, 2024

  27. [27]

    Kabbani, K

    W. Kabbani, K. Raja, R. Ramachandra, and C. Busch. Sta- bleMorph: High-quality face morph generation with stable diffusion. InProc. Intl. Joint Conf. on Biometrics (IJCB), pages 1–10, 2025

  28. [28]

    A rank stabilization scaling factor for fine-tuning with lora,

    D. Kalajdzievski. A rank stabilization scaling factor for fine- tuning with lora.arXiv preprint arXiv:2312.03732, 2023

  29. [29]

    Karras, S

    T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila. Analyzing and improving the image quality of Style- GAN. InProc. Intl. Conf. on Computer Vision and Pattern Recognition (CVPR), pages 8110–8119, 2020

  30. [30]

    Kessler, K

    R. Kessler, K. Raja, J. Tapia, and C. Busch. Towards min- imizing efforts for morphing attacks—deep embeddings for morphing pair selection and improved morphing attack de- tection.Plos one, 19(5):e0304610, 2024

  31. [31]

    Kirillov, E

    A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. Berg, W. Lo, et al. Segment anything. InProc. Intl. Conf. on Computer Vision (ICCV), pages 4015–4026, 2023

  32. [32]

    T. Lin, P. Goyal, R. Girshick, K. He, and P. Doll´ar. Focal loss for dense object detection. InProc. Intl. Conf. on Computer Vision (ICCV), pages 2980–2988, 2017

  33. [33]

    C. Liu, M. Ferrara, A. Franco, G. Borghi, and D. Zhong. Differential morphing attack detection via triplet-based met- ric learning and artifact extraction. InProc. Intl. Conf. of the Biometrics Special Interest Group (BIOSIG), pages 1–7, 2024

  34. [34]

    M. Ngan, P. Grother, K. Hanaoka, and J. Kuo. Face analy- sis technology evaluation (FATE) part 4: MORPH - perfor- mance of automated face morph detection. 2025

  35. [35]

    DINOv2: Learning Robust Visual Features without Supervision

    M. Oquab, T. Darcet, T. Moutakanni, H. V o, M. Szafraniec, V . Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El- Nouby, et al. DINOv2: Learning robust visual features with- out supervision.arXiv preprint arXiv:2304.07193, 2023

  36. [36]

    Ortega-Delcampo, C

    D. Ortega-Delcampo, C. Conde, D. Palacios-Alonso, and E. Cabello. Border control morphing attack detection with a convolutional neural network de-morphing approach.IEEE Access, 8:92301–92313, 2020

  37. [37]

    Ozgur, E

    G. Ozgur, E. Caldeira, T. Chettaoui, F. Boutros, R. Raghavendra, and N. Damer. FoundPAD: Founda- tion models reloaded for face presentation attack detection. InProc. Winter Conf. on Applications of Computer Vision (WCACV), pages 745–755, 2025

  38. [38]

    Papavasileiou, A

    E. Papavasileiou, A. Paraskevas, and J. Edmunds. egates in airports: A systematic literature review and future research directions.Journal of the Air Transport Research Society, page 100076, 2025

  39. [39]

    Paszke, S

    A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. PyTorch: An Imperative Style, High- Performance Deep Learning Library. InAdvances in Neural Information Processing Sys...

  40. [40]

    Paulo, H

    D. Paulo, H. Proenc ¸a, and J. Neves. FD-MAD: Frequency- domain residual analysis for face morphing attack detection. arXiv preprint arXiv:2601.20656, 2026

  41. [41]

    F. Peng, L. Zhang, and M. Long. FD-GAN: Face de- morphing generative adversarial network for restoring ac- complice’s facial image.IEEE Access, 7:75122–75131, 2019

  42. [42]

    Phillips, P

    J. Phillips, P. Flynn, T. Scruggs, K. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek. Overview of the face recognition grand challenge. InProc. Intl. Conf. on Computer Vision and Pattern Recognition (CVPR), vol- ume 1, pages 947–954, 2005

  43. [43]

    Phillips, H

    J. Phillips, H. Wechsler, J. Huang, and P. Rauss. The feret database and evaluation procedure for face-recognition algo- rithms.Image and vision computing, 16(5):295–306, 1998

  44. [44]

    Rachalwar, M

    H. Rachalwar, M. Fang, N. Damer, and A. Das. Depth- guided robust face morphing attack detection. InProc. Intl. Joint Conf. on Biometrics (IJCB), pages 1–9, 2023

  45. [45]

    Radford, J

    A. Radford, J. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agar- wal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learn- ing transferable visual models from natural language super- vision. InProc. Intl. Conf. on Machine Learning (ICML), pages 8748–8763, 2021

  46. [46]

    Raghavendra, S

    R. Raghavendra, S. Venkatesh, N. Damer, N. Vetrekar, and R. Gad. Multispectral imaging for differential face morphing attack detection: A preliminary study. InProc. Winter Conf. on Applications of Computer Vision (WCACV), pages 6185– 6193, 2024

  47. [47]

    Raghavendra, S

    R. Raghavendra, S. Venkatesh, K. Raja, and C. Busch. To- wards making morphing attack detection robust using hybrid scale-space colour texture features. InProc. Intl. Conf. on Identity, Security, and Behavior Analysis (ISBA), pages 1–8, 2019

  48. [48]

    K. Raja, M. Ferrara, A. Franco, L. Spreeuwers, I. Batskos, F. D. Wit, M. Gomez-Barrero, U. Scherhag, D. Fischer, S. Venkatesh, et al. Morphing attack detection-database, evaluation platform, and benchmarking.IEEE Trans. on Information Forensics and Security (TIFS), 16:4336–4351, 2020

  49. [49]

    Robledo-Moreno, G

    M. Robledo-Moreno, G. Borghi, N. D. Domenico, A. Franco, K. Raja, and D. Maltoni. Towards federated learn- ing for morphing attack detection. InProc. Intl. Joint Conf. on Biometrics (IJCB), pages 1–10, 2024

  50. [50]

    Sarkar, P

    E. Sarkar, P. Korshunov, L. Colbois, and S. Marcel. Are GAN-based morphs threatening face recognition? InProc. Intl. Conf. on Acoustics, Speech and Signal Processing (ICASSP), pages 2959–2963, 2022

  51. [51]

    Scherhag, D

    U. Scherhag, D. Budhrani, M. Gomez-Barrero, and C. Busch. Detecting morphed face images using facial landmarks. InIntl. Conf. on Image and Signal Processing (ICISP), 2018

  52. [52]

    Scherhag, L

    U. Scherhag, L. Debiasi, C. Rathgeb, C. Busch, and A. Uhl. Detection of face morphing attacks based on PRNU analysis. IEEE Trans. on Biometrics, Behavior, and Identity Science (T-BIOM), 2019

  53. [53]

    Scherhag, C

    U. Scherhag, C. Rathgeb, and C. Busch. Morph deterction from single face image: A multi-algorithm fusion approach. InProc. Intl. Conf. on Biometric Engineering and Applica- tions (ICBEA), pages 6–12, 2018

  54. [54]

    Scherhag, C

    U. Scherhag, C. Rathgeb, J. Merkle, and C. Busch. Deep face representations for differential morphing attack detec- tion.IEEE Trans. on Information Forensics and Security (TIFS), 2020

  55. [55]

    LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs

    C. Schuhmann, R. Vencu, R. Beaumont, R. Kaczmarczyk, C. Mullis, A. Katta, T. Coombes, J. Jitsev, and A. Komat- suzaki. LAION-400M: Open dataset of clip-filtered 400 million image-text pairs.arXiv preprint arXiv:2111.02114, 2021

  56. [56]

    Seibold, W

    C. Seibold, W. Samek, A. Hilsmann, and P. Eisert. Detection of face morphing attacks by deep learning. InProc. Intl. Workshop on Digital Forensics and Watermarking, pages 107–120, 2017

  57. [57]

    Shekhawat, H

    R. Shekhawat, H. Li, R. Raghavendra, and S. Venkatesh. Towards zero-shot differential morphing attack detection with multimodal large language models.arXiv preprint arXiv:2505.15332, 2025

  58. [58]

    Shukla and A

    N. Shukla and A. Ross. Facial demorphing via identity pre- serving image decomposition. InProc. Intl. Joint Conf. on Biometrics (IJCB), pages 1–10, 2024

  59. [59]

    DINOv3

    O. Sim ´eoni, H. V o, M. Seitzer, F. Baldassarre, M. Oquab, C. Jose, V . Khalidov, M. Szafraniec, S. Yi, M. Ramamon- jisoa, et al. Dinov3.arXiv preprint arXiv:2508.10104, 2025

  60. [60]

    Singh, K

    J. Singh, K. Raja, R. Raghavendra, and C. Busch. Robust morph-detection at automated border control gate using deep decomposed 3D shape & diffuse reflectance. InProc. of the 15th Intl. Conf. on Signal Image Technology & Internet Based Systems (SITIS), November 2019

  61. [61]

    Tapia, M

    J. Tapia, M. Russo, and C. Busch. Generating automati- cally print/scan textures for morphing attack detection ap- plications.IEEE Access, 2025

  62. [62]

    Tapia, D

    J. Tapia, D. Schulz, and C. Busch. Single-morphing attack detection using few-shot learning and triplet-loss.Neuro- computing, page 130033, 2025

  63. [63]

    Thomaz and G

    C. Thomaz and G. Giraldi. A new ranking method for prin- cipal components analysis and its application to face image analysis.Image and vision computing, 28(6):902–913, 2010

  64. [64]

    Venkatesh, R

    S. Venkatesh, R. Raghavendra, K. Raja, L. Spreeuwers, R. Veldhuis, and C. Busch. Morphed face detection based on deep color residual noise. InProc. Intl. Conf. on Image Processing Theory, Tools and Applications (IPTA), pages 1– 6, 2019

  65. [65]

    Venkatesh, R

    S. Venkatesh, R. Raghavendra, K. Raja, L. Spreeuwers, R. Veldhuis, and C. Busch. Detecting morphed face attacks using residual noise from deep multi-scale context aggrega- tion network. InProc. Winter Conf. on Applications of Com- puter Vision (WCACV), pages 280–289, 2020

  66. [66]

    Zhang, R

    H. Zhang, R. Raghavendra, K. Raja, and C. Busch. Gener- alized single-image-based morphing attack detection using deep representations from vision transformer. InProc. Intl. Conf. on Computer Vision and Pattern Recognition (CVPR), pages 1510–1518, 2024

  67. [67]

    Zhang, R

    H. Zhang, R. Raghavendra, K. Raja, and C. Busch. Syn- Morph: Generating synthetic face morphing dataset with mated samples.IEEE Access, 2025

  68. [68]

    Zhang, S

    H. Zhang, S. Venkatesh, R. Raghavendra, K. Raja, N. Damer, and C. Busch. MIPGAN—generating strong and high qual- ity morphing attacks using identity prior driven GAN.IEEE Trans. on Biometrics, Behavior, and Identity Science (T- BIOM), 3(3):365–383, 2021

  69. [69]

    Zhang, S

    L. Zhang, S. Chen, M. Long, and J. Cai. Face de-morphing based on identity feature transfer.IET Image Processing, 19(1):e13324, 2025