pith. machine review for the scientific record.

arxiv: 2604.20585 · v1 · submitted 2026-04-22 · 💻 cs.CV

Recognition: unknown

On the Impact of Face Segmentation-Based Background Removal on Recognition and Morphing Attack Detection

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 00:33 UTC · model grok-4.3

classification 💻 cs.CV
keywords face segmentation · background removal · face recognition · morphing attack detection · biometric preprocessing · unconstrained images · image quality

The pith

Background removal via face segmentation systematically influences both face recognition performance and morphing attack detection results.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper sets out to show that preprocessing face images by segmenting away the background changes how well recognition systems identify people and how reliably morphing attack detectors spot fakes. This matters for large-scale biometric setups such as border systems that must handle photos taken outside controlled studios. The work tests a range of segmentation methods, multiple families of attack detectors, and several recognition models on both lab-style and real-world photo collections. Results tie the preprocessing choice to shifts in recognition accuracy, measured image quality, and attack detection scores. The central pattern is that segmentation produces repeatable effects on the security side of the pipeline.

Core claim

Applying segmentation-based background removal to face images produces consistent changes in recognition accuracy and image quality metrics while also systematically altering the output of morphing attack detection methods across three detector families and four recognition models, as measured on databases containing both controlled and in-the-wild captures.

What carries the argument

Face segmentation applied as a preprocessing step to remove non-face background regions before feeding images into recognition or morphing attack detection pipelines.
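The operation itself is simple; what varies across the tested methods is how the mask is produced. As a minimal sketch (not the paper's exact pipeline, whose fill convention and cropping margins are not stated here), background removal given a binary segmentation mask amounts to:

```python
import numpy as np

def remove_background(image, mask, fill_value=0):
    """Replace every pixel the segmenter labeled as background.

    image: H x W x 3 uint8 array.
    mask:  H x W boolean array, True where the segmenter found face/foreground.
    fill_value: constant used for background pixels (0 = black).
    """
    out = image.copy()
    out[~mask] = fill_value  # recolor non-face regions before FR/MAD inference
    return out

# Toy example: a 4x4 "image" whose left half is foreground.
img = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
cleaned = remove_background(img, mask)
```

In practice the mask would come from one of the evaluated segmenters (e.g. a face-parsing network or SAM); `remove_background` and the constant black fill are illustrative assumptions, not the paper's stated method.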

Load-bearing premise

The tested segmentation methods and the mix of controlled plus in-the-wild databases adequately stand in for the full range of real operational capture conditions without adding their own hidden biases or artifacts.

What would settle it

The systematic-influence claim would be disproved by a new test set of morphed and genuine face images, captured under actual border or airport conditions, on which applying the same segmentation steps produces no measurable shift in morphing attack detection error rates.
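Concretely, "no measurable shift" can be read in terms of the ISO/IEC 30107-3 style error rates the MAD literature reports. A hedged sketch of that comparison (toy scores; `apcer_bpcer` is a hypothetical helper, and the higher-score-means-attack convention is an assumption):

```python
import numpy as np

def apcer_bpcer(scores_attack, scores_bonafide, threshold):
    """ISO/IEC 30107-3 style error rates at a fixed decision threshold.

    Convention assumed here: higher score = "more likely an attack".
    APCER: fraction of morphing attacks NOT flagged (score <= threshold).
    BPCER: fraction of bona fide images wrongly flagged (score > threshold).
    """
    apcer = np.mean(np.asarray(scores_attack) <= threshold)
    bpcer = np.mean(np.asarray(scores_bonafide) > threshold)
    return apcer, bpcer

# The "settling" comparison: same detector, same threshold,
# original vs. background-removed versions of the same test set.
orig = apcer_bpcer([0.9, 0.8, 0.4], [0.1, 0.2, 0.6], threshold=0.5)
seg = apcer_bpcer([0.9, 0.3, 0.4], [0.1, 0.7, 0.6], threshold=0.5)
shift = (seg[0] - orig[0], seg[1] - orig[1])
```

A shift vector statistically indistinguishable from zero on such operational captures would falsify the claim; the toy numbers above exist only to show the arithmetic.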

Figures

Figures reproduced from arXiv: 2604.20585 by Eduarda Caldeira, Fadi Boutros, Guray Ozgur, Naser Damer.

Figure 1: Visual example of this work's use case. The image can be acquired …
Figure 2: Pictures from the original FERET, FRGCv2 and IJB-C datasets before and after background removal. The margins were added to highlight the …
Figure 2: A complementary evaluation of the segmentation …
Figure 3: Pictures from FERET, FRGCv2, and IJB-C before and after …
Figure 4: Landscape pictures of FRGCv2 before and after background …
Figure 5: Landscape pictures of FRGCv2 before and after background …
Figure 6: Histograms of the genuine and impostor score distributions of FERET and its segmented versions when evaluated by ElasticFace (first block), …
Figure 7: Histograms of the genuine and impostor score distributions of FRGCv2 and its segmented versions when evaluated by ElasticFace (first block), …
Figure 8: Histograms of the genuine and impostor score distributions of IJB-C and its segmented versions when evaluated by ElasticFace (first block), …
Figure 9: Joint visualization of the metrics evaluated for IJB-C and its segmented variants in the main paper, namely TAR@FAR=1e-4, …
Figure 10: Segmentation masks obtained by each of the considered segmentation …
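The genuine/impostor score histograms and the TAR@FAR=1e-4 figure feed the paper's headline verification metrics. As a small sketch of how such metrics are computed from raw score lists (the threshold-selection details here are assumptions, not the paper's exact protocol):

```python
import numpy as np

def tar_at_far(genuine, impostor, far_target=1e-4):
    """TAR at a fixed FAR: choose the threshold where roughly far_target of
    impostor comparisons are (wrongly) accepted, then measure the fraction of
    genuine comparisons that clear it. Higher score = same identity assumed.
    """
    thr = np.quantile(np.asarray(impostor), 1.0 - far_target)
    return np.mean(np.asarray(genuine) >= thr)

def eer(genuine, impostor, grid=1000):
    """Equal error rate: the operating point where false accepts ~= false rejects."""
    g, i = np.asarray(genuine), np.asarray(impostor)
    ts = np.linspace(min(g.min(), i.min()), max(g.max(), i.max()), grid)
    far = np.array([np.mean(i >= t) for t in ts])
    frr = np.array([np.mean(g < t) for t in ts])
    k = int(np.argmin(np.abs(far - frr)))
    return (far[k] + frr[k]) / 2

# Toy, well-separated score distributions.
genuine = np.array([0.82, 0.91, 0.77, 0.88])
impostor = np.array([0.05, 0.12, 0.18, 0.23])
tar = tar_at_far(genuine, impostor, far_target=0.01)
e = eer(genuine, impostor)
```

Real evaluations interpolate thresholds on millions of impostor pairs; the quantile-based threshold above is a simplification for illustration.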
Original abstract

This study investigates the impact of face image background correction through segmentation on face recognition and morphing attack detection performance in realistic, unconstrained image capture scenarios. The motivation is driven by operational biometric systems such as the European Entry/Exit System (EES), which require facial enrolment at airports and other border crossing points where controlled backgrounds usually required for such captures cannot always be guaranteed, as well as by accessibility needs that may necessitate image capture outside traditional office environments. By analyzing how such preprocessing steps influence both recognition accuracy and security mechanisms, this work addresses a critical gap between usability-driven image normalization and the reliability requirements of large-scale biometric identification systems. Our study evaluates a comprehensive range of segmentation techniques, three families of morphing attack detection methods, and four distinct face recognition models, using databases that include both controlled and in-the-wild image captures. The results reveal consistent patterns linking segmentation to both recognition performance and face image quality. Additionally, segmentation is shown to systematically influence morphing attack detection performance. These findings highlight the need for careful consideration when deploying such preprocessing techniques in operational biometric systems.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper investigates the impact of face segmentation-based background removal on face recognition performance and morphing attack detection (MAD) in unconstrained capture scenarios. It evaluates a range of segmentation techniques, three families of MAD methods, and four face recognition models across controlled and in-the-wild databases, reporting consistent patterns linking segmentation to recognition accuracy and image quality, as well as a systematic influence of segmentation on MAD performance. The motivation ties to operational systems like the European Entry/Exit System requiring reliable preprocessing under variable conditions.

Significance. If the empirical patterns hold after addressing isolation concerns, the work is significant for biometric system design, as it demonstrates that usability-driven preprocessing steps can affect both recognition utility and security mechanisms like MAD. The multi-method, multi-database evaluation provides a broad empirical foundation that could inform standards for image normalization in large-scale deployments.

major comments (1)
  1. [Experimental setup and results] The central claim that segmentation 'systematically influences' MAD performance (abstract and results) requires isolating background removal from segmentation-induced artifacts such as altered face bounding boxes, resolution changes, or boundary effects. The evaluation uses multiple segmenters and controlled/in-the-wild sets but lacks an ablation holding the face region fixed (e.g., via alpha matting, inpainting controls, or masked-background variants) while varying only the background; without this, observed deltas could be driven by feature-extractor sensitivities to artifacts rather than background per se, undermining the causal link for all three MAD families.
minor comments (2)
  1. [Abstract] The abstract states 'consistent patterns' and 'systematic influence' without any quantitative metrics, error bars, or specific deltas; the full results section should include these explicitly (e.g., EER or accuracy changes per segmenter/MAD pair) to allow readers to assess effect sizes.
  2. [Methods] Clarify the exact segmentation techniques and databases used (section on methods) with version numbers or citations, and specify exclusion criteria for images to ensure reproducibility.
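The control the major comment asks for (holding the face pixels byte-identical while varying only the background) could be sketched as a straightforward composite. This illustrates the proposed ablation, not anything the paper reports; `composite` is a hypothetical helper:

```python
import numpy as np

def composite(image, mask, background):
    """Ablation control: keep the face pixels byte-identical and swap only the
    background, so any MAD score shift must come from the background region.

    image, background: H x W x 3 uint8 arrays; mask: H x W bool (True = face).
    """
    return np.where(mask[..., None], image, background).astype(np.uint8)

face = np.full((2, 2, 3), 120, dtype=np.uint8)
mask = np.array([[True, False], [False, True]])
black = np.zeros((2, 2, 3), dtype=np.uint8)
gray = np.full((2, 2, 3), 37, dtype=np.uint8)

v1 = composite(face, mask, black)  # same face over black background
v2 = composite(face, mask, gray)   # same face over gray background
```

Because the masked face region is identical across variants, any detector-score delta between `v1` and `v2` isolates background sensitivity from segmentation artifacts.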

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their thorough review and constructive feedback on our manuscript. We address the major comment regarding experimental isolation below, providing clarification on the scope of our claims while acknowledging limitations.

Point-by-point responses
  1. Referee: The central claim that segmentation 'systematically influences' MAD performance (abstract and results) requires isolating background removal from segmentation-induced artifacts such as altered face bounding boxes, resolution changes, or boundary effects. The evaluation uses multiple segmenters and controlled/in-the-wild sets but lacks an ablation holding the face region fixed (e.g., via alpha matting, inpainting controls, or masked-background variants) while varying only the background; without this, observed deltas could be driven by feature-extractor sensitivities to artifacts rather than background per se, undermining the causal link for all three MAD families.

    Authors: We appreciate the referee's point on causal isolation. Our study specifically evaluates the end-to-end impact of applying face segmentation-based background removal as a practical preprocessing step in unconstrained scenarios (as required by systems like the EES), which inherently includes any associated effects on bounding boxes, resolution, and boundaries depending on the segmenter. By testing a diverse range of segmentation techniques across controlled and in-the-wild databases, we observe consistent patterns in MAD performance shifts for all three method families, indicating that the influence arises from the segmentation pipeline as deployed rather than isolated artifacts. While we agree that a dedicated ablation (e.g., inpainting or masked variants to hold the face region fixed) would further disentangle pure background effects, such controls would not reflect real operational use and were outside the paper's scope of assessing usable preprocessing. We will revise the manuscript to clarify the scope of our claims, add discussion of potential confounding artifacts, and include this as a limitation with suggestions for future work. revision: partial

Circularity Check

0 steps flagged

Empirical evaluation with no derivations or self-referential predictions

full rationale

This paper is an experimental study that evaluates the impact of various face segmentation techniques on recognition accuracy and morphing attack detection across controlled and in-the-wild databases using multiple models and methods. No mathematical derivations, first-principles results, fitted parameters renamed as predictions, or self-citation chains are described in the abstract or methodology. All reported outcomes are direct experimental observations from the applied pipelines rather than quantities constructed by definition from the same inputs. The central claims rest on comparative performance metrics, not on any reduction to prior self-authored results or ansatzes.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on standard domain assumptions in computer vision and biometrics with no free parameters, new entities, or ad-hoc inventions visible in the abstract.

axioms (1)
  • domain assumption Face segmentation techniques can isolate facial regions from backgrounds in both controlled and unconstrained images without introducing performance-altering artifacts.
    Invoked as the basis for the preprocessing step whose impact is measured.

pith-pipeline@v0.9.0 · 5498 in / 1291 out tokens · 30316 ms · 2026-05-10T00:33:37.990152+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

59 extracted references · 3 canonical work pages

  1. [1] Adobe Inc. Remove background from image. https://www.adobe.com/express/feature/image/remove-background, n.d. Adobe Express feature page, accessed 2026-03-17.
  2. [2] X. An, J. Deng, J. Guo, Z. Feng, X. Zhu, J. Yang, and T. Liu. Killing two birds with one stone: Efficient and robust training of face recognition CNNs by Partial FC. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4042–4051, 2022.
  3. [3] F. Boutros, N. Damer, M. Fang, F. Kirchbuchner, and A. Kuijper. MixFaceNets: Extremely efficient face recognition networks. In 2021 IEEE International Joint Conference on Biometrics (IJCB), pages 1–8. IEEE, 2021.
  4. [4] F. Boutros, N. Damer, F. Kirchbuchner, and A. Kuijper. ElasticFace: Elastic margin loss for deep face recognition. In CVPR Workshops, pages 1577–1586. IEEE, 2022.
  5. [5] F. Boutros, M. Fang, M. Klemt, B. Fu, and N. Damer. CR-FIQA: Face image quality assessment by learning sample relative classifiability. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5836–5845, 2023.
  6. [6] E. Caldeira, F. Boutros, and N. Damer. MADPrompts: Unlocking zero-shot morphing attack detection with multiple prompt aggregation. In Proceedings of the 1st International Workshop & Challenge on Subtle Visual Computing, pages 12–20, 2025.
  7. [7] Canva Austria GmbH. remove.bg: Remove image background. https://www.remove.bg/, n.d. Online background removal service, accessed 2026-03-17.
  8. [8] N. Damer, M. Fang, P. Siebke, J. N. Kolf, M. Huber, and F. Boutros. MorDIFF: Recognition vulnerability and attack detectability of face morphing attacks created by diffusion autoencoders. arXiv preprint arXiv:2302.01843, 2023.
  9. [9] N. Damer, C. A. F. López, M. Fang, N. Spiller, M. V. Pham, and F. Boutros. Privacy-friendly synthetic data for the development of face morphing attack detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1606–1617, 2022.
  10. [10] N. Damer, Y. Wainakh, O. Henniger, C. Croll, B. Berthe, A. Braun, and A. Kuijper. Deep learning-based face recognition and the robustness to perspective distortion. In 24th International Conference on Pattern Recognition, ICPR 2018, Beijing, China, August 20-24, 2018, pages 3445–3450. IEEE Computer Society, 2018.
  11. [11] J. Dan, Y. Liu, H. Xie, J. Deng, H. Xie, X. Xie, and B. Sun. TransFace: Calibrating transformer training for face recognition from a data-centric perspective. In ICCV, pages 20585–20596. IEEE, 2023.
  12. [12] L. DeBruine. debruine/webmorph: Beta release 2. Zenodo, https://doi.org/10.5281, 2018.
  13. [13] J. Deng, J. Guo, E. Ververas, I. Kotsia, and S. Zafeiriou. RetinaFace: Single-shot multi-level face localisation in the wild. In CVPR, pages 5202–5211. Computer Vision Foundation / IEEE, 2020.
  14. [14] J. Deng, J. Guo, N. Xue, and S. Zafeiriou. ArcFace: Additive angular margin loss for deep face recognition. In CVPR, pages 4690–4699. Computer Vision Foundation / IEEE, 2019.
  15. [15] J. Deng, J. Guo, D. Zhang, Y. Deng, X. Lu, and S. Shi. Lightweight face recognition challenge. In Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, pages 0–0, 2019.
  16. [16] M. Fang, F. Boutros, and N. Damer. Unsupervised face morphing attack detection via self-paced anomaly detection. In 2022 IEEE International Joint Conference on Biometrics (IJCB), pages 1–11. IEEE, 2022.
  17. [17] M. Ferrara, A. Franco, and D. Maltoni. The magic passport. In IEEE International Joint Conference on Biometrics, IJCB 2014, Clearwater, FL, USA, September 29 - October 2, 2014, pages 1–7. IEEE, 2014.
  18. [18] M. Ferrara, A. Franco, and D. Maltoni. Face morphing detection in the presence of printing/scanning and heterogeneous image sources. IET Biom., 10(3):290–303, 2021.
  19. [19] J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, and H. Lu. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3146–3154, 2019.
  20. [20] P. Grother, M. N. A. Hom, and K. Hanaoka. Ongoing face recognition vendor test (FRVT) part 5: Face image quality assessment (4th draft). National Institute of Standards and Technology, Tech. Rep., Sep. 2021.
  21. [21] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
  22. [22] M. Huber, F. Boutros, A. T. Luu, K. Raja, R. Ramachandra, N. Damer, P. C. Neto, T. Gonçalves, A. F. Sequeira, J. S. Cardoso, et al. SYN-MAD 2022: Competition on face morphing attack detection based on privacy-aware synthetic training data. In 2022 IEEE International Joint Conference on Biometrics (IJCB), pages 1–10. IEEE, 2022.
  23. [23] International Organization for Standardization. ISO/IEC DIS 30107-3:2016: Information Technology – Biometric presentation attack detection – Part 3: Testing and reporting, 2017.
  24. [24] ISO/IEC. ISO/IEC 19794-5:2011 Information technology — Biometric data interchange formats — Part 5: Face image data, 2011. Edition 2.
  25. [25] ISO/IEC. ISO/IEC 20059:2025 Information technology — Methodologies to evaluate the resistance of biometric systems to morphing attacks, 2025. Edition 1.
  26. [26] T. Karras, T. Aila, S. Laine, and J. Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In ICLR. OpenReview.net, 2018.
  27. [27] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015–4026, 2023.
  28. [28] K. Kvanchiani, E. Petrova, K. Efremyan, A. Sautin, and A. Kapitanov. EasyPortrait – face parsing and portrait segmentation dataset. arXiv preprint arXiv:2304.13509, 2023.
  29. [29] Y. Lee and S. Lai. ByeGlassesGAN: Identity preserving eyeglasses removal for face images. In A. Vedaldi, H. Bischof, T. Brox, and J. Frahm, editors, Computer Vision - ECCV 2020 - 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXIX, volume 12374 of Lecture Notes in Computer Science, pages 243–258. Springer, 2020.
  30. [30] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2117–2125, 2017.
  31. [31] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In ICCV, pages 3730–3738. IEEE Computer Society, 2015.
  32. [32] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
  33. [33] A. Makrushin, T. Neubert, and J. Dittmann. Automatic generation and detection of visually faultless facial morphs. In VISIGRAPP (6: VISAPP), pages 39–50. SciTePress, 2017.
  34. [34] S. Mallick. Face morph using OpenCV — C++ / Python. LearnOpenCV, 1(1), 2016.
  35. [35] B. Maze, J. C. Adams, J. A. Duncan, N. D. Kalka, T. Miller, C. Otto, A. K. Jain, W. T. Niggel, J. Anderson, J. Cheney, and P. Grother. IARPA Janus Benchmark - C: Face dataset and protocol. In ICB, pages 158–165. IEEE, 2018.
  36. [36] B. Meden, P. Rot, P. Terhörst, N. Damer, A. Kuijper, W. J. Scheirer, A. Ross, P. Peer, and V. Štruc. Privacy-enhancing face biometrics: A comprehensive survey. IEEE Transactions on Information Forensics and Security, 16:4147–4183, 2021.
  37. [37] R. E. Neddo, Z. W. Blasingame, and C. Liu. The impact of print-scanning in heterogeneous morph evaluation scenarios. In IEEE International Joint Conference on Biometrics, IJCB 2024, Buffalo, NY, USA, September 15-18, 2024, pages 1–10. IEEE, 2024.
  38. [38] A. Orav and A. D’Alfonso. Smart borders: EU entry/exit system. EPRS, European Parliamentary Research Service, Members’ Research Service, 2016.
  39. [39] P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek. Overview of the face recognition grand challenge. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pages 947–954. IEEE, 2005.
  40. [40] P. J. Phillips, H. Wechsler, J. Huang, and P. J. Rauss. The FERET database and evaluation procedure for face-recognition algorithms. Image and Vision Computing, 16(5):295–306, 1998.
  41. [41] Photo Editor & Cutout Background Eraser (Developer). Photo editor cutout background eraser. https://play.google.com/store/apps/details?id=photoeditor.cutout.backgrounderaser, n.d. Google Play Store app, accessed 2026-03-17.
  42. [42] R. P. K. Poudel, S. Liwicki, and R. Cipolla. Fast-SCNN: Fast semantic segmentation network. In BMVC, page 289. BMVA Press, 2019.
  43. [43] L. Qin, M. Wang, C. Deng, K. Wang, X. Chen, J. Hu, and W. Deng. SwinFace: A multi-task transformer for face recognition, expression recognition, age estimation and attribute estimation. IEEE Trans. Circuits Syst. Video Technol., 34(4):2223–2234, 2024.
  44. [44] A. Quek. FaceMorpher, 2019.
  45. [45] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
  46. [46] R. Raghavendra, K. B. Raja, and C. Busch. Detecting morphed face images. In BTAS, pages 1–7. IEEE, 2016.
  47. [47] R. Raghavendra, K. B. Raja, S. Venkatesh, and C. Busch. Transferable deep-CNN features for detecting digital and print-scanned morphed face images. In CVPR Workshops, pages 1822–1830. IEEE Computer Society, 2017.
  48. [48] C. Rathgeb, K. Bernardo, N. E. Haryanto, and C. Busch. Effects of image compression on face image manipulation detection: A case study on facial retouching. IET Biom., 10(3):342–355, 2021.
  49. [49] C. Rathgeb, A. Dantcheva, and C. Busch. Impact and detection of facial beautification in face recognition: An overview. IEEE Access, 7:152667–152678, 2019.
  50. [50] U. Scherhag, C. Rathgeb, and C. Busch. Performance variation of morphed face image detection algorithms across different datasets. In IWBF, pages 1–6. IEEE, 2018.
  51. [51] T. Schlett, S. Schachner, C. Rathgeb, J. E. Tapia, and C. Busch. Effect of lossy compression algorithms on face image quality and recognition. In IEEE International Conference on Acoustics, Speech and Signal Processing ICASSP 2023, Rhodes Island, Greece, June 4-10, 2023, pages 1–5. IEEE, 2023.
  52. [52] Z. Sun, S. Song, I. Patras, and G. Tzimiropoulos. CemiFace: Center-based semi-hard synthetic face generation for face recognition. In NeurIPS, 2024.
  53. [53] E. Xie, W. Wang, Z. Yu, A. Anandkumar, J. M. Alvarez, and P. Luo. SegFormer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems, 34:12077–12090, 2021.
  54. [54] D. Yi, Z. Lei, S. Liao, and S. Z. Li. Learning face representation from scratch. CoRR, abs/1411.7923, 2014.
  55. [55] C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang. BiSeNet: Bilateral segmentation network for real-time semantic segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 325–341, 2018.
  56. [56] J. Zeng, X. Qiu, and S. Shi. Image processing effects on the deep face recognition system. Mathematical Biosciences and Engineering, 18(2):1187–1200, 2021.
  57. [57] H. Zhang, S. Venkatesh, R. Ramachandra, K. B. Raja, N. Damer, and C. Busch. MIPGAN - generating strong and high quality morphing attacks using identity prior driven GAN. IEEE Trans. Biom. Behav. Identity Sci., 3(3):365–383, 2021.