Co-designing for Compliance: Multi-party Computation Protocols for Post-Market Fairness Monitoring in Algorithmic Hiring
Abstract
Post-market fairness monitoring is now mandated to ensure fairness and accountability for high-risk employment AI systems under emerging regulations such as the EU AI Act. However, effective fairness monitoring often requires access to sensitive personal data, which is subject to strict legal protections under data protection law. Multi-party computation (MPC) offers a promising technical foundation for compliant post-market fairness monitoring, enabling the secure computation of fairness metrics without revealing sensitive attributes. Despite growing technical interest, the operationalization of MPC-based fairness monitoring in real-world hiring contexts under concrete legal, industrial, and usability constraints remains unknown. This work addresses this gap through a co-design approach integrating technical, legal, and industrial expertise. We identify practical design requirements for MPC-based fairness monitoring, develop an end-to-end, legally compliant protocol spanning the full data lifecycle, and empirically validate it in a large-scale industrial setting. Our findings provide actionable design insights as well as legal and industrial implications for deploying MPC-based post-market fairness monitoring in algorithmic hiring systems.
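The abstract's core idea — computing aggregate fairness metrics over secret-shared data so that no monitoring party observes an individual's sensitive attributes — can be illustrated with a minimal additive-secret-sharing sketch. This is a toy illustration, not the paper's protocol: the applicant records, group labels, and three-party setup are hypothetical, and a production deployment would use a full MPC framework with secure multiplication and stronger threat models.

```python
import random

P = 2**61 - 1  # prime modulus for the additive shares

def share(value, n_parties=3):
    """Split an integer into n_parties additive shares summing to value mod P.
    (A real deployment would use a cryptographic RNG, e.g. the secrets module.)"""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Hypothetical applicant records: (belongs_to_group_A, was_hired) bits.
# The data holder derives indicator bits locally and secret-shares them,
# so no computing party ever observes a cleartext sensitive attribute.
applicants = [(1, 1), (1, 0), (1, 1), (0, 1), (0, 0), (0, 0), (0, 1), (1, 1)]

n = 3  # number of computing parties
agg = {k: [0] * n for k in ("hired_A", "total_A", "hired_B", "total_B")}
for in_a, hired in applicants:
    bits = {"hired_A": in_a * hired, "total_A": in_a,
            "hired_B": (1 - in_a) * hired, "total_B": 1 - in_a}
    for key, bit in bits.items():
        for i, s in enumerate(share(bit, n)):
            agg[key][i] = (agg[key][i] + s) % P  # each party adds its share locally

# Only the four aggregate counters are ever reconstructed.
hired_a, total_a = reconstruct(agg["hired_A"]), reconstruct(agg["total_A"])
hired_b, total_b = reconstruct(agg["hired_B"]), reconstruct(agg["total_B"])
dp_gap = hired_a / total_a - hired_b / total_b  # demographic parity difference
print(dp_gap)  # 0.25 for this toy data
```

Note that the indicator products (e.g. `in_a * hired`) are computed by the input party before sharing; multiplying two values that are themselves secret-shared would require a secure multiplication protocol (e.g. Beaver triples), which dedicated MPC frameworks provide.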
Forward citations
Cited by 2 Pith papers
-
A Benchmark for Strategic Auditee Gaming Under Continuous Compliance Monitoring
Continuous auditing creates an unavoidable trade-off in which static auditors cannot simultaneously eliminate coverage and granularity failures; this is shown via new policies, strategies, and a reproducible simulator.
-
Differentially Private Auditing Under Strategic Response
Strategic Private Audit Design (SPAD) uses a bilevel game to allocate differential privacy budgets across harm dimensions so that the welfare-weighted under-detection gap is minimized even when the audited developer r...
Reference graph
Works this paper leans on
-
[1] European Union. The EU Artificial Intelligence Act. 2024.
-
[2] Miranda Bogen. Navigating demographic measurement for fairness and equity. Technical report, https://cdt.org/insights/report-navigating-demographic-measurement-for-fairness-and-equity/, 2024.
-
[3] The Boston Women's Workforce Council (BWWC). Data privacy. https://thebwwc.org/mpc. Accessed: 2025-11-08.
-
[4] Ashley Casovan and Richard Sentinella. Mapping and understanding the AI governance ecosystem. IAPP AI Governance Center, 2025. Available at: https://iapp.org/resources/article/mapping-ai-governance-ecosystem.
-
[5] Hongyan Chang and Reza Shokri. On the privacy risks of algorithmic fairness. In 2021 IEEE European Symposium on Security and Privacy (EuroS&P), pages 292–303. IEEE, 2021.
-
[6] European Commission. Digital omnibus regulation proposal. https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-regulation-proposal. Accessed: 2026-01-13.
-
[7] Sam Corbett-Davies, Johann D. Gaebler, Hamed Nilforoshan, Ravi Shroff, and Sharad Goel. The measure and mismeasure of fairness. Journal of Machine Learning Research, 24(312):1–117, 2023.
-
[8] Alessia D'Amico. Market power and the GDPR: Can consent given to dominant companies ever be freely given? European Papers: A Journal on Law and Integration, 8(2):611–629, 2023.
-
[9] Cynthia Dwork. Differential privacy. In International Colloquium on Automata, Languages, and Programming, pages 1–12. Springer, 2006.
-
[10] European Data Protection Board (EDPB). Guidelines 05/2020 on consent under Regulation 2016/679. https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-052020-consent-under-regulation-2016679_en. Accessed: 2025-12-30.
-
[11] Lilian Edwards. The EU AI Act: a summary of its significance and scope. Artificial Intelligence (the EU AI Act), 1:25, 2021.
-
[12] Gamal Elkoumy, Stephan A. Fahrenkrog-Petersen, Marlon Dumas, Peeter Laud, Alisa Pankova, and Matthias Weidlich. Secure multi-party computation for inter-organizational process mining. In International Conference on Business Process Modeling, Development and Support, pages 166–181. Springer, 2020.
-
[13] Zachary Espiritu, Marilyn George, Seny Kamara, and Lucy Qin. Synq: Public policy analytics over encrypted data. In 2024 IEEE Symposium on Security and Privacy (SP), pages 146–165. IEEE, 2024.
-
[14] Nicola Fabiano. Subject roles in the EU AI Act: Mapping and regulatory implications. arXiv preprint arXiv:2510.13591, 2025.
-
[15] Alessandro Fabris, Nina Baranowska, Matthew J. Dennis, David Graus, Philipp Hacker, Jorge Saldivar, Frederik Zuiderveen Borgesius, and Asia J. Biega. Fairness and bias in algorithmic hiring: A multidisciplinary survey. ACM Transactions on Intelligent Systems and Technology, 16(1):1–54, 2025.
-
[16] Daniel Franzen, Claudia Müller-Birn, and Odette Wegwarth. Communicating the privacy-utility trade-off: Supporting informed data donation with privacy decision interfaces for differential privacy. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW1):1–56, 2024.
-
[17] Enrico Glerean. Training curriculum on AI and data protection: Fundamentals of secure AI systems with personal data. EDPB Training Material.
-
[18] Oded Goldreich. Secure multi-party computation. Manuscript, preliminary version, 78(110):1–108, 1998.
-
[19] Lukas Helminger and Christian Rechberger. Multi-party computation in the GDPR. In Privacy Symposium: Data Protection Law International Convergence and Compliance with Innovative Technologies, pages 21–39. Springer, 2022.
-
[20] Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudik, and Hanna Wallach. Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pages 1–16, 2019.
-
[21] Chris Jay Hoofnagle, Bart Van Der Sloot, and Frederik Zuiderveen Borgesius. The European Union General Data Protection Regulation: what it is and what it means. Information & Communications Technology Law, 28(1):65–98, 2019.
-
[22] Rashidul Islam, Kamrun Naher Keya, Shimei Pan, Anand D. Sarwate, and James R. Foulds. Differential fairness: an intersectional framework for fair AI. Entropy, 25(4):660, 2023.
-
[23] Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, and Jonathan Ullman. Differentially private fair learning. In International Conference on Machine Learning, pages 3000–3008. PMLR, 2019.
-
[24] Bailey Kacsmar, Vasisht Duddu, Kyle Tilbury, Blase Ur, and Florian Kerschbaum. Comprehension from chaos: Towards informed consent for private computation. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, pages 210–224, 2023.
-
[25] Niki Kilbertus, Adrià Gascón, Matt Kusner, Michael Veale, Krishna Gummadi, and Adrian Weller. Blind justice: Fairness with encrypted sensitive attributes. In International Conference on Machine Learning, pages 2630–2639. PMLR, 2018.
-
[26] Andrei Lapets, Frederick Jansen, Kinan Dak Albab, Rawane Issa, Lucy Qin, Mayank Varia, and Azer Bestavros. Accessible privacy-preserving web-based data analysis for assessing and addressing economic inequalities. In Proceedings of the 1st ACM SIGCAS Conference on Computing and Sustainable Societies, pages 1–5, 2018.
-
[27] Mitra Lashkari and Jinghui Cheng. "Finding the magic sauce": Exploring perspectives of recruiters and job seekers on recruitment bias and automated tools. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pages 1–16, 2023.
-
[28] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6):1–35, 2021.
-
[29] Jakob Mökander, Maria Axente, Federico Casolari, and Luciano Floridi. Conformity assessments and post-market monitoring: a guide to the role of auditing in the proposed European AI regulation. Minds and Machines, 32(2):241–268, 2022.
-
[30] Vaikkunth Mugunthan, Antigoni Polychroniadou, David Byrd, and Tucker Hybinette Balch. SMPAI: Secure multi-party computation for federated learning. In Proceedings of the NeurIPS 2019 Workshop on Robust AI in Financial Services, volume 21. MIT Press, Cambridge, MA, USA, 2019.
-
[31] Sikha Pentyala, David Melanson, Martine De Cock, and Golnoosh Farnadi. PrivFair: a library for privacy-preserving fairness auditing. arXiv preprint arXiv:2202.04058, 2022.
-
[32] Sikha Pentyala, Nicola Neophytou, Anderson Nascimento, Martine De Cock, and Golnoosh Farnadi. PrivFairFL: Privacy-preserving group fairness in federated learning. arXiv preprint arXiv:2205.11584, 2022.
-
[33] Lucy Qin, Andrei Lapets, Frederick Jansen, Peter Flockhart, Kinan Dak Albab, Ira Globus-Harris, Shannon Roberts, and Mayank Varia. From usability to secure computing and back again. In Fifteenth Symposium on Usable Privacy and Security (SOUPS 2019), pages 191–210, 2019.
-
[34] Manish Raghavan, Solon Barocas, Jon Kleinberg, and Karen Levy. Mitigating bias in algorithmic hiring: Evaluating claims and practices. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 469–481, 2020.
-
[35] Kevser Sahinbas and Ferhat Ozgur Catak. Secure multi-party computation-based privacy-preserving data analysis in healthcare IoT systems. In Interpretable Cognitive Internet of Things for Healthcare, pages 57–72. Springer, 2023.
-
[36] Javier Sánchez-Monedero, Lina Dencik, and Lilian Edwards. What does it mean to 'solve' the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 458–468, 2020.
-
[37] Ari Schlesinger, W. Keith Edwards, and Rebecca E. Grinter. Intersectional HCI: Engaging identity through gender, race, and class. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 5412–5427, 2017.
-
[38] Theresa Stadler and Carmela Troncoso. Why the search for a privacy-preserving data sharing mechanism is failing. Nature Computational Science, 2(4):208–210, 2022.
-
[39] European Union. Art. 9 GDPR: Processing of special categories of personal data. https://gdpr-info.eu/art-9-gdpr/. Accessed: 2025-12-30.
-
[40] European Union. Article 10: Data and data governance. https://artificialintelligenceact.eu/article/10/. Accessed: 2025-08-25.
-
[41] European Union. Article 72: Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems. https://artificialintelligenceact.eu/article/72/. Accessed: 2025-12-30.
-
[42] European Union. Article 9: Risk management system. https://artificialintelligenceact.eu/article/9/. Accessed: 2025-12-30.
-
[43] European Union. Charter of Fundamental Rights of the European Union. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:12012P/TXT. Accessed: 2025-12-30.
-
[44] Michael Veale and Reuben Binns. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society, 4(2):2053951717743530, 2017.
-
[45] Meilof Veeningen, Supriyo Chatterjea, Anna Zsófia Horváth, Gerald Spindler, Eric Boersma, Peter van der Spek, Onno Van Der Galiën, Job Gutteling, Wessel Kraaij, and Thijs Veugen. Enabling analytics on sensitive medical data with secure multi-party computation. In 40th Medical Informatics in Europe Conference, MIE 2018, pages 76–80. IOS, 2018.
-
[46] Paul Voigt and Axel Von dem Bussche. The EU General Data Protection Regulation (GDPR): A Practical Guide. 1st ed. Cham: Springer International Publishing, 2017.
-
[47] Sandra Wachter, Brent Mittelstadt, and Chris Russell. Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI. Computer Law & Security Review, 41:105567, 2021.
-
[48] Angelina Wang, Vikram V. Ramaswamy, and Olga Russakovsky. Towards intersectionality in machine learning: Including more identities, handling underrepresentation, and performing evaluation. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, pages 336–349, 2022.
-
[49] Hilde Weerts, Raphaële Xenidis, Fabien Tarissan, Henrik Palmer Olsen, and Mykola Pechenizkiy. Algorithmic unfairness through the lens of EU non-discrimination law: Or why the law is not a decision tree. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023.
-
[50] Langdon Winner. Do artifacts have politics? Daedalus, 109(1):121–136, 1980.
-
[51] Juliette Zaccour, Reuben Binns, and Luc Rocher. Access denied: Meaningful data access for quantitative algorithm audits. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, pages 1–31, 2025.
-
[52] Yiliang Zhang and Qi Long. Assessing fairness in the presence of missing data. Advances in Neural Information Processing Systems, 34:16007–16019, 2021.
-
[53] Chuan Zhao, Shengnan Zhao, Minghao Zhao, Zhenxiang Chen, Chong-Zhi Gao, Hongwei Li, and Yu-an Tan. Secure multi-party computation: theory, practice and applications. Information Sciences, 476:357–372, 2019.
-
[54] Rui Zhao, Naman Goel, Nitin Agrawal, Jun Zhao, Jake Stein, Wael S. Albayaydh, Ruben Verborgh, Reuben Binns, Tim Berners-Lee, and Nigel Shadbolt. Libertas: Privacy-preserving collaborative computation for decentralised personal data stores. Proceedings of the ACM on Human-Computer Interaction, 9(7):1–28, 2025.
-
[55] Organizational and governance learnings. AI governance and compliance foundation: the demonstrator translated abstract regulatory requirements into concrete monitoring practices, strengthening internal governance and readiness for AI Act compliance. Multidisciplinary approach to responsibility: early and continuous involvement of technical, legal, produc...
-
[56] Technical and implementation learnings. Effort and development cost: most implementation effort was concentrated on data engineering, ETL pipelines, metric precomputation, and dashboard usability rather than metric formulation. Feasibility of discrimination monitoring dashboards: privacy-preserving techniques such as secret sharing and secure multi-part...