Pith · machine review for the scientific record

arxiv: 2604.06148 · v1 · submitted 2026-04-07 · 💻 cs.CR · cs.AI · cs.MA

Recognition: no theorem link

Who Governs the Machine? A Machine Identity Governance Taxonomy (MIGT) for AI Systems Operating Across Enterprise and Geopolitical Boundaries

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 18:53 UTC · model grok-4.3

classification 💻 cs.CR · cs.AI · cs.MA
keywords machine identity · AI governance · risk taxonomy · cybersecurity framework · regulatory alignment · state actor threats · enterprise security · automated agents
0 comments

The pith

AI governance overlooks machine identities that now outnumber humans eighty to one and drive major outages and espionage.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper establishes that current AI governance frameworks ignore the machine identities (API tokens, service accounts, and automated workflows) that AI agents rely on to act. These identities outnumber human ones by ratios exceeding eighty to one in enterprises, yet they lack any integrated oversight. The authors show this gap has already produced billions in losses, as in the 2024 CrowdStrike outage, and has been exploited by nation-state actors for espionage. They introduce the Machine Identity Governance Taxonomy as a single six-domain structure meant to close the technical, regulatory, and cross-border gaps at once. A supporting risk taxonomy enumerates thirty-seven sub-categories drawn from documented incidents, and a regulatory alignment map identifies and manages conflicts among EU, US, and Chinese rules.

Core claim

The central claim is that a dedicated Machine Identity Governance Taxonomy (MIGT) can integrate technical controls, regulatory compliance, and cross-jurisdictional coordination for machine identities in AI systems, backed by an AI-Identity Risk Taxonomy of thirty-seven enumerated sub-categories, a state-actor threat model, and a four-phase implementation roadmap.

What carries the argument

The Machine Identity Governance Taxonomy (MIGT), a six-domain framework that simultaneously covers technical governance, regulatory compliance, and cross-jurisdictional coordination for machine identities.
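The nesting the paper describes (AIRT's thirty-seven sub-categories grouped into eight risk domains, alongside MIGT's six governance domains) can be sketched as a minimal data model. This is an editorial illustration only: every domain and sub-category name below is invented, since the summary does not enumerate the paper's actual labels.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class RiskSubCategory:
    """One of AIRT's enumerated risk sub-categories (37 in the paper)."""
    name: str
    sources: tuple[str, ...] = ()  # incident reports, regulations, threat intel

@dataclass
class RiskDomain:
    """One of AIRT's eight domains grouping related sub-categories."""
    name: str
    sub_categories: list[RiskSubCategory] = field(default_factory=list)

@dataclass
class AIRT:
    domains: list[RiskDomain]

    def sub_category_count(self) -> int:
        # In the full taxonomy this would total 37 across eight domains.
        return sum(len(d.sub_categories) for d in self.domains)

# Illustrative instance (names hypothetical; the paper's own labels differ).
airt = AIRT(domains=[
    RiskDomain("credential-lifecycle", [
        RiskSubCategory("orphaned-service-accounts"),
        RiskSubCategory("long-lived-api-tokens"),
    ]),
    RiskDomain("agent-autonomy", [
        RiskSubCategory("unbounded-workflow-execution"),
    ]),
])
assert airt.sub_category_count() == 3
```

The traceable `sources` tuple mirrors the paper's claim that each sub-category is grounded in documented incidents, regulatory recognition, prevalence data, and threat intelligence.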

If this is right

  • Enterprises gain a single structure to govern the dominant form of identity in AI deployments instead of patching separate technical and compliance efforts.
  • Organizations can map and manage conflicting obligations under EU, US, and Chinese rules using the provided alignment structure.
  • Security teams obtain an enumerated list of thirty-seven risk areas to audit against documented incidents and threat data.
  • A four-phase roadmap translates the taxonomy into concrete enterprise programs with defined steps.
  • The state-actor threat model highlights specific vectors such as AI-enhanced credential abuse by groups like Silk Typhoon that existing frameworks have not addressed.
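The audit use case in the third bullet reduces to a coverage check: every machine identity in the inventory should map to at least one reviewed risk area. A minimal sketch, with all identity and risk-area names hypothetical:

```python
# Hypothetical audit sketch: flag machine identities whose review record
# touches none of the enumerated risk areas.
inventory = {
    "svc-deploy-bot":    {"reviewed_against": {"orphaned-service-accounts"}},
    "api-token-billing": {"reviewed_against": set()},
}
risk_areas = {"orphaned-service-accounts", "long-lived-api-tokens"}

unaudited = [ident for ident, meta in inventory.items()
             if not meta["reviewed_against"] & risk_areas]
print(unaudited)  # → ['api-token-billing']
```

In a real deployment the thirty-seven AIRT sub-categories would replace the two-element `risk_areas` set, and the inventory would come from an identity management system rather than a literal dict.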

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Widespread use of the framework could shift regulatory focus from human-centric identity rules toward machine-centric ones, prompting updates in standards that currently treat automated agents as secondary.
  • If the cross-jurisdictional map proves workable, it might serve as a template for other domains where AI systems operate across borders, such as data localization or algorithmic accountability.
  • Enterprises that implement the roadmap early may reduce exposure to automated failures that scale faster than human-managed processes.
  • The approach invites empirical testing of whether integrating the six domains actually lowers total risk compared with addressing technical, legal, and geopolitical issues in isolation.

Load-bearing premise

The thirty-seven risk sub-categories are comprehensive enough to represent all relevant threats, and the six-domain structure can be put in place without creating new regulatory or operational conflicts.

What would settle it

An enterprise adopts the full MIGT and four-phase roadmap, then tracks whether machine-identity incidents, compliance violations, or successful state-actor intrusions decline measurably relative to a matched control group that does not adopt it.
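The proposed test is a before/after comparison against a matched control, which amounts to a difference-in-differences estimate. A toy sketch with invented incident counts:

```python
# Hypothetical evaluation sketch for the adoption study described above.
# Counts are machine-identity incidents per year (all numbers invented).
adopter = {"before": 12, "after": 5}   # adopts MIGT + roadmap
control = {"before": 11, "after": 10}  # matched, does not adopt

# Difference-in-differences: change in the adopter minus change in the control.
effect = (adopter["after"] - adopter["before"]) - (control["after"] - control["before"])
print(effect)  # → -6 ; negative means the adopter's decline exceeded the control's
```

A real study would need multiple organizations per arm and significance testing, but the sign of `effect` captures what "decline measurably relative to a matched control group" asks for.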

Original abstract

The governance of artificial intelligence has a blind spot: the machine identities that AI systems use to act. AI agents, service accounts, API tokens, and automated workflows now outnumber human identities in enterprise environments by ratios exceeding 80 to 1, yet no integrated framework exists to govern them. A single ungoverned automated agent produced $5.4-10 billion in losses in the 2024 CrowdStrike outage; nation-state actors including Silk Typhoon and Salt Typhoon have operationalized ungoverned machine credentials as primary espionage vectors against critical infrastructure. This paper makes four original contributions. First, the AI-Identity Risk Taxonomy (AIRT): a comprehensive enumeration of 37 risk sub-categories across eight domains, each grounded in documented incidents, regulatory recognition, practitioner prevalence data, and threat intelligence. Second, the Machine Identity Governance Taxonomy (MIGT): an integrated six-domain governance framework simultaneously addressing the technical governance gap, the regulatory compliance gap, and the cross-jurisdictional coordination gap that existing frameworks address only in isolation. Third, a foreign state actor threat model for enterprise identity governance, establishing that Silk Typhoon, Salt Typhoon, Volt Typhoon, and North Korean AI-enhanced identity fraud operations have already operationalized AI identity vulnerabilities as active attack vectors. Fourth, a cross-jurisdictional regulatory alignment structure mapping enterprise AI identity governance obligations under EU, US, and Chinese frameworks simultaneously, identifying irreconcilable conflicts and providing a governance mechanism for managing them. A four-phase implementation roadmap translates the MIGT into actionable enterprise programs.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims that machine identities (service accounts, API tokens, automated workflows) represent a critical blind spot in AI governance, outnumbering human identities 80:1 in enterprises. It presents the AI-Identity Risk Taxonomy (AIRT) enumerating 37 risk sub-categories across eight domains, grounded in incidents, regulations, and threat data; the Machine Identity Governance Taxonomy (MIGT) as a six-domain integrated framework addressing technical, regulatory, and cross-jurisdictional gaps; a nation-state threat model highlighting actors like Silk Typhoon and Volt Typhoon; a regulatory alignment structure mapping EU, US, and Chinese obligations with conflict management; and a four-phase implementation roadmap. The central motivation cites the 2024 CrowdStrike outage as producing $5.4-10B losses from an ungoverned automated agent.

Significance. If the taxonomies prove comprehensive and the MIGT implementable without introducing new conflicts, the work could offer a timely synthesis for enterprises facing rising machine identity risks amid AI automation and geopolitical tensions. The explicit integration of a foreign actor threat model with cross-jurisdictional regulatory mapping is a constructive contribution that existing frameworks handle only separately. The four-phase roadmap provides a practical translation path. However, the overall significance hinges on stronger empirical validation of category exhaustiveness and resolution of the motivating incident example.

major comments (2)
  1. [Abstract] Abstract and opening motivation: The claim that 'a single ungoverned automated agent produced $5.4-10 billion in losses in the 2024 CrowdStrike outage' is not supported by the documented root cause. Public analyses from CrowdStrike, Microsoft, and independent reports attribute the outage to a faulty sensor content update that triggered Windows kernel panics, with no role for service account credentials, API token misuse, or identity governance failures. Because this example is presented as direct evidence of the technical governance gap that AIRT and MIGT are designed to close, the empirical grounding for the framework's urgency requires correction or replacement with incidents that actually involve machine identity vectors.
  2. [AIRT section] AIRT construction (presumed §3 or equivalent): The manuscript presents the 37 risk sub-categories as comprehensive and grounded in 'documented incidents, regulatory recognition, practitioner prevalence data, and threat intelligence,' yet provides no explicit methodology, coverage audit, or conflict analysis demonstrating that the enumeration is exhaustive or free of overlaps. Given that the central claim is that AIRT fills a previously unaddressed gap, a load-bearing justification for completeness is needed; post-hoc selection of incidents risks under- or over-representation of categories.
minor comments (2)
  1. [MIGT framework] The six-domain MIGT structure is introduced as simultaneously addressing three distinct gaps, but the manuscript would benefit from an explicit mapping table showing how each domain contributes to technical, regulatory, and cross-jurisdictional objectives without creating new inconsistencies.
  2. [Taxonomy presentation] Notation for the 37 sub-categories and eight domains in AIRT could be clarified with a summary table early in the paper to aid reader navigation, especially when later referenced in the threat model and regulatory sections.
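The mapping table requested in the first minor comment is mechanically checkable: each MIGT domain declares which of the three gaps it addresses, and a coverage check confirms none is left unserved. Domain names below are invented for illustration; the paper's six actual domains are not listed in this review.

```python
# Hypothetical domain-to-gap mapping of the kind the referee requests.
GAPS = ("technical", "regulatory", "cross-jurisdictional")
mapping = {
    "identity-lifecycle":      {"technical"},
    "access-policy":           {"technical", "regulatory"},
    "compliance-reporting":    {"regulatory"},
    "jurisdictional-conflict": {"regulatory", "cross-jurisdictional"},
}

# Coverage check: every gap is addressed by at least one domain.
covered = set().union(*mapping.values())
assert covered == set(GAPS)
```

Publishing the mapping in this explicit form would also expose any domain that addresses no gap at all, the inverse failure mode.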

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their detailed and constructive feedback, which highlights important opportunities to strengthen the manuscript's empirical accuracy and methodological transparency. We address each major comment below and will incorporate revisions accordingly.

Point-by-point responses
  1. Referee: [Abstract] Abstract and opening motivation: The claim that 'a single ungoverned automated agent produced $5.4-10 billion in losses in the 2024 CrowdStrike outage' is not supported by the documented root cause. Public analyses from CrowdStrike, Microsoft, and independent reports attribute the outage to a faulty sensor content update that triggered Windows kernel panics, with no role for service account credentials, API token misuse, or identity governance failures. Because this example is presented as direct evidence of the technical governance gap that AIRT and MIGT are designed to close, the empirical grounding for the framework's urgency requires correction or replacement with incidents that actually involve machine identity vectors.

    Authors: We agree that the attribution of the 2024 CrowdStrike outage to an ungoverned automated agent via machine identity failure is factually incorrect. Public post-incident analyses confirm the root cause was a faulty sensor content update triggering kernel panics, with no involvement of service accounts, API tokens, or identity governance lapses. This example was selected to illustrate risks of ungoverned automation but was misapplied to the specific governance gap addressed by AIRT/MIGT. We will revise the abstract, introduction, and motivation sections to remove the claim entirely and replace it with documented machine identity incidents, such as nation-state exploitation of service accounts by actors like Volt Typhoon against critical infrastructure. This correction will better support the urgency of the proposed frameworks. revision: yes

  2. Referee: [AIRT section] AIRT construction (presumed §3 or equivalent): The manuscript presents the 37 risk sub-categories as comprehensive and grounded in 'documented incidents, regulatory recognition, practitioner prevalence data, and threat intelligence,' yet provides no explicit methodology, coverage audit, or conflict analysis demonstrating that the enumeration is exhaustive or free of overlaps. Given that the central claim is that AIRT fills a previously unaddressed gap, a load-bearing justification for completeness is needed; post-hoc selection of incidents risks under- or over-representation of categories.

    Authors: We acknowledge that the manuscript asserts grounding in incidents, regulations, prevalence data, and threat intelligence but does not include an explicit methodology, audit, or overlap analysis for the 37 sub-categories. To provide the required justification, we will add a dedicated subsection (and supporting appendix) describing the construction process: systematic sourcing from breach databases (e.g., Verizon DBIR, CISA reports), regulatory mappings (NIST, GDPR, AI Act drafts), practitioner surveys, and MITRE ATT&CK TTPs for identity vectors; followed by iterative deduplication and coverage checks against eight domains. Each sub-category will include source traceability to demonstrate exhaustiveness and minimize selection bias. This addition will directly address the load-bearing concern. revision: yes
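The "iterative deduplication" the rebuttal promises can be made concrete: sub-categories whose grounding sources are identical are candidate duplicates to merge or differentiate. A sketch with invented sub-category names and source labels:

```python
from collections import defaultdict

# Hypothetical source traceability for three sub-categories (all labels invented).
subcats = {
    "orphaned-service-accounts": frozenset({"DBIR-2025", "CISA-AA25"}),
    "stale-service-accounts":    frozenset({"DBIR-2025", "CISA-AA25"}),
    "long-lived-api-tokens":     frozenset({"MITRE-T1528"}),
}

# Group sub-categories by identical source sets; groups of 2+ need review.
by_sources = defaultdict(list)
for name, sources in subcats.items():
    by_sources[sources].append(name)
duplicates = [names for names in by_sources.values() if len(names) > 1]
print(duplicates)  # → [['orphaned-service-accounts', 'stale-service-accounts']]
```

Identical sources do not prove redundancy, but they flag exactly the overlap risk the referee raises, and the per-category traceability the rebuttal commits to is what makes the check possible.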

Circularity Check

0 steps flagged

No significant circularity; taxonomy synthesized from external incidents and gaps

full rationale

The paper proposes the AIRT (37 risk sub-categories) and MIGT (six-domain framework) as original syntheses grounded in documented incidents, regulatory mappings, practitioner data, and threat intelligence reports. No equations, fitted parameters, predictions, or derivations appear in the provided text or abstract. The central claims rest on external references (e.g., CrowdStrike outage, nation-state actors) rather than reducing to self-referential definitions or self-citation chains. The framework is presented as an organizational integration of pre-existing gaps, not a self-definitional or fitted-input construction. This is self-contained against external benchmarks with no load-bearing circular steps.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 2 invented entities

The central claims rest on domain assumptions about the scale and risks of machine identities plus the novelty and utility of the newly proposed taxonomies; no free parameters or mathematical axioms are present.

axioms (2)
  • domain assumption Machine identities now outnumber human identities in enterprise environments by ratios exceeding 80 to 1.
    Stated as a factual premise establishing the scale of the governance problem.
  • domain assumption Ungoverned machine identities have produced major documented losses and serve as active espionage vectors for nation-state actors.
    Grounded in specific incidents (CrowdStrike, Silk Typhoon, Salt Typhoon) cited in the abstract.
invented entities (2)
  • AI-Identity Risk Taxonomy (AIRT) no independent evidence
    purpose: Comprehensive enumeration of 37 risk sub-categories across eight domains.
    Newly introduced framework presented as an original contribution.
  • Machine Identity Governance Taxonomy (MIGT) no independent evidence
    purpose: Integrated six-domain governance framework addressing technical, regulatory, and cross-jurisdictional gaps.
    Newly introduced framework presented as an original contribution.

pith-pipeline@v0.9.0 · 5597 in / 1625 out tokens · 34379 ms · 2026-05-10T18:53:17.630212+00:00 · methodology

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Decision Evidence Maturity Model for Agentic AI: A Property-Level Method Specification

cs.CY · 2026-04 · unverdicted · novelty 4.0

    DEMM defines four executable evidence-sufficiency categories plus a conflicting category for agentic AI decisions and rolls per-property verdicts into a five-level maturity rubric.

Reference graph

Works this paper leans on

115 extracted references · 9 canonical work pages · cited by 1 Pith paper

  1. [1]

    T. Peck. How AI is reshaping identity governance for CISOs and CIOs. CyberArk Blog, November 2025. Available:https://www.cyberark.com/resources/blog/how-ai-i s-reshaping-identity-governance-for-cisos-and-cios

  2. [2]

    Gyure and K

    W. Gyure and K. Johnson. The human-machine identity blur: A unified framework for cybersecurity risk management in 2025. arXiv:2503.18255 [preprint], March 2025. Available:https://arxiv.org/abs/2503.18255

  3. [3]

    Five identity-driven shifts reshaping enterprise security in 2026

    Delinea. Five identity-driven shifts reshaping enterprise security in 2026. Help Net Security, December 2025. Available:https://www.helpnetsecurity.com/2025/12/ 24/five-identity-driven-shifts-reshaping-enterprise-security-in-2026/

  4. [4]

    Identity and access management in the AI era: 2025 guide

    Identity Defined Security Alliance. Identity and access management in the AI era: 2025 guide. IDSA, 2025. Available:https://www.idsalliance.org/blog/identity-and -access-management-in-the-ai-era-2025-guide/

  5. [5]

    AI and privacy: Shifting from 2024 to 2025

    Cloud Security Alliance. AI and privacy: Shifting from 2024 to 2025. CSA Blog, April

  6. [6]

    Available: https://cloudsecurityalliance.org/blog/2025/04/22/ai-and -privacy-2024-to-2025-embracing-the-future-of-global-legal-development s

  7. [7]

    Chun et al

    Y. Chun et al. Comparative global AI regulation: Policy perspectives from the EU, China, and the US. arXiv:2410.21279, October 2024. Available:https://arxiv.org/ abs/2410.21279

  8. [8]

    L. S. Lim. Artificial intelligence regulation matures: Landscapes of the USA, European Union, and China.Journal of Information Technology, 2025

  9. [9]

    Three rulebooks, one race: AI regulation in the U.S., EU, and China

    Communications of the ACM. Three rulebooks, one race: AI regulation in the U.S., EU, and China. Commun. ACM, February 2026. Available:https://cacm.acm.org /news/three-rulebooks-one-race-ai-regulation-in-the-u-s-eu-and-china/

  10. [10]

    NHI and secrets risk report — H1 2025

    Entro Labs. NHI and secrets risk report — H1 2025. Technical report, Entro Security, July 2025. Available: https://www.cybersecuritytribe.com/news/research-rev eals-44-growth-in-nhis-from-2024-to-2025

  11. [11]

    IBM security X-Force threat intelligence index 2025

    IBM Security. IBM security X-Force threat intelligence index 2025. Technical report, IBM, 2025. Available:https://www.ibm.com/reports/threat-intelligence. Kurtz & Krawiecka (2026)Who Governs the Machine?77

  12. [12]

    C. Duffy. CrowdStrike outage: We finally know what caused it and how much it cost. CNN Business, July 2024. Available:https://www.cnn.com/2024/07/24/tech/crow dstrike-outage-cost-cause

  13. [13]

    S. Snider. CrowdStrike outage drained $5.4 billion from Fortune 500: Report. Informa- tionWeek, July 2024. Available:https://www.informationweek.com/cyber-resil ience/crowdstrike-outage-drained-5-4-billion-from-fortune-500-report

  14. [14]

    Chin and P

    S. Chin and P. Lester. Protecting our edge: Trade secrets and the global AI arms race. CSIS, December 2025. Available:https://www.csis.org/analysis/protecting-o ur-edge-trade-secrets-and-global-ai-arms-race

  15. [15]

    Silk typhoon targeting IT supply chain

    Microsoft Threat Intelligence. Silk typhoon targeting IT supply chain. Microsoft Security Blog, March 2025. Available:https://www.microsoft.com/en-us/securi ty/blog/2025/03/05/silk-typhoon-targeting-it-supply-chain/

  16. [16]

    State of AI agent security 2026 report: When adoption outpaces control, February 2026

    Gravitee.io. State of AI agent security 2026 report: When adoption outpaces control, February 2026. Available:https://www.gravitee.io/blog/state-of-ai-agent-s ecurity-2026-report-when-adoption-outpaces-control

  17. [17]

    Deng et al

    Z. Deng et al. AI agents under threat: A survey of key security challenges and future pathways.ACM Comput. Surv., 57(7):182, February 2025

  18. [19]

    R. Pandey. The agentic AI governance framework: A universal model for risk, ac- countability, and compliance in autonomous systems. SSRN, October 2025. Available: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5652350

  19. [20]

    S. Joshi. Framework for government policy on agentic and generative AI: Governance, regulation, and risk management. SSRN, August 2025. Available:https://papers.s srn.com/sol3/papers.cfm?abstract_id=5511060

  20. [21]

    Shavit, S

    Y. Shavit, S. Agarwal, et al. Practices for governing agentic AI systems. OpenAI, December 2023. Available: https://cdn.openai.com/papers/practices-for-gov erning-agentic-ai-systems.pdf

  21. [22]

    S. Rose, O. Borchert, S. Mitchell, and S. Connelly. Zero trust architecture. Techni- cal Report NIST Special Publication 800-207, National Institute of Standards and Technology, August 2020. Kurtz & Krawiecka (2026)Who Governs the Machine?78

  22. [23]

    Identity management for agentic AI

    OpenID Foundation. Identity management for agentic AI. OpenID Foundation Working Group Report, October 2025. Available:https://openid.net/wp-content/uploads /2025/10/Identity-Management-for-Agentic-AI.pdf

  23. [24]

    A. Chan, R. Salganik, A. Markelius, et al. Harms from increasingly agentic algorithmic systems. InProc. 2023 ACM Conf. Fairness, Accountability, Transparency (FAccT 2023), Chicago, IL, June 2023. Available:https://dl.acm.org/doi/10.1145/35930 13.3594033

  24. [25]

    Holgersson, L

    M. Holgersson, L. Dahlander, H. W. Chesbrough, and M. Bogers. Rethinking AI agents: A principal-agent perspective. California Management Review, July 2025. Available: https://cmr.berkeley.edu/2025/07/rethinking-ai-agents-a-principal-age nt-perspective/

  25. [26]

    Algorithmic foreign influence: Rethinking sovereignty in the age of AI

    Lawfare Media. Algorithmic foreign influence: Rethinking sovereignty in the age of AI. Lawfare, August 2025. Available:https://www.lawfaremedia.org/article/algor ithmic-foreign-influence--rethinking-sovereignty-in-the-age-of-ai

  26. [27]

    Artificial intelligence and state-sponsored cyber espionage

    NYU Journal of Intellectual Property and Entertainment Law. Artificial intelligence and state-sponsored cyber espionage. JIPEL, 2025. Available:https://jipel.law.ny u.edu/artificial-intelligence-and-state-sponsored-cyber-espionage/

  27. [28]

    Ding et al

    J. Ding et al. Spy vs. AI: How artificial intelligence will remake espionage. Foreign Affairs, January 2025. Available:https://www.foreignaffairs.com/united-state s/spy-vs-ai

  28. [29]

    D. B. Johnson. Silk typhoon shifted to specifically targeting IT management companies. CyberScoop, March 2025. Available:https://cyberscoop.com/silk-typhoon-tar gets-it-services/

  29. [30]

    FIMI 101: Foreign information manipulation and interference targeting the 2024 US general election

    DFRLab. FIMI 101: Foreign information manipulation and interference targeting the 2024 US general election. Atlantic Council, September 2024. Available:https: //dfrlab.org/2024/09/26/fimi-101/

  30. [31]

    Perboli, N

    G. Perboli, N. Simionato, and S. Pratali. Navigating the AI regulatory landscape: Balancing innovation, ethics, and global governance.Economic and Political Studies, 13(4):367–397, December 2025

  31. [32]

    AI risk repository (v4)

    MIT AI Risk Repository. AI risk repository (v4). Massachusetts Institute of Technology, December 2025. Available:https://airisk.mit.edu. Kurtz & Krawiecka (2026)Who Governs the Machine?79

  32. [33]

    He et al

    Z. He et al. The emerged security and privacy of LLM agents: A survey with case studies.ACM Comput. Surv., 2025

  33. [34]

    OWASP top 10 for agentic applications 2026

    OWASP GenAI Security Project. OWASP top 10 for agentic applications 2026. OWASP, December 2025. Available: https://genai.owasp.org/resource/owasp-top-10-f or-agentic-applications-for-2026/

  34. [35]

    AI risk management framework (AI RMF 1.0)

    NIST. AI risk management framework (AI RMF 1.0). Technical report, National Institute of Standards and Technology, 2023. Available:https://nvlpubs.nist.gov /nistpubs/ai/NIST.AI.100-1.pdf

  35. [36]

    FBI announces joint cybersecurity advisory related to Salt typhoon

    Federal Bureau of Investigation. FBI announces joint cybersecurity advisory related to Salt typhoon. FBI.gov, August 2025. Available:https://www.fbi.gov/video-repos itory/salttyphoon082725.mp4/view

  36. [37]

    Salt typhoon hacks of telecommunications companies and federal response implications

    Congressional Research Service. Salt typhoon hacks of telecommunications companies and federal response implications. Technical report, Congress.gov, January 2025. Available:https://www.congress.gov/crs-product/IF12798

  37. [38]

    D. B. Johnson. Officials worry salt typhoon apathy is killing momentum for tougher telecom security rules. CyberScoop, March 2026. Available:https://cyberscoop.c om/salt-typhoon-china-telecom-hack-impact-new-jersey/

  38. [39]

    L. Schmitt. Mapping global AI governance: A nascent regime in a fragmented landscape. AI and Ethics, 2:303–314, 2022

  39. [40]

    The state of identity governance report 2026

    Omada Identity. The state of identity governance report 2026. Omada, January 2026. Available: https://omadaidentity.com/resources/analyst-reports/state-of-i ga/

  40. [41]

    2025 identity security landscape

    CyberArk. 2025 identity security landscape. CyberArk, April 2025. Available:https: //www.cyberark.com/press/machine-identities-outnumber-humans-by-more-t han-80-to-1-new-report-exposes-the-exponential-threats-of-fragmented-i dentity-security/

  41. [42]

    Key takeaways from the 2024 ESG report on non-human identity (NHI) management

    AppViewX. Key takeaways from the 2024 ESG report on non-human identity (NHI) management. AppViewX Blog, October 2024. Available:https://www.appviewx.com /blogs/key-takeaways-from-the-2024-esg-report-on-non-human-identity-n hi-management/. Kurtz & Krawiecka (2026)Who Governs the Machine?80

  42. [43]

    The AI agent identity crisis: New research reveals a governance gap, February 2026

    Strata Identity / Cloud Security Alliance. The AI agent identity crisis: New research reveals a governance gap, February 2026. Available:https://www.strata.io/blog/a gentic-identity/the-ai-agent-identity-crisis-new-research-reveals-a-g overnance-gap/

  43. [44]

    Non-human identity solutions, global, 2024–2030

    Research and Markets. Non-human identity solutions, global, 2024–2030. Research and Markets, February 2026. Available:https://www.researchandmarkets.com/repor ts/6217813/non-human-identity-solutions-global

  44. [45]

    Markovic

    S. Markovic. Non-human identities push identity security into uncharted territory. Help Net Security, December 2025. Available:https://www.helpnetsecurity.com/2025 /12/30/identity-security-permissions-sprawl/

  45. [46]

    D. Gupta. The AI identity crisis: NHIs, agents and vibe coding in 2025, October 2025. Available: https://guptadeepak.com/the-identity-crisis-no-ones-talking-a bout-how-ai-agents-and-vibe-coding-are-rewriting-the-rules-of-digital -security/

  46. [47]

    What caused the CrowdStrike outage: A detailed breakdown, August

    Messageware. What caused the CrowdStrike outage: A detailed breakdown, August

  47. [48]

    Available: https://www.messageware.com/what-caused-the-crowdstrike-o utage-a-detailed-breakdown/

  48. [49]

    Delta can sue CrowdStrike over computer outage that caused 7,000 canceled flights

    Reuters. Delta can sue CrowdStrike over computer outage that caused 7,000 canceled flights. Reuters, May 2025. Available:https://www.reuters.com/sustainability /boards-policy-regulation/delta-can-sue-crowdstrike-over-computer-out age-that-caused-7000-canceled-flights-2025-05-19/

  49. [50]

    The lasting impact of the CrowdStrike update outage

    Tufin. The lasting impact of the CrowdStrike update outage. Tufin Blog, June 2025. Available: https://www.tufin.com/blog/lasting-impact-of-crowdstrike-updat e-outage

  50. [51]

    What the 2024 CrowdStrike glitch can teach us about cyber risk

    Harvard Business Review. What the 2024 CrowdStrike glitch can teach us about cyber risk. HBR, January 2025. Available:https://hbr.org/2025/01/what-the-2024-c rowdstrike-glitch-can-teach-us-about-cyber-risk

  51. [52]

    Z. Zorz. Marks and spencer ransomware breach incident. Help Net Security, April 2025. Available: https://www.helpnetsecurity.com/2025/04/29/marks-spencer-ranso mware-breach-incident/

  52. [53]

    NCA retail cyber attacks: Nca arrest four for attacks on m&s, co-op and harrods

    National Crime Agency. NCA retail cyber attacks: Nca arrest four for attacks on m&s, co-op and harrods. NCA, July 2025. Available:https://www.nationalcrimeagency. Kurtz & Krawiecka (2026)Who Governs the Machine?81 gov.uk/news/retail-cyber-attacks-nca-arrest-four-for-attacks-on-m-s-c o-op-and-harrods

  53. [54]

    Identity and access management market size, share, and industry analysis — forecast 2024–2032

    Fortune Business Insights. Identity and access management market size, share, and industry analysis — forecast 2024–2032. Technical report, Fortune Business Insights, September 2024. Available:https://www.fortunebusinessinsights.com/industry -reports/identity-and-access-management-market-100373

  54. [55]

    AI agent standards initiative

    NIST Center for AI Standards and Innovation (CAISI). AI agent standards initiative. National Institute of Standards and Technology, February 2026. Available:https: //www.nist.gov/caisi/ai-agent-standards-initiative

  55. [56]

    Galluzzo, B

    R. Galluzzo, B. Fisher, H. Booth, and J. Roberts. Accelerating the adoption of software and artificial intelligence agent identity and authorization. Draft concept paper, NIST National Cybersecurity Center of Excellence, March 2026. Available:https://www.nc coe.nist.gov/projects/software-and-ai-agent-identity-and-authorization

  56. [57]

    Software andAI agent identity andauthorization

    NISTNCCoE. Software andAI agent identity andauthorization. National Cybersecurity Center of Excellence, 2026. Available:https://www.nccoe.nist.gov/projects/so ftware-and-ai-agent-identity-and-authorization

  57. [58]

    Rizvi, A

    S. Rizvi, A. Kurtz, J. Pfeffer, and M. Rizvi. Securing the Internet of Things (IoT): A security taxonomy for IoT. InProc. 17th IEEE Int. Conf. Trust, Security and Privacy in Computing and Communications (TrustCom/BigDataSE 2018), pages 163–168, New York, NY, August 2018. Available:https://ieeexplore.ieee.org/document/84559 02

  58. [59]

    M. A. Coutinho, A. Ashofteh, and Y. Al Helaly. Risk taxonomies and governance frameworks for generative AI: A review of ethical, cybersecurity, and regulatory chal- lenges. InProc. 20th Iberian Conf. Information Systems and Technologies (CISTI 2025), volume 1717 ofLecture Notes in Networks and Systems, Cham, 2026. Springer. Available:https://link.springer...

  59. [60]

    Kaur et al

    D. Kaur et al. AI governance: A systematic literature review.AI and Ethics, January 2025

  60. [61]

    Moura et al

    J. Moura et al. Prompt injection attacks in large language models and AI agent systems. Information, 17(1):54, 2026. Kurtz & Krawiecka (2026)Who Governs the Machine?82

  61. [62]

    M. A. Ferrag et al. From prompt injections to protocol exploits: Threats in LLM- powered AI agent workflows.ICT Express, 2025. Available:https://www.sciencedir ect.com/science/article/pii/S2405959525001997

  [63] S. Wang, Y. Zhang, Y. Xiao, and Z. Liang. Artificial intelligence policy frameworks in China, the European Union and United States: An analysis based on structure topic model. Technological Forecasting and Social Change, 212:123971, 2025.

  [64] M. Chmura. Agentic AI: The future and governance of autonomous systems. Bloomsbury Intelligence and Security Institute, February 2026. Available: https://bisi.org.uk/reports/agentic-ai-the-future-and-governance-of-autonomous-systems

  [65] R. Csernatoni. Global AI governance: Barriers and pathways forward. International Affairs, 100(3):1275–1294, May 2024.

  [66] H. Dorwart, H. Qu, T. Brautigam, and J. Gong. Preparing for compliance: Key differences between EU and Chinese AI regulations. IAPP, February 2025. Available: https://iapp.org/news/a/preparing-for-compliance-key-differences-between-eu-chinese-ai-regulations

  [67] European Commission. EU AI Act. Official Journal of the European Union, July 2024. Available: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

  [68] EU AI Act Service Desk. Timeline for the implementation of the EU AI Act. European Commission, 2025. Available: https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act

  [69] Pearl Cohen. New guidance under the EU AI Act ahead of its next enforcement date. Pearl Cohen Law, December 2025. Available: https://www.pearlcohen.com/new-guidance-under-the-eu-ai-act-ahead-of-its-next-enforcement-date/

  [70] K&L Gates. EU and Luxembourg update on the European harmonised rules on artificial intelligence. K&L Gates, January 2026. Available: https://www.klgates.com/EU-and-Luxembourg-Update-on-the-European-Harmonised-Rules-on-Artificial-IntelligenceRecent-Developments-1-20-2026

  [71] White House Office of Science and Technology Policy. Winning the race: America’s AI action plan. The White House, July 2025. Available: https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf

  [72] Executive Office of the President. Ensuring a national policy framework for artificial intelligence. Executive Order 14365, Federal Register, December 2025. Available: https://www.federalregister.gov/documents/2025/12/16/2025-23092/ensuring-a-national-policy-framework-for-artificial-intelligence

  [73] Baker Botts. U.S. artificial intelligence law update: Navigating the evolving state and federal regulatory landscape. Baker Botts, January 2026. Available: https://www.bakerbotts.com/thought-leadership/publications/2026/january/us-ai-law-update

  [74] H.-Y. Chen. AI governance and regulation 2026: A complete guide to global frameworks, February 2026. Available: https://www.hungyichen.com/en/insights/ai-governance-regulatory-landscape-2026

  [75] IAPP. Global AI governance law and policy: China. International Association of Privacy Professionals, 2025. Available: https://iapp.org/resources/article/global-ai-governance-china

  [76] ICLG. China’s key developments in artificial intelligence governance in 2025. ICLG, December 2025. Available: https://iclg.com/practice-areas/telecoms-media-and-internet-laws-and-regulations/03-china-s-key-developments-in-artificial-intelligence-governance-in-2025

  [77] Linklaters. China’s 2025 cybersecurity law amendments: Enhanced penalties, expanded extraterritorial application, and AI governance. Linklaters Tech Insights, October 2025. Available: https://techinsights.linklaters.com/post/102lrz5/

  [78] ANSI. China announces action plan for global AI governance. American National Standards Institute, August 2025. Available: https://www.ansi.org/standards-news/all-news/8-1-25-china-announces-action-plan-for-global-ai-governance

  [79] H. Pouget, C. Dennis, et al. The future of international scientific assessments of AI’s risks. Carnegie Endowment for International Peace, August 2024. Available: https://carnegieendowment.org/research/2024/08/the-future-of-international-scientific-assessments-of-ais-risks

  [80] Waging warfare against states: The deployment of artificial intelligence in cyber espionage. AI and Ethics, 2025.

  [81] Navigating the nexus: Geopolitical, international relations and technical dimensions of US-China cyber strategic competition. Cogent Social Sciences, May 2025.

Showing first 80 references.