Pith · machine review for the scientific record

arXiv: 2605.01091 · v1 · submitted 2026-05-01 · 💻 cs.CY · cs.AI · cs.MA

Recognition: unknown

Governing What the EU AI Act Excludes: Accountability for Autonomous AI Agents in Smart City Critical Infrastructure


Pith reviewed 2026-05-09 18:05 UTC · model grok-4.3

classification 💻 cs.CY · cs.AI · cs.MA
keywords EU AI Act · autonomous AI agents · smart city critical infrastructure · accountability deficit · governance architecture · Annex III exclusions · multi-agent systems · regulatory gap analysis

The pith

The EU AI Act excludes safety-component AI in critical infrastructure from key explanation rights and impact assessments, narrowing resident accountability for multi-agent smart-city systems.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows that Annex III point 2 of the EU AI Act removes safety-component AI used in critical infrastructure from Article 86 explanation rights and Article 27 fundamental-rights impact assessments. Provider and deployer duties under Articles 9-15 continue to apply, yet the main resident-facing tools are limited for autonomous systems that interact across agencies, such as traffic signals and power grids affecting the same corridor. Analysis of the four remaining pathways under GDPR Article 22, GDPR transparency rules, tortious liability, and NIS2 reveals each is confined to single-controller and single-decision scopes. The authors respond by outlining AgentGov-SC, a three-layer architecture with 25 governance measures, five conflict-resolution rules, and an autonomy-calibrated activation model that maintains traceability to the Act and related standards. Scenario analysis with documented multi-agent cascades from UAE smart-city deployments illustrates how the measures would activate differently from isolated single-system cases.

Core claim

Annex III, point 2 of the EU AI Act excludes safety-component AI in critical infrastructure from the explanation rights in Article 86 and the fundamental-rights impact assessment in Article 27. Although Articles 9-15 duties for providers and deployers remain, and residual pathways exist under GDPR Article 22, transparency obligations, tortious liability, and NIS2, each pathway is structurally bounded by individual-controller, individual-decision scope. The paper traces this deficit and presents AgentGov-SC, a three-layer architecture (Agent, Orchestration, City) that supplies cross-system accountability through 25 governance measures, five conflict resolution rules, and an autonomy-calibrated activation model.

What carries the argument

AgentGov-SC, a three-layer architecture (Agent, Orchestration, City) specifying 25 governance measures with bidirectional traceability to the EU AI Act, ISO/IEC 42001, and the NIST AI Risk Management Framework, plus five conflict resolution rules and an autonomy-calibrated activation model.

If this is right

  • Residents would gain traceable accountability across interacting autonomous systems rather than only isolated decisions.
  • Governance measures would activate in proportion to the autonomy level and interaction scope of the agents.
  • Conflict resolution rules would allow integration with the EU AI Act and related standards without direct clashes.
  • Scenario analysis would confirm distinct activation patterns for multi-agent corridor cascades versus single-system operations.
  • Bidirectional traceability would enable compliance verification against the Act, ISO/IEC 42001, and NIST framework.
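The proportional-activation idea in the bullets above can be sketched in a few lines of Python. Everything here is an illustrative assumption, not the paper's specification: the measure IDs, the 0-4 autonomy scale, the per-measure thresholds, and the three example measures are invented for the sketch.

```python
from dataclasses import dataclass

# Hypothetical sketch of an autonomy-calibrated activation model.
# Measure IDs, layers, autonomy scale, and thresholds are illustrative
# assumptions, not AgentGov-SC's actual 25-measure specification.

@dataclass
class Measure:
    ident: str          # e.g. "A-01" (Agent), "O-07" (Orchestration), "C-21" (City)
    layer: str          # "agent" | "orchestration" | "city"
    min_autonomy: int   # lowest autonomy level (0-4) at which the measure applies
    cross_system: bool  # True if it only matters when agents interact

def active_measures(measures, autonomy_level, interacting_systems):
    """Return the measures activated for a deployment, in proportion to
    its autonomy level and interaction scope (single- vs multi-system)."""
    multi = interacting_systems > 1
    return [
        m for m in measures
        if autonomy_level >= m.min_autonomy and (multi or not m.cross_system)
    ]

catalogue = [
    Measure("A-01", "agent", 0, False),          # unit-level logging
    Measure("O-07", "orchestration", 2, True),   # cross-system conflict ledger
    Measure("C-21", "city", 3, True),            # corridor-level impact review
]

# Contrast mirroring the paper's scenario pair: an isolated system vs a cascade.
single = active_measures(catalogue, autonomy_level=3, interacting_systems=1)
cascade = active_measures(catalogue, autonomy_level=3, interacting_systems=3)
print([m.ident for m in single])   # only unit-level measures activate
print([m.ident for m in cascade])  # cross-system measures activate too
```

The point of the sketch is the contrast: the same autonomy level triggers a larger measure set once agents interact, which is the "proportional activation" behaviour the scenario analysis is said to confirm.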

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The gap analysis could guide targeted amendments to extend explanation rights to cross-agency critical-infrastructure AI.
  • The activation model could be tested in live smart-city deployments to measure enforcement overhead not captured in scenarios.
  • Similar layered architectures might address multi-agent accountability gaps in other regulated domains such as energy grids or transport networks.
  • Adding resident feedback channels to the City layer could strengthen democratic oversight of infrastructure decisions.

Load-bearing premise

The four residual pathways are each limited to individual-controller and individual-decision scope, and a new three-layer architecture with 25 measures can supply the missing cross-system accountability without creating conflicts with existing laws.

What would settle it

A documented case in which GDPR Article 22 or NIS2 successfully assigns accountability for the combined effect of two or more interacting autonomous AI systems in critical infrastructure would undermine the claimed structural bound on those pathways.

Figures

Figures reproduced from arXiv: 2605.01091 by Muhammad Iqbal, Razi Iqbal, Talal Ashraf Butt.

Figure 1. AgentGov-SC three-layer governance architecture. Layers separate unit-level compliance (Agent), inter-system coordination (Orchestration), and societal …
Figure 2. Autonomy-calibrated governance model plotting publicly documented UAE smart city AI systems against measures. Unlike MI9’s Agency-Risk Index …
Figure 3. The Corridor Cascade: temporal progression of autonomous agent interactions along the E11 corridor. Red arrows indicate cross-domain cascade …
Original abstract

When a traffic signal controller adjusts green phases and a grid manager curtails power on the same corridor, each system may comply with its own obligations. The resident who suffers the combined effect has no single authority to hold accountable and, under the EU AI Act, limited means to obtain an explanation. Annex III, point 2 excludes safety-component AI in critical infrastructure from Article 86 explanation rights and Article 27 fundamental-rights impact assessment. Provider and deployer duties under Articles 9-15 still apply, and residual pathways under the GDPR, NIS2, and tortious liability offer partial coverage. The Act's principal resident-facing accountability instruments are nonetheless narrowed for the autonomous infrastructure systems most likely to interact across agencies. The paper traces this accountability deficit through four residual pathways (GDPR Article 22, GDPR transparency obligations, tortious liability, and NIS2) and shows that each is structurally bounded by individual-controller, individual-decision scope. As a governance response, it presents AgentGov-SC, a three-layer architecture (Agent, Orchestration, City) specifying 25 governance measures with bidirectional traceability to the EU AI Act, ISO/IEC 42001, and the NIST AI Risk Management Framework. Five conflict resolution rules and an autonomy-calibrated activation model complete the design. A scenario analysis traces governance activation through a multi-agent corridor cascade involving three documented UAE smart-city systems, with a contrasting single-system scenario confirming proportional activation. The paper contributes a regulatory gap analysis and governance architecture for an increasingly important class of urban AI deployment that existing frameworks treat as bounded and isolated.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper argues that Annex III point 2 of the EU AI Act excludes safety-component AI in critical infrastructure from Article 86 explanation rights and Article 27 fundamental-rights impact assessments. It traces an accountability deficit through four residual pathways (GDPR Article 22, GDPR transparency obligations, tortious liability, and NIS2), claiming each is structurally bounded by individual-controller and individual-decision scope. As a response, the paper proposes AgentGov-SC, a three-layer architecture (Agent, Orchestration, City) with 25 governance measures, bidirectional traceability to the EU AI Act, ISO/IEC 42001, and NIST AI RMF, five conflict-resolution rules, and an autonomy-calibrated activation model, illustrated via a multi-agent corridor cascade scenario using documented UAE smart-city systems contrasted with a single-system case.

Significance. If the gap analysis holds and the architecture can be implemented without legal conflicts, the work would provide a concrete, standards-traceable governance framework for coordinated autonomous AI agents in smart-city critical infrastructure—an increasingly relevant deployment class that existing frameworks treat as isolated. The bidirectional traceability to ISO/NIST and the proportional activation model in the scenario are strengths that could support practical adoption. The absence of empirical validation or testing against real deployments, however, limits the immediate significance of the proposed solution.

major comments (2)
  1. [gap analysis of residual pathways] The central claim that the four residual pathways are each 'structurally bounded by individual-controller, individual-decision scope' (abstract and gap-analysis section) rests on textual readings of terms such as 'individual' without citations to CJEU precedents on joint controllership, EDPB opinions, or NIS2 enforcement examples involving multi-agent cascades. This is load-bearing for the necessity of the 25-measure AgentGov-SC architecture; if expansive interpretations apply to cross-agency coordination (e.g., traffic + grid), the identified deficit narrows and the new framework's justification is reduced.
  2. [AgentGov-SC architecture and scenario analysis] § on AgentGov-SC architecture and scenario analysis: the proposal that the three-layer design with 25 measures and five conflict rules resolves the accountability deficit is supported only by a high-level specification and one illustrative multi-agent scenario (UAE corridor cascade) contrasted with a single-system case. No empirical validation, error analysis, or testing against real deployments is provided, which is required to substantiate that the architecture supplies the missing cross-system accountability without creating conflicts with existing laws.
minor comments (2)
  1. [architecture description] The autonomy-calibrated activation model is referenced but its parameters, calibration method, and decision thresholds are not fully specified, which would aid reproducibility and comparison with existing risk frameworks.
  2. [governance measures] Consider adding a short table mapping the 25 measures explicitly to the EU AI Act articles, ISO 42001 controls, and NIST functions to strengthen the bidirectional traceability claim.
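The mapping table suggested in minor comment 2 could be prototyped as a small bidirectional index. The measure IDs, AI Act articles, ISO/IEC 42001 control codes, and NIST AI RMF function names below are illustrative placeholders, not the paper's actual traceability mapping.

```python
# Hypothetical traceability sketch: the measure IDs and their mappings are
# invented placeholders; the paper's real 25-measure table is not reproduced.

forward = {
    "A-01": {"ai_act": ["Art. 12"], "iso42001": ["A.6.2"], "nist_rmf": ["MEASURE"]},
    "O-07": {"ai_act": ["Art. 14"], "iso42001": ["A.8.3"], "nist_rmf": ["MANAGE"]},
    "C-21": {"ai_act": ["Art. 27"], "iso42001": ["A.5.2"], "nist_rmf": ["GOVERN"]},
}

def invert(mapping, framework):
    """Build the reverse direction: framework clause -> governance measures.
    Bidirectional traceability means both lookups must be answerable."""
    reverse = {}
    for measure, refs in mapping.items():
        for clause in refs[framework]:
            reverse.setdefault(clause, []).append(measure)
    return reverse

by_article = invert(forward, "ai_act")
print(by_article["Art. 27"])  # measures claimed to discharge that obligation
```

A table of this shape would let a reviewer verify the traceability claim in both directions: from each measure to the clauses it implements, and from each clause to the measures that cover it, with uncovered clauses surfacing as missing keys.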

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive and detailed comments, which highlight important areas for strengthening the legal analysis and clarifying the scope of the proposed architecture. We address each major comment below and indicate the revisions we will make to the manuscript.

Point-by-point responses
  1. Referee: [gap analysis of residual pathways] The central claim that the four residual pathways are each 'structurally bounded by individual-controller, individual-decision scope' (abstract and gap-analysis section) rests on textual readings of terms such as 'individual' without citations to CJEU precedents on joint controllership, EDPB opinions, or NIS2 enforcement examples involving multi-agent cascades. This is load-bearing for the necessity of the 25-measure AgentGov-SC architecture; if expansive interpretations apply to cross-agency coordination (e.g., traffic + grid), the identified deficit narrows and the new framework's justification is reduced.

    Authors: We agree that incorporating additional legal citations will strengthen the gap analysis and better substantiate the claim. The core argument relies on the textual scope of key terms (e.g., 'individual' in GDPR Article 22 and analogous limitations in the other pathways), but we will revise the relevant section to include references to CJEU case law on joint controllership, specifically Cases C-25/17 (Wirtschaftsakademie) and C-40/17 (Fashion ID), along with EDPB Guidelines 07/2020 on controllers and processors. For NIS2, we will add discussion of enforcement examples and limitations in cross-system scenarios drawn from available national implementation reports. These additions will show that even under broader interpretations of joint responsibility, the specific cross-agent, cross-agency accountability deficit in critical infrastructure remains unaddressed, preserving the justification for AgentGov-SC. revision: yes

  2. Referee: [AgentGov-SC architecture and scenario analysis] § on AgentGov-SC architecture and scenario analysis: the proposal that the three-layer design with 25 measures and five conflict rules resolves the accountability deficit is supported only by a high-level specification and one illustrative multi-agent scenario (UAE corridor cascade) contrasted with a single-system case. No empirical validation, error analysis, or testing against real deployments is provided, which is required to substantiate that the architecture supplies the missing cross-system accountability without creating conflicts with existing laws.

    Authors: We acknowledge that the manuscript offers a conceptual specification of the three-layer architecture, the 25 measures, conflict-resolution rules, and an illustrative scenario based on documented UAE smart-city systems rather than empirical testing or error analysis. As a regulatory gap analysis and governance proposal, the paper prioritizes traceability to the EU AI Act, ISO/IEC 42001, and NIST AI RMF, with the scenario demonstrating proportional activation. Full empirical validation would require access to operational deployment data and pilot testing, which is beyond the scope of this work. We will revise by adding a dedicated limitations section that explicitly discusses the illustrative nature of the scenario, the absence of real-world testing, and outlines future research directions for empirical validation and legal conflict analysis. revision: partial

Circularity Check

0 steps flagged

No circularity: gap analysis and new architecture rest on external legal texts and standards

Full rationale

The paper's derivation proceeds by textual interpretation of the EU AI Act (Annex III exclusions from Arts. 86 and 27), followed by analysis of four residual pathways (GDPR Art. 22, transparency duties, tort, NIS2) whose individual-controller/individual-decision bounds are asserted from the wording of those external instruments. It then introduces the independent AgentGov-SC three-layer architecture and 25 measures with traceability to ISO/IEC 42001 and NIST AI RMF. No equations, fitted parameters, self-defined terms, or load-bearing self-citations appear; the central claims do not reduce to the paper's own inputs by construction. This is the normal non-circular pattern for regulatory gap analysis plus design proposal.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The central claim rests on an interpretation of the EU AI Act text and the introduction of a new governance construct without independent empirical grounding or formal verification.

axioms (2)
  • domain assumption Annex III point 2 of the EU AI Act excludes safety-component AI in critical infrastructure from Article 86 and Article 27 obligations
    Stated directly in the abstract as the starting regulatory fact.
  • domain assumption The four residual pathways are each limited to individual-controller scope
    Claimed as the outcome of the gap analysis.
invented entities (1)
  • AgentGov-SC no independent evidence
    purpose: Three-layer governance architecture to supply cross-system accountability for autonomous AI agents in smart cities
    Newly proposed construct with 25 measures, five conflict rules, and activation model.

pith-pipeline@v0.9.0 · 5594 in / 1589 out tokens · 33501 ms · 2026-05-09T18:05:26.783436+00:00 · methodology


Reference graph

Works this paper leans on

95 extracted references · 38 canonical work pages · 3 internal anchors

  1. [1]

    Race After Technology: Abolitionist Tools for the New Jim Code

    Benjamin, R., 2019. Race After Technology: Abolitionist Tools for the New Jim Code. Polity, Medford, MA. URL: https://www.politybooks.com/bookdetail?book_slug=race-after-technology-abolitionist-tools-for-the-new-jim-code--9781509526390

  2. [2]

    Algorithmic injustice: A relational ethics approach

    Birhane, A., 2021. Algorithmic injustice: A relational ethics approach. Patterns 2, 100205. doi:10.1016/j.patter.2021.100205

  3. [3]

    Algorithmic transparency for the smart city

    Brauneis, R., Goodman, E.P., 2018. Algorithmic transparency for the smart city. Yale Journal of Law & Technology 20, 103–176. URL: https://yjolt.org/sites/default/files/20_yale_j._l._tech._103.pdf

  4. [4]

    Smart urbanism and smart citizenship: The neoliberal logic of ‘citizen-focused’ smart cities in Europe

    Cardullo, P., Kitchin, R., 2019. Smart urbanism and smart citizenship: The neoliberal logic of ‘citizen-focused’ smart cities in Europe. Environment and Planning C: Politics and Space 37, 813–830. doi:10.1177/0263774X18806508

  5. [5]

    Request for information regarding security considerations for artificial intelligence agents

    Center for AI Standards and Innovation, National Institute of Standards and Technology, 2026. Request for information regarding security considerations for artificial intelligence agents. URL: https://www.federalregister.gov/public-inspection/2026-00206/request-for-information-security-considerations-for-artificial-intelligence-agents . docket NIST...

  6. [6]

    Algorithmic transparency recording standard (ATRS) guidance

    Central Digital and Data Office, UK Government, 2024. Algorithmic transparency recording standard (ATRS) guidance. URL: https://www.gov.uk/government/collections/algorithmic-transparency-recording-standard-hub

  7. [7]

    Visibility into AI agents

    Chan, A., Ezell, C., Kaufmann, M., Wei, K., Hammond, L., Bradley, H., Bluemke, E., Rajkumar, N., Krueger, D., Kolt, N., Heim, L., Anderljung, M., 2024. Visibility into AI agents, in: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, pp. 958–973. doi:10.1145/3630106.3658948

  8. [8]

    Helsinki AI register

    City of Helsinki, 2020. Helsinki AI register. URL: https://ai.hel.fi/en/ai-register/

  9. [9]

    Cobbe, J., Lee, M.S.A., Singh, J., 2021. Reviewable automated decision-making: A framework for accountable algorithmic systems, in: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pp. 598–609. doi:10.1145/3442188.3445921

  10. [10]

    Algorhythmic governance: Regulating the ‘heartbeat’ of a city using the internet of things

    Coletta, C., Kitchin, R., 2017. Algorhythmic governance: Regulating the ‘heartbeat’ of a city using the internet of things. Big Data & Society 4, 1–16. doi:10.1177/2053951717742418

  11. [11]

    Case C-634/21 (SCHUFA holding (scoring))

    Court of Justice of the European Union, 2023. Case C-634/21 (SCHUFA holding (scoring)). Judgment of the Court (First Chamber). URL: https://curia.europa.eu/juris/document/document.jsf?docid=2804 . ECLI:EU:C:2023:957, 7 December 2023

  13. [13]

    Case C-203/22, CK v Dun & Bradstreet Austria GmbH and Magistrat der Stadt Wien

    Court of Justice of the European Union, 2025. Case C-203/22, CK v Dun & Bradstreet Austria GmbH and Magistrat der Stadt Wien. Judgment of the Court (First Chamber). URL: https://curia.europa.eu/juris/liste.jsf?num=C-203/22 . ECLI:EU:C:2025:117, 27 February 2025

  14. [14]

    Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence

    Crawford, K., 2021. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. doi:10.12987/9780300252392

  15. [15]

    Urban artificial intelligence: From automation to autonomy in the smart city

    Cugurullo, F., 2020. Urban artificial intelligence: From automation to autonomy in the smart city. Frontiers in Sustainable Cities 2, 38. doi:10.3389/frsc.2020.00038

  16. [16]

    Frankenstein Urbanism: Eco, Smart and Autonomous Cities, Artificial Intelligence and the End of the City

    Cugurullo, F., 2021. Frankenstein Urbanism: Eco, Smart and Autonomous Cities, Artificial Intelligence and the End of the City. Routledge, Abingdon. doi:10.4324/9781003046684

  17. [17]

    The rise of AI urbanism in post-smart cities: A critical commentary on urban artificial intelligence

    Cugurullo, F., Caprotti, F., Cook, M., Karvonen, A., McGuirk, P., Marvin, S., 2024. The rise of AI urbanism in post-smart cities: A critical commentary on urban artificial intelligence. Urban Studies 61, 1168–1182. doi:10.1177/00420980231203386

  18. [18]

    DEWA adoption of Smart Dubai’s ethical AI toolkit

    Dubai Electricity and Water Authority (DEWA), 2020a. DEWA adoption of Smart Dubai’s ethical AI toolkit. URL: https://www.digitaldubai.ae/initiatives/ai-principles-ethics . DEWA was an early adopter of the Smart Dubai (now Digital Dubai) Ethical AI Toolkit

  19. [19]

    DEWA wins innovative power technology award for generation technology and innovation centre (GTIC)

    Dubai Electricity and Water Authority (DEWA), 2020b. DEWA wins innovative power technology award for generation technology and innovation centre (GTIC). URL: https://www.dewa.gov.ae/en/about-us/media-publications/latest-news/2020/11/dewa-wins-innovative-power-technology-of-the-yea . reports the GTIC system using digital twinning, AI and ML to control...

  20. [20]

    GTIC at M-Station: AI and machine learning for gas turbine control

    Dubai Electricity and Water Authority (DEWA), 2020c. GTIC at M-Station: AI and machine learning for gas turbine control. URL: https://www.dewa.gov.ae/en/about-us/media-publications/latest-news/2020/11/dewa-wins-innovative-power-technology-of-the-yea . joint DEWA–Siemens Energy project. Project savings estimated at AED 17 million annually upon full deployment

  21. [21]

    DIFC data protection regulations 2023

    Dubai International Financial Centre, 2023a. DIFC data protection regulations 2023. URL: https://www.difc.com/business/registrars-and-commissioners/commissioner-of-data-protection . effective 1 September 2023; supplementing DIFC Data Protection Law No. 5 of 2020

  22. [22]

    Regulation 10 on processing personal data through autonomous and semi-autonomous systems

    Dubai International Financial Centre, 2023b. Regulation 10 on processing personal data through autonomous and semi-autonomous systems. URL: https://www.difc.com/business/registrars-and-commissioners/commissioner-of-data-protection/regulation-10 . effective 1 September 2023

  23. [23]

    RTA expands traffic incident management unit in collaboration with Dubai Police

    Dubai Media Office, 2024. RTA expands traffic incident management unit in collaboration with Dubai Police. URL: https://mediaoffice.ae/en/news/2024/march/17-03/rta-and-dubai-police . March 2024 announcement of expansion to 17 corridors/951 km total coverage

  24. [24]

    RTA launches traffic signal control system upgrade using AI and digital twin technology

    Dubai Media Office, 2025. RTA launches traffic signal control system upgrade using AI and digital twin technology. URL: https://www.mediaoffice.ae/en/news/2025/february/24-02/rta-launches-traffic-signal-control-system-upgrade-using-ai-and-digital-twin-technology . February 2025 Dubai Media Office announcement of UTC-UX Fusion deployment

  25. [25]

    Population bulletin, Emirate of Dubai

    Dubai Statistics Center, 2024. Population bulletin, Emirate of Dubai. URL: https://www.dsc.gov.ae/Publication/Population%20Bulletin%20Emirate%20of%20Dubai%20-%202024.pdf . 2024 Population Bulletin showing Dubai population of 4,248,200 (end of 2024). Earlier bulletins follow same URL pattern by year

  26. [26]

    Dubai demand side management (DSM) strategy 2030

    Dubai Supreme Council of Energy, 2024. Dubai demand side management (DSM) strategy 2030. URL: https://www.dubaisce.gov.ae/en/Pages/default.aspx

  27. [27]

    How Siemens Energy powers the UAE’s net-zero journey: Interview with Khalid bin Hadi

    Economy Middle East, 2024. How Siemens Energy powers the UAE’s net-zero journey: Interview with Khalid bin Hadi. URL: https://economymiddleeast.com/news/khalid-bin-hadi-siemens-energy-interview/ . confirms Gas Turbine Intelligent Controller (GTIC) as Siemens Energy/DEWA partnership at Jebel Ali (M-Station). Citation key retains siemens_energy prefi...

  28. [28]

    The unified control framework: Establishing a common foundation for enterprise AI governance, risk management and regulatory compliance

    Eisenberg, I.W., Gamboa, L., Sherman, E., 2025. The unified control framework: Establishing a common foundation for enterprise AI governance, risk management and regulatory compliance. URL: https://arxiv.org/abs/2503.05937 , doi:10.48550/arXiv.2503.05937 , arXiv:2503.05937

  29. [29]

    Toward adaptive categories: Dimensional governance for agentic AI

    Engin, Z., Hand, D., 2025. Toward adaptive categories: Dimensional governance for agentic AI. URL: https://arxiv.org/abs/2505.11579 , doi:10.48550/arXiv.2505.11579 , arXiv:2505.11579

  30. [30]

    Standardisation request to the European standardisation organisations in support of Union policy on artificial intelligence

    European Commission, 2023. Standardisation request to the European standardisation organisations in support of Union policy on artificial intelligence. URL: https://ec.europa.eu/growth/tools-databases/enorm/mandate/593_en . C(2023) 3215 final, 22 May 2023

  31. [31]

    Implementation timeline of the artificial intelligence act

    European Commission, 2024. Implementation timeline of the artificial intelligence act. URL: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

  32. [32]

    The general-purpose AI code of practice

    European Commission, 2025a. The general-purpose AI code of practice. URL: https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai . final version published 10 July 2025

  33. [33]

    Guidelines on prohibited artificial intelligence practices established by regulation (EU) 2024/1689 (AI Act)

    European Commission, 2025b. Guidelines on prohibited artificial intelligence practices established by regulation (EU) 2024/1689 (AI Act). URL: https://digital-strategy.ec.europa.eu/en/library/commission-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act . C(2025) 884 final, 4 February 2025

  34. [34]

    Guidelines on the definition of an artificial intelligence system established by regulation (EU) 2024/1689 (AI Act)

    European Commission, 2025c. Guidelines on the definition of an artificial intelligence system established by regulation (EU) 2024/1689 (AI Act). URL: https://digital-strategy.ec.europa.eu/en/library/commission-guidelines-definition-artificial-intelligence-system-established-ai-act . C(2025) 924 final, 6 February 2025

  35. [35]

    European Commission, 2025d. Proposal for a regulation amending regulations (EU) 2016/679, (EU) 2018/1724, (EU) 2018/1725, (EU) 2023/2854 and directives 2002/58/EC, (EU) 2022/2555 and (EU) 2022/2557 as regards the simplification of the digital legislative framework (Digital Omnibus). URL: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex...

  36. [36]

    Withdrawal of the proposal for an AI liability directive

    European Commission, 2025e. Withdrawal of the proposal for an AI liability directive. URL: https://commission.europa.eu/strategy-and-policy/strategy-documents/commission-work-programme_en . Commission Work Programme 2025 (COM(2025) 45 final), Annex IV

  37. [37]

    Article 27: Fundamental rights impact assessment for high-risk AI systems

    European Commission, AI Act Service Desk, 2025. Article 27: Fundamental rights impact assessment for high-risk AI systems. URL: https://ai-act-service-desk.ec.europa.eu/en/ai-act/article-27

  38. [39]

    Regulation (EU) 2016/679 (General Data Protection Regulation)

    Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data (General Data Protection Regulation). Official Journal of the European Union. URL: https://eur-lex.europa.eu/eli/reg/2016/679/oj . OJ L 119, 4.5.2016, p. 1–88

  40. [41]

    Directive (EU) 2022/2555 (NIS2 Directive)

    European Parliament and Council of the European Union, 2022. Directive (EU) 2022/2555 on measures for a high common level of cybersecurity across the Union (NIS2 Directive). Official Journal of the European Union. URL: https://eur-lex.europa.eu/eli/dir/2022/2555/oj . OJ L 333, 27.12.2022, p. 80–152

  41. [42]

    Directive (EU) 2024/2853 on liability for defective products

    European Parliament and Council of the European Union, 2024a. Directive (EU) 2024/2853 on liability for defective products. Official Journal of the European Union. URL: https://eur-lex.europa.eu/eli/dir/2024/2853/oj . OJ L 2024/2853, 18.11.2024

  42. [43]

    Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)

    European Parliament and Council of the European Union, 2024b. Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union. URL: https://eur-lex.europa.eu/eli/reg/2024/1689/oj . OJ L 2024/1689, 12.7.2024

  43. [44]

    Waymo chaos during San Francisco power outage likely due to ‘operational management failure’ instead of software flaw, expert says

    Fortune, 2025. Waymo chaos during San Francisco power outage likely due to ‘operational management failure’ instead of software flaw, expert says. URL: https://fortune.com/2025/12/22/waymo-ai-san-francisco-power-outage-operational-management-failure-software/ . December 2025 PG&E substation fire affected approximately 130,000 customers; Waymo ro...

  44. [45]

    Governance-as-a-service: A multi-agent framework for AI system compliance and policy enforcement

    Gaurav, S., Heikkonen, J., Chaudhary, J., 2025. Governance-as-a-service: A multi-agent framework for AI system compliance and policy enforcement. URL: https://arxiv.org/abs/2508.18765 , doi:10.48550/arXiv.2508.18765 , arXiv:2508.18765

  45. [46]

    AI and the Transformation of Accountability and Discretion in Urban Governance

    Goldsmith, S., Yang, J.T., 2024. AI and the transformation of accountability and discretion in urban governance. Data-Smart City Solutions, Bloomberg Center for Cities at Harvard University. Also arXiv:2502.13101. URL: https://ssrn.com/abstract=4968086 , doi:10.2139/ssrn.4968086

  46. [47]

    Decree no. (4) of 2021 establishing the supreme committee for crisis and disaster management

    Government of Dubai, 2021. Decree no. (4) of 2021 establishing the supreme committee for crisis and disaster management. Dubai Official Gazette. URL: https://dlp.dubai.gov.ae/

  47. [48]

    Government of Dubai, 2023. Law no. (9) of 2023 regulating the operation of autonomous vehicles in the Emirate of Dubai. Dubai Official Gazette. URL: https://dlp.dubai.gov.ae/Legislation%20Reference/2023/Law%20No.%20(9)%20of%202023%20Regulating%20the%20Operation%20of%20Autonomous%20Vehicles.html . issued 6 April 2023; published in Official Gazette ...

  48. [49]

    Executive council resolution no

    Government of Dubai, Executive Council, 2021. Executive council resolution no. (34) of 2021 forming the committee supervising the dubai Oyoon (Eyes) project. Dubai Official Gazette. URL: https://dlp.dubai.gov.ae/Legisla tion%20Reference/2021/Executive%20Council% 20Resolution%20No.%20(34)%20of%202021%20Fo rming%20the%20Committee%20Supervising.html

  49. [50]

    Government of the United Arab Emirates, 2024. UAE AI strategy and initiatives factsheet. URL: https://u.ae/en/about-the-uae/strategies-initiatives-and-awards/strategies-plans-and-visions

  50. [51]

    Green, B., 2019. The Smart Enough City: Putting Technology in Its Place to Reclaim Our Urban Future. MIT Press. doi:10.7551/mitpress/11555.001.0001

  51. [52]

    Gulf News, 2021. Dubai’s Oyoon surveillance network now spans 300,000+ cameras. URL: https://gulfnews.com/uae/government/sheikh-mohammed-bin-rashid-briefed-about-oyoon-security-surveillance-programme-in-dubai-1.80650875

  52. [53]

    Hacker, P., 2023. The European AI liability directives — critique of a half-hearted approach and lessons for the future. Computer Law & Security Review 51, 105871. doi:10.1016/j.clsr.2023.105871

  53. [54]

    Hagendorff, T., 2020. The ethics of AI ethics: An evaluation of guidelines. Minds and Machines 30, 99–120. doi:10.1007/s11023-020-09517-8

  54. [55]

    Infocomm Media Development Authority, 2026. Model AI governance framework for agentic AI. URL: https://www.imda.gov.sg/-/media/imda/files/about/emerging-tech-and-research/artificial-intelligence/mgf-for-agentic-ai.pdf. Launched at World Economic Forum, 22 January 2026

  55. [56]

    ISO/IEC, 2023. ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system. International Standard. URL: https://www.iso.org/standard/42001

  56. [57]

    Jackson, F., 2025. Designing a policy engine for agentic AI systems: From governance requirements to runtime enforcement. URL: https://ssrn.com/abstract=5904104, doi:10.2139/ssrn.5904104

  57. [58]

    Jobin, A., Ienca, M., Vayena, E., 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence 1, 389–399. doi:10.1038/s42256-019-0088-2

  58. [59]

    Kalyuzhnaya, A., Mityagin, S., Lutsenko, E., Getmanov, A., Aksenkin, Y., Fatkhiev, K., Fedorin, K., Nikitin, N.O., Chichkova, N., Vorona, V., Boukhanovsky, A., 2025. LLM agents for smart city management: Enhancing decision support through multi-agent AI systems. Smart Cities 8. doi:10.3390/smartcities8010019

  60. [61]

    Kaminski, M.E., Malgieri, G., 2025. The right to explanation in the AI act, in: The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence. URL: https://ssrn.com/abstract=5194301, doi:10.2139/ssrn.5194301. U of Colorado Law Legal Studies Research Paper No. 25-9

  61. [62]

    Khaleej Times, 2024. Dubai’s smart traffic system to cover 100% of the main road network by 2026. URL: https://www.khaleejtimes.com/uae/transport/dubai-smart-traffic-system-to-cover-100-of-the-main-road-network-by-2026. Reports 311 RTA traffic surveillance cameras in Phase I; Phase II to expand coverage to 100% by 2026. Citation key retains ext...

  62. [63]

    Kitchin, R., 2014. The real-time city? Big data and smart urbanism. GeoJournal 79, 1–14. doi:10.1007/s10708-013-9516-8

  63. [64]

    Kitchin, R., 2016. The ethics of smart cities and urban science. Philosophical Transactions of the Royal Society A 374, 20160115. doi:10.1098/rsta.2016.0115

  64. [65]

    Kolt, N., 2025. Governing AI agents. Notre Dame Law Review 101. URL: https://arxiv.org/abs/2501.07913, doi:10.2139/ssrn.4772956. Forthcoming. Also available as arXiv:2501.07913 and SSRN abstract_id=4772956

  65. [66]

    Krafft, P.M., Young, M., Katell, M., Lee, J.E., Narayan, S., Epstein, M., Dailey, D., Herman, B., Tam, A., Guetler, V., Bintz, C., Raz, D., Jobe, P.O., Putz, F., Robick, B., Barghouti, B., 2021. An action-oriented AI policy toolkit for technology audits by community advocates and activists, in: Proceedings of the 2021 ACM Conference on Fairness, Accoun...

  66. [67]

    Laux, J., Wachter, S., Mittelstadt, B., 2024. Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk. Regulation & Governance 18, 3–32. doi:10.1111/rego.12512

  67. [68]

    Leveson, N.G., 2011. Engineering a Safer World: Systems Thinking Applied to Safety. MIT Press, Cambridge, MA. doi:10.7551/mitpress/8179.001.0001

  68. [69]

    Ministry of Human Resources and Emiratisation, UAE (MoHRE), 2024. MoHRE implements midday break from 15 June to 15 September 2024. URL: https://www.mohre.gov.ae/en/media-centre/news/31/5/2024/mohre-implements-midday-break-from-15-june-to-15-september-2024.aspx. 20th consecutive year of UAE midday break rule prohibiting outdoor work between 12:30 PM an...

  69. [70]

    Mittelstadt, B., 2019. Principles alone cannot guarantee ethical AI. Nature Machine Intelligence 1, 501–507. doi:10.1038/s42256-019-0114-4

  70. [71]

    Nannini, L., Smith, A.L., Maggini, M.J., Panai, E., Feliciano, S., Tiulkanov, A., Maran, E., Gealy, J., Bisconti, P., 2026. AI agents under EU law: A compliance architecture for AI providers. URL: https://arxiv.org/abs/2604.04604, doi:10.48550/arXiv.2604.04604, arXiv:2604.04604

  71. [72]

    National Institute of Standards and Technology, 2026. Request for information regarding security considerations for artificial intelligence agents (Federal Register notice). URL: https://www.federalregister.gov/documents/2026/01/08/2026-00206/request-for-information-regarding-security-considerations-for-artificial-intelligence-agents. 91 FR 700; FR ...

  72. [73]

    OWASP GenAI Security Project, 2025. OWASP top 10 for agentic applications. URL: https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/. Released December 10, 2025

  73. [74]

    Pandey, R., 2025. The agentic AI governance framework: A universal model for risk, accountability, and compliance in autonomous systems. URL: https://ssrn.com/abstract=5652350, doi:10.2139/ssrn.5652350

  74. [75]

    Roads and Transport Authority, Government of Dubai. Launching traffic signal control system upgrade using AI and digital twin technology (UTC-UX fusion). URL: https://www.rta.ae/wps/portal/rta/ae/home/news-and-media/all-news/NewsDetails/launching-traffic-signal-control-system-upgrade-using-ai-and-digital-twin-technology. Phase 1 completed September 2025 with 16–37% efficiency gains across key ...

  76. [77]

    Saudi Data and AI Authority (SDAIA), 2023. Principles and controls of AI ethics. URL: https://sdaia.gov.sa/en/SDAIA/about/Documents/ai-principles.pdf. Final version published September 2023; seven core principles, four-tier risk classification

  77. [78]

    Sawhney, N., 2023. Contestations in urban mobility: Rights, risks, and responsibilities for urban AI. AI & Society 38, 1083–1098. doi:10.1007/s00146-022-01502-2

  78. [79]

    Schmitz, C., Rystrøm, J., Batzner, J., 2025. Oversight structures for agentic AI in public-sector organizations, in: Proceedings of the 1st Workshop for Research on Agent Language Models (REALM 2025), Association for Computational Linguistics, Vienna, Austria. pp. 298–308. URL: https://aclanthology.org/2025.realm-1.21/, doi:10.18653/v1/2025.realm-1.21

  79. [80]

    Selbst, A.D., Boyd, d., Friedler, S.A., Venkatasubramanian, S., Vertesi, J., 2019. Fairness and abstraction in sociotechnical systems, in: Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 59–68. doi:10.1145/3287560.3287598

  80. [81]

    Shavit, Y., Agarwal, S., Brundage, M., Adler, S., O’Keefe, C., Campbell, R., Lee, T., Mishkin, P., Eloundou, T., Hickey, A., Slama, K., Ahmad, L., McMillan, P., Beutel, A., Passos, A., Robinson, D.G., 2023. Practices for governing agentic AI systems. OpenAI White Paper. URL: https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf
