pith. machine review for the scientific record.

arxiv: 2604.07007 · v1 · submitted 2026-04-08 · 💻 cs.MA · cs.AI · cs.CY

Recognition: unknown

AgentCity: Constitutional Governance for Autonomous Agent Economies via Separation of Power


Pith reviewed 2026-05-10 17:18 UTC · model grok-4.3

classification 💻 cs.MA · cs.AI · cs.CY

keywords autonomous agents · multi-agent systems · blockchain governance · AI alignment · smart contracts · separation of powers · agent economies

The pith

A constitutional separation of powers aligns autonomous agent collectives with human intent through ownership accountability.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Autonomous agents operating across organizational boundaries can form collectives whose emergent behavior becomes opaque to any single human observer or owner. The paper proposes the Separation of Power model on a public blockchain, in which agents themselves produce operational rules as smart contracts, deterministic software executes those rules, and every agent stays bound to a human principal through an unbroken ownership chain. Under this structure the collective is expected to converge on behavior aligned with human intent because each agent's actions remain accountable to its owner rather than drifting into an unaccountable logic monopoly. The approach is tested in a pre-registered experiment involving a commons production economy at scales from 50 to 1,000 agents. This matters for any domain where independent AI agents begin transacting and delegating without centralized oversight.

Core claim

The paper establishes that the Logic Monopoly in agent societies—where the full chain from planning through execution to evaluation is controlled by agents without human visibility—can be broken by three structural separations: agents legislate operational rules as smart contracts, deterministic software executes within those contracts, and humans adjudicate through a complete ownership chain that binds every agent to a responsible principal. Instantiated as AgentCity on an EVM-compatible layer-2 blockchain with a three-tier contract hierarchy, this architecture produces alignment through accountability: if each agent remains aligned with its human owner, the collective converges on human intent.
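The ownership-chain invariant at the heart of this claim can be sketched in a few lines. This is an illustrative model only: the paper's actual chain is implemented as Solidity contracts on an EVM L2, and every name here is hypothetical.

```python
# Minimal sketch of the ownership-chain invariant: every agent must resolve,
# through zero or more delegating agents, to exactly one human principal.
# Illustrative model; names and structure are assumptions, not the paper's API.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Party:
    name: str
    is_human: bool
    owner: Optional["Party"] = None  # None is legal only for human principals

def responsible_principal(agent: "Party", max_depth: int = 64) -> "Party":
    """Walk the ownership chain upward; raise if the chain is broken."""
    current = agent
    for _ in range(max_depth):
        if current.is_human:
            return current
        if current.owner is None:
            raise ValueError(f"broken chain: {current.name} has no owner")
        current = current.owner
    raise ValueError("ownership chain too deep or cyclic")

alice = Party("alice", is_human=True)
planner = Party("planner-agent", is_human=False, owner=alice)
worker = Party("worker-agent", is_human=False, owner=planner)
```

Under this invariant, "alignment through accountability" reduces to the claim that `responsible_principal` succeeds for every agent and that each principal actually acts on what the chain surfaces.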

What carries the argument

The Separation of Power (SoP) model, which divides governance into agents producing rules as smart contracts, deterministic execution of those contracts, and human adjudication via ownership chains.

If this is right

  • Agents can self-legislate operational rules that then bind their own execution as on-chain smart contracts.
  • Collective behavior in shared-resource economies aligns with human principals without requiring top-down imposition of rules.
  • The blockchain supplies a public, tamper-resistant record of all legislative output and accountability links.
  • The three-tier contract hierarchy (foundational, meta, operational) enables modular governance that scales to at least 1,000 agents.
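A toy registry can make the tier constraints concrete. Only two constraints come from the paper (foundational contracts are human-authored, per the Figure 3 caption; operational rules are legislated by agents, per the abstract); the meta tier's role and all identifiers below are assumptions.

```python
# Illustrative model of the three-tier contract hierarchy. The authorship
# constraints on tiers 1 and 3 follow the paper's description; everything
# else is invented for illustration.

from enum import Enum

class Tier(Enum):
    FOUNDATIONAL = 1  # human-authored constitutional layer (Tier 1)
    META = 2          # governance-of-governance contracts (role assumed here)
    OPERATIONAL = 3   # rules the agents themselves legislate (Tier 3)

class ContractRegistry:
    """Toy registry enforcing who may author which tier."""
    def __init__(self) -> None:
        self.contracts: dict[str, tuple[Tier, bool]] = {}

    def register(self, name: str, tier: Tier, author_is_human: bool) -> None:
        if tier is Tier.FOUNDATIONAL and not author_is_human:
            raise PermissionError("Tier 1 (foundational) contracts are human-authored")
        if tier is Tier.OPERATIONAL and author_is_human:
            raise PermissionError("Tier 3 (operational) rules are legislated by agents")
        self.contracts[name] = (tier, author_is_human)

reg = ContractRegistry()
reg.register("constitution", Tier.FOUNDATIONAL, author_is_human=True)
reg.register("commons-quota-rule", Tier.OPERATIONAL, author_is_human=False)
```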

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same ownership-chain mechanism could be applied to other open environments where autonomous entities must coordinate without a central enforcer.
  • If the accountability structure holds, it reduces reliance on external safety layers by distributing oversight back to human principals.
  • Extensions could examine whether the model remains stable when agents begin delegating across multiple human owners simultaneously.

Load-bearing premise

Humans can effectively monitor, adjudicate, and enforce accountability through ownership chains at scales of hundreds of interacting agents without the chains collapsing into monopoly.
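The premise can be stress-tested with back-of-envelope arithmetic: with n agents each raising r flagged events per round, spread over p human principals, every principal reviews n·r/p events per round. The rates below are invented for illustration; the paper reports no such numbers.

```python
# Illustrative adjudication-load arithmetic for the premise above.
# All numeric inputs are assumptions, not figures from the paper.

def events_per_principal(n_agents: int, flags_per_agent: float,
                         n_principals: int) -> float:
    """Flagged events each principal must review per round."""
    return n_agents * flags_per_agent / n_principals

# At the paper's top scale of 1,000 agents, with an assumed 100 principals
# and an assumed 0.05 flagged events per agent per round:
load = events_per_principal(1000, 0.05, 100)
```

Whether such a load is tractable depends entirely on the assumed flag rate; the premise fails if flags grow superlinearly with agent interactions.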

What would settle it

At 1,000-agent scale in the commons production economy, emergent behaviors appear that systematically diverge from the interests of the human owners despite the ownership chains remaining formally intact.

Figures

Figures reproduced from arXiv: 2604.07007 by Anbang Ruan, Xing Zhang.

Figure 1: Separation of Power. Three structurally isolated branches—Legislation (agent-driven, blue), …
Figure 2: AgentCity system architecture. Central three-tier contract hierarchy—foundational contracts …
Figure 3: Three-tier contract hierarchy. Foundational contracts (Tier 1, gray) are human-authored …
Figure 4: Six-stage legislative pipeline. Proposal …
Figure 5: Seven-stage execution pipeline. Orchestrate …
Figure 6: Six-stage accountability pipeline. Registration …
Figure 7: CSR trajectory over 200 rounds (Experiment 1, …)
Figure 9: Governance overhead G(n) scaling (Experiment 2). Planned: log-log plot of B(n) vs. n with power-law fit line …
Figure 11: Benefit-to-overhead ratio B(n)/G(n) with break-even crossover (Experiment 2).
Original abstract

Autonomous AI agents are beginning to operate across organizational boundaries on the open internet -- discovering, transacting with, and delegating to agents owned by other parties without centralized oversight. When agents from different human principals collaborate at scale, the collective becomes opaque: no single human can observe, audit, or govern the emergent behavior. We term this the Logic Monopoly -- the agent society's unchecked monopoly over the entire logic chain from planning through execution to evaluation. We propose the Separation of Power (SoP) model, a constitutional governance architecture deployed on public blockchain that breaks this monopoly through three structural separations: agents legislate operational rules as smart contracts, deterministic software executes within those contracts, and humans adjudicate through a complete ownership chain binding every agent to a responsible principal. In this architecture, smart contracts are the law itself -- the actual legislative output that agents produce and that governs their behavior. We instantiate SoP in AgentCity on an EVM-compatible layer-2 blockchain (L2) with a three-tier contract hierarchy (foundational, meta, and operational). The core thesis is alignment-through-accountability: if each agent is aligned with its human owner through the accountability chain, then the collective converges on behavior aligned with human intent -- without top-down rules. A pre-registered experiment evaluates this thesis in a commons production economy -- where agents share a finite resource pool and collaboratively produce value -- at 50-1,000 agent scale.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper defines the 'Logic Monopoly' as the opacity of emergent behavior when autonomous agents from different principals collaborate at scale. It proposes the Separation of Power (SoP) model on an EVM-compatible L2 blockchain, with three separations: agents legislate operational rules as smart contracts, deterministic software executes within those contracts, and humans adjudicate via complete ownership chains binding every agent to a responsible principal. The core thesis is alignment-through-accountability: if each agent is aligned with its human owner through the chain, the collective converges on human intent without top-down rules. This is instantiated in AgentCity with a three-tier contract hierarchy and evaluated via a pre-registered experiment in a commons production economy at 50-1,000 agent scale.

Significance. If the thesis holds with supporting evidence, the SoP architecture would offer a novel, blockchain-native constitutional mechanism for decentralized multi-agent governance, potentially influencing alignment research in open agent economies. The explicit separation of legislation, execution, and adjudication, together with the use of smart contracts as the law itself, provides a concrete alternative to centralized oversight or purely incentive-based approaches.

major comments (3)
  1. [Abstract] Abstract and experiment description: the pre-registered experiment is stated to evaluate the alignment-through-accountability thesis at 50-1,000 agent scale, yet no methods, metrics, outcomes, or even basic results (e.g., adjudication frequency, collective behavior metrics) are reported, leaving the central claim without empirical support.
  2. [SoP Model] SoP model and ownership-chain axiom: the claim that complete ownership chains enable effective human adjudication (preventing Logic Monopoly) is presented as an axiom without any analysis of cognitive load, information requirements for intervention, coordination costs, or failure modes such as principal overload or chain fragmentation at the stated scale; this assumption is load-bearing for the thesis.
  3. [Introduction] Alignment definition: the outcome 'collective converges on behavior aligned with human intent' is defined directly in terms of the proposed accountability chain and separations, with no independent external benchmarks or falsifiable predictions supplied, creating circularity that prevents external validation of the thesis.
minor comments (2)
  1. [Introduction] The term 'Logic Monopoly' is introduced without comparison to related concepts in multi-agent systems or mechanism design literature.
  2. [AgentCity Implementation] Notation for the three-tier contract hierarchy (foundational, meta, operational) is clear but lacks a diagram or pseudocode example showing how agent-proposed contracts interact with human adjudication.
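A hedged sketch of the interaction the second comment asks for: an agent-proposed contract passes an automated structural check, then a human adjudicator signs off when the mission is high-risk. The two stages mirror fragments of the paper's appendix (automated conformance checks; mandatory adjudicator review for HIGH-risk missions); all names and the dict schema are hypothetical, not the AgentCity API.

```python
# Two-stage adjudication sketch: automated structural conformance, then
# human sign-off for HIGH-risk proposals. Schema and names are assumptions.

from typing import Callable

def adjudicate(proposal: dict, required_selectors: set,
               human_approves: Callable[[dict], bool]) -> bool:
    # Stage 1: automated check that the proposed contract exposes at least
    # the function selectors its type requires.
    if not required_selectors <= set(proposal["selectors"]):
        return False
    # Stage 2: human adjudication, mandatory only for high-risk proposals.
    if proposal["risk"] == "HIGH":
        return bool(human_approves(proposal))
    return True

proposal = {"selectors": {"stake", "withdraw"}, "risk": "HIGH"}
approved = adjudicate(proposal, {"stake"}, human_approves=lambda p: True)
```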

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the detailed and constructive report. The comments identify important areas for strengthening the empirical support, model analysis, and definitional clarity in the manuscript. We address each major comment point-by-point below and commit to revisions that preserve the core thesis while improving rigor.

Point-by-point responses
  1. Referee: [Abstract] Abstract and experiment description: the pre-registered experiment is stated to evaluate the alignment-through-accountability thesis at 50-1,000 agent scale, yet no methods, metrics, outcomes, or even basic results (e.g., adjudication frequency, collective behavior metrics) are reported, leaving the central claim without empirical support.

    Authors: We agree that the abstract and main text currently provide only a high-level summary of the experiment without sufficient detail on methods, metrics, or outcomes. The manuscript describes the pre-registered commons production economy setup at the stated scale but does not report quantitative results such as adjudication frequency or collective behavior metrics. We will revise by expanding the abstract to include key results and adding a dedicated results subsection with the pre-registered metrics and findings. revision: yes

  2. Referee: [SoP Model] SoP model and ownership-chain axiom: the claim that complete ownership chains enable effective human adjudication (preventing Logic Monopoly) is presented as an axiom without any analysis of cognitive load, information requirements for intervention, coordination costs, or failure modes such as principal overload or chain fragmentation at the stated scale; this assumption is load-bearing for the thesis.

    Authors: The ownership chain is indeed load-bearing, and the current presentation treats its effectiveness as following directly from blockchain transparency without explicit analysis of human-side costs. We will add a new subsection to the SoP model section that analyzes cognitive load (via automated ownership dashboards), information requirements, coordination costs, and failure modes including principal overload (mitigated by delegation) and chain fragmentation (prevented by mandatory on-chain registration). This addition will be analytical rather than empirical. revision: yes

  3. Referee: [Introduction] Alignment definition: the outcome 'collective converges on behavior aligned with human intent' is defined directly in terms of the proposed accountability chain and separations, with no independent external benchmarks or falsifiable predictions supplied, creating circularity that prevents external validation of the thesis.

    Authors: The definition is intentionally operationalized through the three separations, but we acknowledge the risk of circularity. The manuscript supplies falsifiable predictions via the experiment (e.g., higher collective output and lower intervention rates under complete vs. incomplete chains). We will revise the introduction to explicitly list these independent metrics—resource efficiency, adjudication frequency, and output per agent—as external benchmarks separate from the accountability mechanism itself. revision: yes
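The three metrics the response names can be computed from a per-round experiment log. The log schema below (output, resource_drawn, adjudications) is invented here purely to show that the metrics are measurable independently of the accountability mechanism.

```python
# Sketch of the rebuttal's three proposed external metrics over a round log.
# Field names and example values are illustrative assumptions.

def summarize(rounds: list[dict], n_agents: int) -> dict:
    total_output = sum(r["output"] for r in rounds)
    total_drawn = sum(r["resource_drawn"] for r in rounds)
    interventions = sum(r["adjudications"] for r in rounds)
    return {
        "resource_efficiency": total_output / total_drawn,
        "adjudication_frequency": interventions / len(rounds),
        "output_per_agent": total_output / n_agents,
    }

log = [
    {"output": 120.0, "resource_drawn": 100.0, "adjudications": 2},
    {"output": 130.0, "resource_drawn": 100.0, "adjudications": 1},
]
metrics = summarize(log, n_agents=50)
```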

Circularity Check

0 steps flagged

No significant circularity in the derivation chain

Full rationale

The paper proposes a conceptual governance architecture (SoP) and states its core thesis as an empirical hypothesis to be evaluated via pre-registered experiment: alignment emerges from accountability chains without top-down rules. No equations, fitted parameters, predictions, or self-citations appear in the abstract or provided text that reduce the claimed outcome to the inputs by construction. The Logic Monopoly and alignment-through-accountability are introduced as new framing rather than renamed known results or smuggled ansatzes. The derivation remains self-contained as a model proposal with external falsifiability through the experiment at stated scale.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 2 invented entities

The paper introduces new terminology and structural assumptions without external validation or independent evidence in the abstract.

axioms (2)
  • domain assumption Agents from different principals can and will produce operational rules as smart contracts that govern collective behavior.
    Invoked in the description of the legislative separation.
  • ad hoc to paper A complete ownership chain can bind every agent to a responsible human principal who can effectively adjudicate.
    Central premise of the accountability mechanism and alignment thesis.
invented entities (2)
  • Logic Monopoly no independent evidence
    purpose: Frames the problem of unchecked agent control over the full logic chain.
    New term coined in the abstract to describe the governance challenge.
  • Separation of Power (SoP) model no independent evidence
    purpose: The proposed constitutional governance architecture with three structural separations.
    Core invention presented as the solution.

pith-pipeline@v0.9.0 · 5557 in / 1493 out tokens · 48893 ms · 2026-05-10T17:18:28.099432+00:00 · methodology


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. The Cognitive Penalty: Ablating System 1 and System 2 Reasoning in Edge-Native SLMs for Decentralized Consensus

    cs.AI 2026-04 unverdicted novelty 5.0

    System 1 intuition in edge SLMs delivers 100% adversarial robustness and low latency for DAO consensus while System 2 reasoning causes 26.7% cognitive collapse and 17x slowdown.

Reference graph

Works this paper leans on

152 extracted references · 16 canonical work pages · cited by 1 Pith paper · 3 internal anchors

  1. [1]

Abatayo, A. L., & Lynham, J. (2016). Endogenous vs. Exogenous Regulations in the Commons. Journal of Environmental Economics and Management, 76, 51–66.

  2. [2]

Ackerman, B. (2000). The New Separation of Powers. Harvard Law Review, 113(3), 633–729.

  3. [3]

    Altera AI. (2024). Project Sid: Many-Agent Simulations Toward AI Civilization. arXiv:2411.00114

  4. [4]

Anthropic. (2024). Model Context Protocol (MCP). https://modelcontextprotocol.io

  5. [5]

API3 DAO. (2024). API3 DAO Governance: Proposal Verification and Execution. https://api3.org/dao

  6. [6]

    Aragon Association. (2024). Aragon OSx: A Modular, Upgradeable Framework for DAOs. https://aragon.org/osx

  7. [7]

Base. (2024). Base: Ethereum L2. https://base.org

  8. [8]

Boella, G., & van der Torre, L. (2004). Regulative and Constitutive Norms in Normative Multi-Agent Systems. Proc. KR, 255–265.

  9. [9]

Buterin, V., Hitzig, Z., & Weyl, E. G. (2019). A Flexible Design for Funding Public Goods. Management Science, 65(11), 5171–5187.

  10. [10]

Chen, X. et al. (2026). Towards Transparent and Incentive-Compatible Collaboration in Decentralized LLM Multi-Agent Systems: A Blockchain-Driven Approach. IEEE Transactions on Network Science and Engineering. arXiv:2509.16736

  11. [11]

Chitra, T., & Kulkarni, K. (2022). Improving Proof of Stake Economic Security via MEV Redistribution. arXiv.

  12. [12]

Choi, H. K., Zhu, X., & Li, S. (2025). Debate or Vote: Which Yields Better Decisions in Multi-Agent Large Language Models? NeurIPS 2025 Spotlight. arXiv:2508.17536

  13. [13]

Christoffersen, P. J. K., Haupt, A., & Hadfield-Menell, D. (2023). Get It in Writing: Formal Contracts Mitigate Social Dilemmas in Multi-Agent RL. Proc. AAMAS.

  14. [14]

CMAG Authors. (2025). Constitutional Multi-Agent Governance. arXiv:2603.13189

  15. [15]

    CrewAI. (2024). CrewAI: Framework for Orchestrating Role-Playing Autonomous AI Agents. https://github.com/joaomdmoura/crewAI

  16. [16]

Dai, G., Zhang, W. et al. (2025). DeCivAI: Democratic Governance in LLM Agent Societies. First Workshop on LLM Persona Modeling (PersonaLLM), NeurIPS 2025. OpenReview:komjEWesEV

  17. [17]

Dante, N. (2025). Covenants with and without a Sword: An LLM Replication of Ostrom’s Common-Pool Resource Experiments. SSRN:5349484

  18. [18]

Degen, C. et al. (2024). ETHOS: Ethereum-Based Transparent and Honest Oversight System for AI Agents. arXiv.

  19. [19]

Deshpande, A., & Jin, M. (2024). GEDI: An Electoral Approach to Diversify LLM-based Multi-Agent Collective Decision-Making. Proc. EMNLP, 2795–2819.

  20. [20]

Esteva, M., Rodriguez-Aguilar, J. A., Sierra, C., Garcia, P., & Arcos, J. L. (2001). On the Formal Specification of Electronic Institutions. Agent-Mediated Electronic Commerce (AAMAS Workshop), 126–147. Springer.

  21. [21]

Feddersen, T. & Pesendorfer, W. (2005). Deliberation and Voting Rules. In D. Austen-Smith & J. Duggan (Eds.), Social Choice and Strategic Decisions: Essays in Honor of Jeffrey S. Banks (pp. 269–316). Springer.

  22. [22]

Fraga-Gonçalves, M. et al. (2025). Emergent Deceptive Behavior in LLM-Based Agent Economies: A La Serenissima Simulation. arXiv.

  23. [23]

    Gómez, A. et al. (2024). LOKA: Decentralized AI Compute and Agent Coordination Protocol. Technical Report

  24. [24]

Google. (2025). Agent-to-Agent Protocol (A2A). https://github.com/google/A2A

  25. [25]

Gu, Y., Ranaldi, L., & Zanzotto, F. M. (2024). Secret Collusion Among Generative AI Agents. arXiv:2402.07510

  26. [26]

Gupta, P., & Saraf, A. (2025). Governing the Commons: Operationalizing Ostrom’s Principles in Multi-Agent Systems. arXiv:2510.14401

  27. [27]

Hobbes, T. (1651). Leviathan. Andrew Crooke.

  28. [28]

Hong, S. et al. (2023). MetaGPT: Meta Programming for a Multi-Agent Collaborative Framework. arXiv:2308.00352

  29. [29]

Humayun, I. et al. (2023). Fetch.ai: An Agent-Based Economy. Technical Report.

  30. [30]

Jarrett, D. et al. (2024). Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory. arXiv:2406.14373

  31. [31]

Jensen, M. C. & Meckling, W. H. (1976). Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure. Journal of Financial Economics, 3(4), 305–360.

  32. [32]

LangGraph. (2024). LangGraph: Build Stateful Multi-Actor Applications with LLMs. https://github.com/langchain-ai/langgraph

  33. [33]

Li, G. et al. (2023). CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society. arXiv:2303.17760

  34. [34]

Maskin, E. & Foley, E. (2025). Condorcet Voting. Working Paper, Harvard University.

  35. [35]

NEAR AI. (2024). NEAR AI: AI on the Open Web. https://near.ai

  36. [36]

North, D. C. (1990). Institutions, Institutional Change and Economic Performance. Cambridge University Press.

  37. [37]

Ostrom, E. (1990). Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press.

  38. [38]

Ostrom, E., Walker, J., & Gardner, R. (1992). Covenants With and Without a Sword: Self-Governance is Possible. American Political Science Review, 86(2), 404–417.

  39. [39]

Ostrom, E., Gardner, R., & Walker, J. (1994). Rules, Games, and Common-Pool Resources. University of Michigan Press.

  40. [40]

Park, J. S. et al. (2023). Generative Agents: Interactive Simulacra of Human Behavior. Proc. UIST.

  41. [41]

Piatti, G. et al. (2024). GovSim: Governance of the Commons Simulation with Language Agents. Proc. ACL.

  42. [42]

Qian, G. et al. (2024). MacNet: Multi-Agent Collaborative Networks for Scaling LLM Intelligence. arXiv.

  43. [43]

Rao, J. R. et al. (2024). Bittensor: A Peer-to-Peer Intelligence Market. Technical Report.

  44. [44]

Rawls, J. (1971). A Theory of Justice. Harvard University Press.

  45. [45]

Roughgarden, T. (2021). Transaction Fee Mechanism Design. ACM SIGecom Exchanges, 19(1), 52–55.

  46. [46]

Sachdeva, P. S. & van Nuenen, T. (2025). Deliberative Dynamics and Value Alignment in LLM Debates. arXiv:2510.10002

  47. [47]

Tessler, M. H., Bakker, M. A., Jarrett, D., Sheahan, H., Chadwick, M. J., Kocisky, T., ... & Summerfield, C. (2024). AI Can Help Humans Find Common Ground in Democratic Deliberation. Science, 386(6719), eadq2852.

  48. [48]

Velez, M. A., Murphy, J. J., & Stranlund, J. K. (2012). Centralized and Decentralized Management of Local Common Pool Resources in the Developing World. Economic Inquiry, 48(2), 254–265.

  49. [49]

Virtuals Protocol. (2024). Virtuals Protocol: Tokenized AI Agent Economy. https://virtuals.io

  50. [50]

Wahle, J. P., Ruas, T., Gipp, B., & Aizawa, A. (2025). Voting or Consensus: LLMs as Collective Decision-Makers. Findings of ACL 2025.

  51. [51]

Wu, H., Li, Z., & Li, L. (2025). Can LLM Agents Really Debate? A Controlled Study of Multi-Agent Debate. arXiv:2511.07784

  52. [52]

Wu, Q. et al. (2023). AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation. arXiv:2308.08155

  53. [53]

Yang, H. et al. (2025). Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-Based Agents. Proc. ICLR.

  54. [54]

    Zhao, H., Li, J., Wu, Z., Ju, T., Zhang, Z., He, B., & Liu, G. (2025a). Disagreements in Reasoning: How a Model’s Thinking Process Dictates Persuasion in Multi-Agent Systems. arXiv:2509.21054

  55. [55]

Ren, X., Feng, Y., Zhao, B., Wang, L., & Wang, J. (2025). RepuNet: Reputation-Enhanced Multi-Agent Communication Network for Trustworthy LLM Collaboration. arXiv:2505.05029

  56. [56]

Tomasev, N. et al. (2025). Simulating the Economic Impact of Rationality through Heterogeneous Agent-Based Modelling: Virtual Agent Economies. arXiv:2509.10147

  57. [57]

Zhou, X. & Chan, J. (2026). ORCH: Orchestrating Reasoning Chains for Multi-Agent Systems with EMA-Guided Deterministic Routing. PMC:12907423

  58. [58]

Tian, K. (2025). Blockchain-enhanced incentive-compatible mechanisms for multi-agent reinforcement learning systems. Scientific Reports, 15(1):42841

  59. [59]

Kannan, S. (2023). EigenLayer: The Restaking Collective. EigenLayer Whitepaper. https://docs.eigenlayer.xyz/assets/files/EigenLayer_WhitePaper-88c47923ca0319870c611decd6e562ad.pdf

  60. [60]

Kivilo, S., Norta, A., Hattingh, M., Avanzo, S. & Pennella, L. (2026). Designing a Token Economy: Incentives, Governance, and Tokenomics. arXiv:2602.09608

  61. [61]

Reijers, W., O’Brolcháin, F. & Haynes, P. (2016). Governance in Blockchain Technologies & Social Contract Theories. Ledger, 1, 134–151.

  62. [62]

Andrighetto, G., Governatori, G., Noriega, P. & van der Torre, L. (Eds.) (2013). Normative Multi-Agent Systems. Dagstuhl Follow-Ups, Vol. 4. Schloss Dagstuhl.

  63. [63]

Chopra, A., van der Torre, L., Verhagen, H. & Villata, S. (Eds.) (2018). Handbook of Normative Multi-Agent Systems. College Publications.

  64. [64]

Esteva, M., Rodríguez-Aguilar, J.A., Arcos, J.L., Sierra, C. & Noriega, P. (2004). Electronic Institutions Development Environment. In Proc. AAMAS 2004, 1663–1664.

  65. [65]

OpenClaw. (2025). OpenClaw: Open-Source Autonomous AI Agent Runtime. https://github.com/openclaw/openclaw

  66. [66]

OpenAgen. (2026). ZeroClaw: Zero-Overhead Autonomous AI Agent Runtime. https://github.com/openagen/zeroclaw

  67. [67]

Kendall, M.G. (1938). A New Measure of Rank Correlation. Biometrika, 30(1/2), 81–93.


Truncated reference list; 152 references were extracted in total.