Anumati: Proof of Adherence as a Formal Consent Model for Autonomous Agent Protocols
Pith reviewed 2026-05-10 11:13 UTC · model grok-4.3
The pith
Autonomous agents can prove they evaluated and followed specific policy clauses for each action taken.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper establishes that the accountability gap in agent-to-agent calls can be closed by requiring each permitted action to produce an AdherenceEvent that cites the exact clause from the current PolicyDocument version, with the entire history preserved through linked ConsentRecords in an append-only structure.
What carries the argument
The three primitives PolicyDocument, ConsentRecord, and AdherenceEvent that together form a versioned, append-only consent model for tracking per-action policy evaluations.
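A minimal sketch of how these three primitives might be modelled in Python (the field names are illustrative assumptions, not taken from the paper):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyDocument:
    # A versioned policy whose clauses are addressable by identifier.
    version: str
    clauses: dict  # clause_id -> clause text

@dataclass(frozen=True)
class ConsentRecord:
    # Proof of acceptance: a timestamped acknowledgement of one policy
    # version, linked to its predecessor to form an append-only chain.
    policy_version: str
    accepted_at: str  # ISO 8601 timestamp
    prev_hash: str    # hash of the previous ConsentRecord

@dataclass(frozen=True)
class AdherenceEvent:
    # Proof of adherence: a per-action reasoning record citing the
    # exact clause that was evaluated before the call was made.
    policy_version: str
    clause_id: str
    reasoning: str
```

Freezing the dataclasses mirrors the append-only intent: records are never mutated in place, only appended.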
If this is right
- Callee agents gain the ability to audit whether each incoming call was made under a valid and correctly interpreted version of their policy.
- Policy changes can be introduced without invalidating prior consents because records always reference a specific document version and clause.
- Existing authentication protocols such as OAuth or mutual TLS continue to handle identity while the new primitives add the missing condition-checking layer.
- Formal TLA+ models and reference validators can be used to check that the consent trail remains consistent across multiple agent interactions.
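A chain-integrity check of the kind the last bullet describes could be sketched as follows, assuming each ConsentRecord carries a `prev_hash` field citing its predecessor (the paper's actual validator may differ):

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    # Canonical hash: serialize with sorted keys so equal records hash equally.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

GENESIS = "0" * 64  # sentinel hash for the first record in a chain

def chain_is_intact(records: list[dict]) -> bool:
    # The append-only property: every record must cite the hash of its
    # predecessor, so rewriting any earlier record breaks all later links.
    prev = GENESIS
    for rec in records:
        if rec["prev_hash"] != prev:
            return False
        prev = record_hash(rec)
    return True
```

Because each link commits to the full content of the previous record, retroactively editing a consent is detectable without trusting the agent that stored it.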
Where Pith is reading between the lines
- Human principals could later query the adherence trail to determine which policy clause an agent relied on when delegating a task.
- The model could support automated dispute resolution between agents by providing machine-readable evidence of clause evaluation.
- Overhead measurements from the reference Python implementation would indicate whether the approach scales to high-frequency agent calls.
Load-bearing premise
Calling agents can generate and store a per-action adherence record that correctly cites the relevant policy clause without prohibitive cost or delay, and callee policies remain stable enough to be referenced precisely during live interactions.
What would settle it
A working counter-example in which an agent correctly accepts a policy yet produces an AdherenceEvent that either cites a non-existent clause or omits the actual reasoning used for the action, while still passing the chain-integrity validator.
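The counter-example above would be blocked by an adherence-trail check that resolves each cited clause against the policy version it names. A sketch, with hypothetical field names rather than the paper's validator:

```python
def adherence_trail_valid(policies: dict, events: list[dict]) -> bool:
    # policies: version -> {clause_id: clause text}
    # An event fails if it cites a clause absent from the policy version
    # it names, or records no reasoning at all: exactly the two failure
    # modes the counter-example describes.
    for ev in events:
        clauses = policies.get(ev["policy_version"], {})
        if ev["clause_id"] not in clauses:
            return False
        if not ev.get("reasoning"):
            return False
    return True
```

Note that this check is independent of chain integrity: a trail can be hash-consistent yet still cite a dangling clause, which is why both validators are needed.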
Original abstract
As autonomous AI agents increasingly call other agents to complete tasks on behalf of a human principal, a structural accountability gap has emerged: the calling agent accepts the terms of service of the callee without any protocol-level mechanism to prove that it understood those terms or that it subsequently honoured them. Authentication protocols such as OAuth and mutual TLS establish who may call which capability. They do not address under what conditions a permitted call may be made, and those conditions change as the callee's policies evolve. In this paper we formalise the distinction between proof of acceptance (a timestamped acknowledgement) and proof of adherence (a per-action reasoning record citing the specific clause evaluated). We propose three primitives (PolicyDocument, ConsentRecord, and AdherenceEvent) that together constitute a versioned, append-only consent model for agent-to-agent communication. The model is instantiated as a non-breaking extension to two widely used agent protocols: the Agent2Agent (A2A) protocol and the Model Context Protocol (MCP). A TLA+ specification of the consent lifecycle, together with a reference Python implementation of the chain integrity and adherence trail validators, is available in the accompanying repository.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes Anumati, a formal consent model for autonomous AI agent protocols. It distinguishes proof of acceptance (a timestamped acknowledgement) from proof of adherence (a per-action reasoning record citing the specific policy clause evaluated). The model is defined via three primitives, PolicyDocument, ConsentRecord, and AdherenceEvent, that together form a versioned, append-only consent structure. This is instantiated as a non-breaking extension to the Agent2Agent (A2A) protocol and the Model Context Protocol (MCP), supported by a TLA+ specification of the consent lifecycle and a reference Python implementation providing validators for chain integrity and adherence trails.
Significance. If the formalization is sound, the work addresses a genuine accountability gap in agent-to-agent interactions where authentication protocols alone do not establish policy understanding or subsequent adherence. The explicit separation of acceptance from per-action adherence records is a clear conceptual contribution. The provision of a TLA+ specification together with reproducible Python validators for chain integrity is a notable strength, as it enables machine-checked verification and supports independent validation of the append-only property.
Minor comments (2)
- A concrete worked example showing the generation of an AdherenceEvent that cites a specific clause from a PolicyDocument would improve readability and help readers assess how the per-action reasoning record is constructed in practice.
- The repository containing the TLA+ specification and Python validators should be referenced with an explicit, permanent URL or commit hash in the main text (rather than only in the abstract) to ensure long-term reproducibility.
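A worked example of the kind the first comment asks for might look like the following sketch; the clause identifier, policy contents, and field names are invented for illustration, not drawn from the paper:

```python
import time

# A hypothetical callee policy with one addressable clause.
policy = {
    "version": "2026-04-01",
    "clauses": {
        "c3.2": "Calls made on behalf of a human principal must name that principal."
    },
}

def make_adherence_event(policy: dict, clause_id: str, reasoning: str) -> dict:
    # Refuse to emit an event citing a clause the policy does not contain,
    # so every AdherenceEvent is constructed against a real clause.
    if clause_id not in policy["clauses"]:
        raise ValueError(f"unknown clause {clause_id!r} in policy {policy['version']}")
    return {
        "policy_version": policy["version"],
        "clause_id": clause_id,
        "reasoning": reasoning,
        "timestamp": time.time(),
    }

event = make_adherence_event(
    policy, "c3.2", "Outgoing call names principal alice@example.org; clause satisfied."
)
```

Constructing the event through a checked factory like this makes the dangling-clause failure mode impossible at generation time rather than only at audit time.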
Simulated Author's Rebuttal
We thank the referee for their positive review and recommendation of minor revision. The referee summary accurately reflects the paper's focus on distinguishing proof of acceptance from proof of adherence, the three primitives, and the TLA+ specification with Python validators as a non-breaking extension to A2A and MCP.
Circularity Check
No significant circularity: purely definitional formalization
Full rationale
The paper introduces a consent model by defining three new primitives (PolicyDocument, ConsentRecord, AdherenceEvent) that distinguish timestamped acceptance from per-action adherence records and form a versioned append-only structure. This is presented as an original formalization instantiated in A2A/MCP protocols, backed by a TLA+ specification and Python validators for chain integrity. No equations, fitted parameters, self-citations, or reductions appear in the provided text that would make any claim equivalent to its inputs by construction; the central contribution is definitional and self-contained.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: Calling agents can generate per-action reasoning records that cite specific policy clauses.
Invented entities (3)
- PolicyDocument (no independent evidence)
- ConsentRecord (no independent evidence)
- AdherenceEvent (no independent evidence)
Reference graph
Works this paper leans on
- [1] AI Agent Index. The 2025 AI Agent Index: Documenting Technical and Safety Features of Deployed Agentic AI Systems. arXiv:2602.17753, February 2026. (The report covers the 2025 landscape; the preprint was published in February 2026.)
- [2] EU AI Act. Regulation (EU) 2024/1689. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai, 2024.
- [3] EU AI Office. AI Act Service Desk: Frequently Asked Questions. https://ai-act-service-desk.ec.europa.eu/en/faq, 2026. (As of March 2026, no FAQ or guidance document addresses consent mechanisms for autonomous agent-to-agent transactions.)
- [4] A2A Protocol Specification. https://a2a-protocol.org/latest/specification/
- [5] Model Context Protocol Specification. https://modelcontextprotocol.io/specification/
- [6] MCP November 2025 Changelog. https://modelcontextprotocol.io/specification/2025-11-25/changelog, 2025.
- [7] Uniform Electronic Transactions Act (UETA), §14. https://www.uniformlaws.org/
- [8] Proskauer. Contract Law in the Age of Agentic AI: Who's Really Clicking Accept? 2025. https://www.proskauer.com/blog/contract-law-in-the-age-of-agentic-ai-whos-really-clicking-accept
- [9] W3C ODRL Information Model 2.2. https://www.w3.org/TR/odrl-model/
- [10] RFC 7515: JSON Web Signature (JWS). https://datatracker.ietf.org/doc/html/rfc7515
- [11] AP2: Agent Payments Protocol. https://github.com/google-agentic-commerce/ap2
- [12] Ngozo, J.F. Open Agent Governance Specification (OAGS). Sekuire, 2026. https://sekuire.ai/blog/introducing-open-agent-governance-specification
- [13] FINOS AI Governance Framework v2.0. https://air-governance-framework.finos.org/
- [14] Aylward, J. et al. AIGA: AI Governance and Accountability Protocol. IETF Internet-Draft, draft-aylward-aiga-1. https://datatracker.ietf.org/doc/draft-aylward-aiga-1/
- [15] McDonough, R. OpenMandate: Governing AI Agents by Authority, Not Instruction. Law://WhatsNext, 2026. https://lawwhatsnext.substack.com/p/openmandate-governing-ai-agents-by
- [16] Mavračić, J. Policy Cards: Machine-Readable Runtime Governance for Autonomous AI Agents. arXiv:2510.24383, 2025.
- [17] Palumbo, N. et al. PCAS: Policy Compiler for Secure Agentic Systems. arXiv:2602.16708, 2026.
- [18]
- [19]
- [20] IEEE P7012: Standard for Machine-Readable Personal Privacy Terms. https://standards.ieee.org/ieee/7012/
- [21] W3C Data Privacy Vocabulary (DPV) v2.2. https://w3c.github.io/dpv/dpv/
- [22] Kantara Initiative: Consent Receipt Specification v1.1. https://kantarainitiative.org/
- [23] Li, D., Yu, G., Wang, X. and Liang, B. AuditableLLM: A Hash-Chain-Backed, Compliance-Aware Auditable Framework for Large Language Models. Electronics, 15(1), 56. MDPI, 2025.
- [24] Rida, C. When an AI Agent Says 'I Agree,' Who's Consenting? TechPolicy.Press, December 2025. https://www.techpolicy.press/when-an-ai-agent-says-i-agree-whos-consenting/
- [25] RFC 9396: OAuth 2.0 Rich Authorization Requests. https://datatracker.ietf.org/doc/html/rfc9396
- [26] Kantara Initiative. User-Managed Access (UMA) 2.0 Grant for OAuth 2.0 Authorization. January 2017. https://kantarainitiative.org/uma-specifications/