pith. machine review for the scientific record.

arxiv: 2604.20867 · v1 · submitted 2026-03-26 · 💻 cs.CY · cs.AI · cs.CR

Recognition: no theorem link

Preserving Decision Sovereignty in Military AI: A Trade-Secret-Safe Architectural Framework for Model Replaceability, Human Authority, and State Control

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 00:57 UTC · model grok-4.3

classification 💻 cs.CY · cs.AI · cs.CR
keywords decision sovereignty · military AI · model replaceability · trade secret protection · architectural framework · human oversight · state control · vendor dependency

The pith

Military decision sovereignty can be preserved by making commercial AI models replaceable components in a state-controlled framework.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper contends that the primary challenge in integrating commercial AI into military operations lies in maintaining the state's control over critical decisions rather than merely gaining access to advanced models. It develops an architectural approach known as the Energetic Paradigm, which separates analytical functions provided by vendors from core command elements like policy enforcement, version management, auditing, and approval processes that stay under state ownership. This separation allows models to be swapped out as needed while preserving human authority and reducing reliance on any single supplier. The framework addresses risks illustrated by recent vendor disputes and aims to support secure procurement and interoperability among allies without exposing proprietary details.

Core claim

The central discovery is that decision sovereignty in military AI can be maintained through a trade-secret-safe, layered design in which supplier-provided models serve only as interchangeable analytical modules, while the functions of routing decisions, applying constraints, logging activities, handling escalations, and authorizing actions remain exclusively under state control.

What carries the argument

The Energetic Paradigm, defined as a layered, model-agnostic command-support design that treats commercial models as replaceable analytical components while keeping routing, constraints, logging, escalation, and action authorization as state-owned functions.
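The separation the paper describes can be sketched in code. The following Python sketch is this review's illustration, not the paper's specification: the names (`AnalyticalModel`, `SovereignOrchestrator`, `route`, `swap_model`) and the sample constraint are assumptions. It shows vendor analysis confined behind a minimal public interface while constraint checks, logging, escalation, and final authorization stay in state-owned code.

```python
from dataclasses import dataclass, field
from typing import Protocol


class AnalyticalModel(Protocol):
    """Public interface a supplier model must satisfy; internals stay proprietary."""
    def analyze(self, query: str) -> dict: ...


@dataclass
class SovereignOrchestrator:
    """State-owned layer: routing, constraints, logging, escalation, authorization."""
    model: AnalyticalModel  # replaceable vendor component
    constraints: list = field(default_factory=lambda: ["no_autonomous_engagement"])
    audit_log: list = field(default_factory=list)

    def route(self, query: str) -> dict:
        result = self.model.analyze(query)  # vendor provides analysis only
        flags = result.get("flags", [])
        violated = [c for c in self.constraints if c in flags]
        self.audit_log.append({"query": query, "flags": flags})  # state-owned logging
        if violated:
            # Constraint hit: escalate to a human rather than act.
            return {"action": "escalate", "to": "human_commander", "why": violated}
        # Even a clean analysis never triggers action without human approval.
        return {"action": "await_human_approval", "analysis": result}

    def swap_model(self, new_model: AnalyticalModel) -> None:
        """Replaceability: the vendor component changes; policy and logs do not."""
        self.model = new_model
```

Under this sketch, swapping the vendor model via `swap_model` leaves the constraint set and audit log untouched, which is the replaceability property the framework claims.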

Load-bearing premise

A layered, model-agnostic command-support design can be implemented such that commercial models are fully replaceable without reducing essential system performance or exposing proprietary vendor information.

What would settle it

A practical demonstration in which swapping one commercial model for another within the proposed framework either degrades the system's decision-making capability or allows the new supplier to influence policy boundaries or approval processes.
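A minimal sketch of such a settling test, assuming a single task-accuracy metric and a tolerance chosen purely for illustration (both hypothetical, not from the paper): the swap succeeds only if the state-owned policy set is identical before and after and capability does not degrade beyond the tolerance.

```python
def replaceability_holds(policy_before: frozenset, policy_after: frozenset,
                         accuracy_before: float, accuracy_after: float,
                         tolerance: float = 0.05) -> bool:
    """True iff a model swap preserved both policy control and capability.

    policy_*   : the state-owned constraint/approval set before and after the swap
    accuracy_* : task accuracy of the system before and after the swap
    tolerance  : illustrative threshold for 'no essential loss of capability'
    """
    policy_intact = policy_before == policy_after          # supplier cannot move boundaries
    capability_kept = (accuracy_before - accuracy_after) <= tolerance
    return policy_intact and capability_kept
```

A failed check on either conjunct would be exactly the refuting demonstration described above: capability loss falsifies the no-loss guarantee, and a changed policy set shows supplier influence over the approval boundary.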

read the original abstract

Recent events surrounding the relationship between frontier AI suppliers and national-security customers have made a structural problem newly visible: once a privately governed model becomes embedded in military workflows, the supplier can influence not only technical performance but also the operational boundary conditions under which the system may be used. This paper argues that the central strategic issue is not merely access to capable models, but preservation of decision sovereignty: the state's ability to retain authority over decision policy, version control, fallback behavior, auditability, and final action approval even when analytical modules are sourced from commercial vendors. Using the public Anthropic–Pentagon dispute of 2026, the broader history of Project Maven, and recent U.S., NATO, U.K., and intelligence-community guidance as a motivating context, the paper develops a trade-secret-safe architectural formulation of the Energetic Paradigm as a layered, model-agnostic command-support design. In this formulation, supplier models remain replaceable analytical components, while routing, constraints, logging, escalation, and action authorization remain state-owned functions. The paper contributes three things: a definition of decision sovereignty for military AI; a threat model for supplier-induced boundary control; and a public architectural specification showing how model replaceability, human authority, and sovereign orchestration can reduce strategic dependency without requiring disclosure of proprietary implementation details. The argument is conceptual rather than experimental, but it yields concrete implications for procurement, governance, and alliance interoperability.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper presents a conceptual architectural framework, termed the Energetic Paradigm, for preserving decision sovereignty in military AI applications. It posits that by adopting a layered, model-agnostic design, states can maintain control over decision policies, version control, fallback mechanisms, auditability, and final approvals even when using commercial AI models as analytical components. Drawing on the Anthropic-Pentagon dispute of 2026, Project Maven history, and international guidance, the work defines decision sovereignty, outlines a threat model involving supplier boundary control, and provides a public specification for replaceable models with sovereign orchestration layers.

Significance. Should the framework prove implementable, it would offer significant value by addressing a critical gap in military AI governance: the risk of ceding operational authority to private vendors. The ideas could inform procurement policies, enhance alliance interoperability, and promote designs that prioritize human oversight and state control. As a conceptual contribution without empirical validation, its impact depends on adoption in practice, but it highlights important strategic considerations in the field.

major comments (2)
  1. Architectural Specification section: the central claim that supplier models remain fully replaceable without loss of essential capability rests on the orchestration layer compensating for model-specific behaviors (output formatting, uncertainty calibration, domain-tuned reasoning) using only public interfaces. No concrete mechanism, protocol, or worked example is provided to show how this compensation occurs across model families while staying trade-secret-safe, leaving the no-loss guarantee as an unverified assumption that is load-bearing for the replaceability argument.
  2. Threat model and escalation components: while the threat model for supplier-induced boundary control is motivated by public events, the manuscript does not explicitly map each identified threat to mitigation steps in the state-owned layers (routing, constraints, logging, authorization), making it difficult to assess whether the proposed design actually neutralizes the risks without additional assumptions.
minor comments (2)
  1. Abstract: the term 'Energetic Paradigm' appears without any explanatory phrase or reference to its conceptual origins, which reduces immediate clarity for readers unfamiliar with the framing.
  2. Motivating context: the references to specific U.S., NATO, U.K., and intelligence-community guidance documents would be strengthened by adding precise citations or footnotes rather than general mentions.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments on our conceptual framework. The manuscript is intentionally high-level to preserve trade-secret safety while specifying public interfaces; we address each point below and will revise accordingly to strengthen clarity without altering the core claims.

read point-by-point responses
  1. Referee: Architectural Specification section: the central claim that supplier models remain fully replaceable without loss of essential capability rests on the orchestration layer compensating for model-specific behaviors (output formatting, uncertainty calibration, domain-tuned reasoning) using only public interfaces. No concrete mechanism, protocol, or worked example is provided to show how this compensation occurs across model families while staying trade-secret-safe, leaving the no-loss guarantee as an unverified assumption that is load-bearing for the replaceability argument.

    Authors: We agree the replaceability claim would be strengthened by greater specificity on compensation. The Energetic Paradigm relies on the orchestration layer using only public interfaces (standardized APIs, structured JSON schemas, and calibration protocols) for normalization, post-hoc uncertainty adjustment via ensemble wrappers, and policy-based constraints. Specific model behaviors are abstracted away to avoid proprietary disclosure. We will revise the Architectural Specification section to include a high-level protocol outline and pseudocode worked example demonstrating cross-family output normalization and fallback routing. This makes the mechanism explicit while remaining conceptual and trade-secret-safe. revision: partial

  2. Referee: Threat model and escalation components: while the threat model for supplier-induced boundary control is motivated by public events, the manuscript does not explicitly map each identified threat to mitigation steps in the state-owned layers (routing, constraints, logging, authorization), making it difficult to assess whether the proposed design actually neutralizes the risks without additional assumptions.

    Authors: We accept that an explicit mapping is needed for rigorous assessment. The threats (supplier boundary enforcement, version lock-in, audit opacity) drawn from the Anthropic-Pentagon dispute and Project Maven are addressed by state-owned routing for policy overrides, constraints for usage limits, logging for traceability, and authorization for final approvals. We will add a table in the Threat Model section explicitly mapping each threat to its mitigation in the state-owned layers. This will demonstrate neutralization without extra assumptions. revision: yes
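The normalization-plus-fallback mechanism promised in the first rebuttal response might look like the following sketch. The common schema fields, the vendor payload keys (`answer`, `score`, `warnings`), and the function names are illustrative assumptions of this review, not the authors' protocol.

```python
# State-owned common schema that every vendor payload must be mapped onto.
COMMON_SCHEMA = {"assessment", "confidence", "caveats"}


def normalize(raw: dict) -> dict:
    """Map a vendor-specific payload onto the state-owned common schema."""
    return {
        "assessment": raw.get("assessment") or raw.get("answer", ""),
        "confidence": float(raw.get("confidence", raw.get("score", 0.0))),
        "caveats": list(raw.get("caveats", raw.get("warnings", []))),
    }


def route_with_fallback(query: str, primary, fallback) -> dict:
    """Try the primary vendor model; on any failure, reroute to the fallback."""
    for model in (primary, fallback):
        try:
            out = normalize(model(query))
            if set(out) == COMMON_SCHEMA:  # schema conformance is state-enforced
                return out
        except Exception:
            continue  # a vendor failure must not halt command support
    raise RuntimeError("no analytical component available; escalate to human operator")
```

The point of the sketch is that normalization and fallback order are decided entirely in state-owned code against public payload keys, so a vendor swap requires no access to proprietary internals.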

Circularity Check

0 steps flagged

No circularity: conceptual framework built from external events and guidance

full rationale

The manuscript is entirely conceptual and contains no equations, fitted parameters, derivations, or self-referential definitions. It defines decision sovereignty, presents a threat model, and proposes a layered architectural specification using public events (Anthropic-Pentagon dispute, Project Maven) and existing government guidance as inputs. No step reduces a claimed result to its own inputs by construction, and no load-bearing premise depends on a self-citation chain that itself lacks independent verification. The argument remains self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The central claim rests on the domain assumption that commercial models can be isolated as replaceable analytical components while state-owned layers retain full control; no free parameters or fitted values are involved because the work is purely conceptual.

axioms (1)
  • domain assumption Commercial AI models can be treated as interchangeable analytical modules without compromising essential mission functionality when state-controlled layers handle routing, constraints, and authorization.
    Invoked throughout the description of the layered design to justify replaceability.
invented entities (1)
  • Energetic Paradigm no independent evidence
    purpose: A layered, model-agnostic command-support design that separates supplier models from sovereign control functions.
    Introduced as the core architectural formulation of the paper.

pith-pipeline@v0.9.0 · 5564 in / 1475 out tokens · 54800 ms · 2026-05-15T00:57:18.030493+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

50 extracted references · 50 canonical work pages · 3 internal anchors

  1. [1]

    Anthropic sues to block Pentagon blacklisting over AI use restrictions

    Reuters. “Anthropic sues to block Pentagon blacklisting over AI use restrictions.” March 9, 2026

  2. [2]

    Anthropic courted the Pentagon. Here’s why it walked away

    Reuters. “Anthropic courted the Pentagon. Here’s why it walked away.” March 4, 2026

  3. [3]

    Anthropic sues Pentagon over national security risk label

    The Washington Post. “Anthropic sues Pentagon over national security risk label.” March 9, 2026

  4. [4]

    Pentagon’s chief tech officer says he clashed with AI company Anthropic over autonomous warfare

    Associated Press. “Pentagon’s chief tech officer says he clashed with AI company Anthropic over autonomous warfare.” March 7, 2026

  5. [5]

    Anthropic and Palantir Partner to Bring Claude AI Models to AWS for U.S. Government Intelligence and Defense Operations

    Anthropic and Palantir. “Anthropic and Palantir Partner to Bring Claude AI Models to AWS for U.S. Government Intelligence and Defense Operations.” November 7, 2024

  6. [6]

    Highlights from the AWS re:Invent 2024 Public Sector Innovation Session

    Amazon Web Services. “Highlights from the AWS re:Invent 2024 Public Sector Innovation Session.” December 3, 2024

  7. [7]

    Anthropic and the Department of Defense to Advance Responsible AI in Defense Operations

    Anthropic. “Anthropic and the Department of Defense to Advance Responsible AI in Defense Operations.” July 14, 2025

  8. [8]

    Claude’s Constitution

    Anthropic. “Claude’s Constitution.” Accessed March 2026.

  9. [9]

    System Card: Claude Opus 4 & Claude Sonnet 4

    Anthropic. “System Card: Claude Opus 4 & Claude Sonnet 4.” July 16, 2025

  10. [10]

    DoD Adopts Ethical Principles for Artificial Intelligence

    U.S. Department of Defense. “DoD Adopts Ethical Principles for Artificial Intelligence.” February 24, 2020

  11. [11]

    Responsible Artificial Intelligence Strategy and Implementation Pathway

    U.S. Department of Defense. Responsible Artificial Intelligence Strategy and Implementation Pathway. June 2022

  12. [12]

    DoD Directive 3000.09: Autonomy in Weapon Systems

    U.S. Department of Defense. DoD Directive 3000.09: Autonomy in Weapon Systems. January 25, 2023

  13. [13]

    Office of the Director of National Intelligence. Principles of Artificial Intelligence Ethics for the Intelligence Community. 2020

  14. [14]

    Office of the Director of National Intelligence. Artificial Intelligence Ethics Framework for the Intelligence Community. 2020

  15. [15]

    Common Intelligence Community Interim Guidance Regarding the Acquisition and Use of Foundation AI Models

    U.S. Intelligence Community. Common Intelligence Community Interim Guidance Regarding the Acquisition and Use of Foundation AI Models. 2024

  16. [16]

    Artificial Intelligence Risk Management Framework (AI RMF 1.0)

    National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1, 2023

  17. [17]

    M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence

    Office of Management and Budget. M-24-10: Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. March 28, 2024

  18. [18]

    M-24-18: Advancing the Responsible Acquisition of Artificial Intelligence in Government

    Office of Management and Budget. M-24-18: Advancing the Responsible Acquisition of Artificial Intelligence in Government. September 24, 2024

  19. [19]

    M-25-22: Driving Efficient Acquisition of Artificial Intelligence in Government

    Office of Management and Budget. M-25-22: Driving Efficient Acquisition of Artificial Intelligence in Government. April 3, 2025

  20. [20]

    National Security Commission on Artificial Intelligence. Final Report. 2021

  21. [21]

    Algorithmic Warfare Cross-Functional Team (Project Maven)

    U.S. Department of Defense. Algorithmic Warfare Cross-Functional Team (Project Maven). April 25, 2017

  22. [22]

    Contracting personnel use AI, Maven Smart System simulation during warfighter exercise

    U.S. Army. “Contracting personnel use AI, Maven Smart System simulation during warfighter exercise.” March 3, 2025

  23. [23]

    Summary of the NATO Artificial Intelligence Strategy

    NATO. Summary of the NATO Artificial Intelligence Strategy. October 22, 2021

  24. [24]

    Summary of NATO’s Revised Artificial Intelligence Strategy

    NATO. Summary of NATO’s Revised Artificial Intelligence Strategy. July 10, 2024

  25. [25]

    NATO acquires AI-enabled warfighting system

    NATO Communications and Information Agency. “NATO acquires AI-enabled warfighting system.” April 14, 2025

  26. [26]

    Ambitious, Safe, Responsible: Our Approach to the Delivery of AI-Enabled Capability in Defence

    U.K. Ministry of Defence. Ambitious, Safe, Responsible: Our Approach to the Delivery of AI-Enabled Capability in Defence. June 2022

  27. [27]

    Emelia Probasco, Helen Toner, Matthew Burtell, and Tim G. J. Rudner. AI for Military Decision-Making: Harnessing the Advantages and Avoiding the Risks. Center for Security and Emerging Technology, April 2025

  28. [28]

    Battlefield Applications for Human-Machine Teaming: Demonstrating Value, Experimenting with New Capabilities, and Accelerating Adoption

    Tate Nurkin and Julia Siegel. Battlefield Applications for Human-Machine Teaming: Demonstrating Value, Experimenting with New Capabilities, and Accelerating Adoption. Atlantic Council, August 2023

  29. [29]

    AI in Military Decision Support Systems: A Review of Developments and Debates

    Anna Nadibaidze, Ingvild Bode, and Qiaochu Zhang. AI in Military Decision Support Systems: A Review of Developments and Debates. Center for War Studies, 2024

  30. [30]

    Artificial Intelligence in Command and Control

    Lena Trabucco and Esben Salling Larsen. Artificial Intelligence in Command and Control. Centre for Military Studies, University of Copenhagen, 2025

  31. [31]

    NATO Decision-Making in the Age of Big Data and Artificial Intelligence

    Sonia Lucarelli, Alessandro Marrone, and Francesco N. Moro, eds. NATO Decision-Making in the Age of Big Data and Artificial Intelligence. NATO Allied Command Transformation, 2021

  32. [32]

    Trusting machine intelligence: artificial intelligence and human-autonomy teaming in military operations

    Michael Mayer. “Trusting machine intelligence: artificial intelligence and human-autonomy teaming in military operations.” Defense & Security Analysis 39, no. 4 (2023): 521–538

  33. [33]

    Human-Machine Interaction and Human Agency in the Military Domain

    Ingvild Bode. Human-Machine Interaction and Human Agency in the Military Domain. Centre for International Governance Innovation Policy Brief No. 193, 2025

  34. [34]

    Reconciling trust and control in the military use of artificial intelligence

    Tim McFarland. “Reconciling trust and control in the military use of artificial intelligence.” International Journal of Law and Information Technology 30, no. 4 (2022): 472–495

  35. [35]

    Trusting AI: Integrating Artificial Intelligence into the Army’s Professional Expert Knowledge

    C. Anthony Pfaff. Trusting AI: Integrating Artificial Intelligence into the Army’s Professional Expert Knowledge. U.S. Army War College Press, 2023

  36. [36]

    German Federal Office for Information Security (BSI) and ANSSI. Design Principles for LLM-Based Systems with Zero Trust. 2025

  37. [37]

    Responsible artificial intelligence governance: A review and research agenda

    Eleftherios Papagiannidis, Panagiotis Mikalef, and colleagues. “Responsible artificial intelligence governance: A review and research agenda.” Technological Forecasting and Social Change 210 (2025)

  38. [38]

    Research priorities for robust and beneficial artificial intelligence

    Stuart Russell, Daniel Dewey, and Max Tegmark. “Research priorities for robust and beneficial artificial intelligence.” AI Magazine 36, no. 4 (2015): 105–114

  39. [39]

    Concrete Problems in AI Safety

    Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. “Concrete problems in AI safety.” arXiv:1606.06565, 2016

  40. [40]

    The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

    Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, and others. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv:1802.07228, 2018

  41. [41]

    On the Opportunities and Risks of Foundation Models

    Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, and others. “On the opportunities and risks of foundation models.” arXiv:2108.07258, 2021

  42. [42]

    Ethical and social risks of harm from Language Models

    Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, Matthew Mellor, and others. “Ethical and social risks of harm from language models.” arXiv:2112.04359, 2022

  43. [43]

    Accountable algorithms

    Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson, and Harlan Yu. “Accountable algorithms.” University of Pennsylvania Law Review 165, no. 3 (2017): 633–705

  44. [44]

    Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing

    Inioluwa Deborah Raji, Andrew Smart, Rebecca N. White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, and Parker Barnes. “Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing.” Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (2020): 33–44

  45. [45]

    Recommendation of the Council on Artificial Intelligence

    OECD. Recommendation of the Council on Artificial Intelligence. Originally adopted 2019; revised 2024

  46. [46]

    UNESCO. Recommendation on the Ethics of Artificial Intelligence. 2021

  47. [47]

    Ethics Guidelines for Trustworthy AI

    High-Level Expert Group on Artificial Intelligence. Ethics Guidelines for Trustworthy AI. European Commission, 2019

  48. [48]

    Ethical principles for artificial intelligence in national defence

    Mariarosaria Taddeo and Luciano Floridi. “Ethical principles for artificial intelligence in national defence.” Minds and Machines 31 (2021): 227–234

  49. [49]

    Review of the 2023 U.S. Policy on Autonomy in Weapon Systems

    Human Rights Watch and Harvard Law School International Human Rights Clinic. Review of the 2023 U.S. Policy on Autonomy in Weapon Systems. February 2023

  50. [50]

    Google scraps promise not to develop AI weapons

    The Verge. “Google scraps promise not to develop AI weapons.” February 2025