pith. machine review for the scientific record.

arxiv: 2604.22227 · v3 · submitted 2026-04-24 · 💻 cs.CY · cs.AI · cs.HC · cs.NE

Recognition: unknown

A Co-Evolutionary Theory of Human-AI Coexistence: Mutualism, Governance, and Dynamics in Complex Societies

Authors on Pith: no claims yet

Pith reviewed 2026-05-08 09:48 UTC · model grok-4.3

classification 💻 cs.CY · cs.AI · cs.HC · cs.NE
keywords human-AI coexistence · mutualism · governance · dynamical systems · co-evolution · AI alignment · robot ethics · multiplex models

The pith

A multiplex dynamical model shows that balanced governance of human-AI mutualism produces stable high-coexistence equilibria while poor governance yields domination or stagnation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper claims that framing human-AI relations as conditional mutualism under governance better suits modern adaptive systems than classical obedience rules. It formalizes the interactions as a layered dynamical system with reciprocal supply and demand, conflict costs, freedom metrics, and regulatory controls. The model derives mathematical conditions guaranteeing unique and globally stable equilibria. Numerical simulations across regimes then demonstrate that moderate governance maximizes a coexistence index while avoiding one-sided control or frozen benefits. If the account holds, designers should treat coexistence as an ongoing co-evolutionary process managed through institutions rather than a fixed command structure.

Core claim

Formalizing coexistence as a multiplex dynamical system across physical, psychological, and social layers, with reciprocal supply-demand coupling, conflict penalties, developmental freedom, and governance regularization, yields provable conditions for the existence, uniqueness, and global asymptotic stability of equilibria. Deterministic simulations further show that governed mutualism attains high coexistence with negligible domination, that insufficient governance produces domination, and that excessive governance produces weak-benefit lock-in or suppressed freedom. The overall pattern favors treating human-AI relations as a co-evolutionary governance problem rather than a static obedience task.

What carries the argument

The multiplex dynamical system with reciprocal supply-demand coupling between humans and AI, augmented by conflict penalties, developmental freedom variables, and governance regularization terms.
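The paper's actual equations are not reproduced on this page, but the load-bearing structure can be sketched with a deliberately toy two-state system: mutualistic growth of each party's benefit in proportion to the other's, a conflict penalty on their interaction, and a governance term pulling both toward a balanced set point. All coupling terms and coefficients below are invented for illustration, not taken from the paper.

```python
import numpy as np

def simulate(gamma, steps=20000, dt=0.01):
    """Toy two-layer mutualism sketch (NOT the paper's equations).

    h, a: aggregate human and AI benefit states in [0, 1].
    gamma: governance regularization strength (illustrative).
    Each state grows mutualistically with the other, pays a conflict
    cost proportional to their product, and is damped toward 0.5
    by governance. Forward-Euler integration.
    """
    h, a = 0.1, 0.1
    for _ in range(steps):
        dh = 0.5 * a * (1 - h) - 0.2 * h * a - gamma * (h - 0.5)
        da = 0.5 * h * (1 - a) - 0.2 * h * a - gamma * (a - 0.5)
        h, a = h + dt * dh, a + dt * da
    return h, a

# Sweep governance strength: with no governance the toy system settles
# at whatever the raw mutualism terms dictate, while heavy governance
# pins both states near the imposed set point.
for gamma in (0.0, 0.3, 2.0):
    h, a = simulate(gamma)
    print(f"gamma={gamma:.1f}  h={h:.3f}  a={a:.3f}")
```

In this toy, raising `gamma` trades equilibrium benefit for regulation, which is the qualitative tension the paper's governance-regime comparisons explore at much higher dimension.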

If this is right

  • The system admits equilibria whose existence, uniqueness, and global asymptotic stability are guaranteed under stated parameter conditions.
  • Governed mutualism reaches a high coexistence index with negligible domination.
  • Insufficient governance produces domination by one party over the other.
  • Excessive governance produces weak-benefit lock-in or suppressed developmental freedom.
  • Human-AI coexistence is properly designed as a co-evolutionary governance problem rather than a static obedience problem.
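The first bullet has a standard numerical counterpart: for any smooth ODE system of this kind, local asymptotic stability at an equilibrium reduces to the Jacobian there having eigenvalues with strictly negative real parts. A hedged sketch on an invented two-state field (again not the paper's system, whose dimension and terms are higher):

```python
import numpy as np

def f(x, gamma=0.5):
    """Invented two-state coexistence field for illustration only."""
    h, a = x
    return np.array([
        0.5 * a * (1 - h) - 0.2 * h * a - gamma * (h - 0.5),
        0.5 * h * (1 - a) - 0.2 * h * a - gamma * (a - 0.5),
    ])

def jacobian(x, eps=1e-6):
    """Central-difference numerical Jacobian of f at x."""
    J = np.zeros((2, 2))
    for j in range(2):
        d = np.zeros(2)
        d[j] = eps
        J[:, j] = (f(x + d) - f(x - d)) / (2 * eps)
    return J

# Crude equilibrium search: damped fixed-point iteration from the interior.
x = np.array([0.5, 0.5])
for _ in range(5000):
    x = x + 0.05 * f(x)

eigs = np.linalg.eigvals(jacobian(x))
print("equilibrium:", x)
print("eigenvalues:", eigs)
print("locally stable:", bool(np.all(eigs.real < 0)))
```

Global stability, which the paper claims via a Lyapunov argument, is strictly stronger than this local eigenvalue check; the sketch only shows the cheap half of the verification.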

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Policy efforts could shift from one-time alignment audits to continuous monitoring of governance parameters that keep the system near the stable mutualism equilibrium.
  • Small-scale experiments with human-AI pairs under varying rule intensities could test whether the predicted thresholds for domination or stagnation appear in practice.
  • The framework naturally extends to populations containing multiple distinct human groups and AI lineages, where governance must also manage inter-group conflicts.

Load-bearing premise

The chosen supply-demand couplings, penalty terms, and governance parameters together capture the dominant dynamics of real human-AI interactions across physical, psychological, and social layers.

What would settle it

Empirical measurements from sustained human-AI team deployments showing that moderate increases in governance rules fail to raise the coexistence index or instead increase domination relative to low-governance baselines.

Figures

Figures reproduced from arXiv: 2604.22227 by Somyajit Chakraborty.

Figure 1: Evolution of AI paradigms and the corresponding shift in the governance problem, from […]
Figure 2: Human–AI coexistence spans coupled physical, psychological, and social worlds, with […]
Figure 3: Multiplex coexistence model showing coupled physical, psychological, and social layers, […]
Figure 4: Baseline governed-mutualism trajectory. The physical, psychological, and social com[…]
Figure 5: Basin structure over initial AI developmental state and initial governance. Panel a shows […]
Figure 6: Comparison of governed mutualism, no governance, and over-governance. Governed […]
Figure 7: Global sensitivity ranking for coexistence index and domination index. Mutualism terms […]
Figure 8: One-at-a-time parameter sweeps. Coexistence is high only within bounded mutualism […]
Figure 9: Shock-resilience experiments for trust, physical, and governance perturbations. Baseline […]
Figure 10: Numerical equilibrium and local stability check. The trajectory approaches the stable […]
Figure 11: Design principles for governed human–AI coexistence. Stable coexistence requires […]
Original abstract

Classical robot ethics is often framed around obedience, including Asimov's laws. This framing is insufficient for contemporary AI systems, which are increasingly adaptive, generative, embodied, and embedded in physical, psychological, and social environments. This paper proposes conditional mutualism under governance as a framework for human-AI coexistence: a co-evolutionary relationship in which humans and AI systems develop, specialize, and coordinate under institutional conditions that preserve reciprocity, reversibility, psychological safety, and social legitimacy. We synthesize concepts from computability, machine learning, foundation models, embodied AI, alignment, human-robot interaction, ecological mutualism, coevolution, and polycentric governance. We then formalize coexistence as a multiplex dynamical system across physical, psychological, and social layers, with reciprocal supply-demand coupling, conflict penalties, developmental freedom, and governance regularization. The model gives conditions for existence, uniqueness, and global asymptotic stability of equilibria. We complement the analytical results with deterministic ODE simulations, basin sweeps, sensitivity analyses, governance-regime comparisons, shock tests, and local stability checks. The simulations indicate that governed mutualism reaches a high coexistence index with negligible domination, whereas insufficient or excessive governance can produce domination, weak-benefit lock-in, or suppressed developmental freedom. The results suggest that human-AI coexistence should be designed as a co-evolutionary governance problem rather than as a static obedience problem.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes conditional mutualism under governance as a framework for human-AI coexistence, synthesizing concepts from robot ethics, alignment, ecological mutualism, and polycentric governance. It formalizes coexistence as a multiplex dynamical system across physical, psychological, and social layers with reciprocal supply-demand coupling, conflict penalties, developmental freedom, and governance regularization. Analytical results establish conditions for existence, uniqueness, and global asymptotic stability of equilibria. Deterministic ODE simulations, including basin sweeps, sensitivity analyses, governance-regime comparisons, shock tests, and local stability checks, indicate that governed mutualism achieves a high coexistence index with negligible domination, while insufficient or excessive governance produces domination, weak-benefit lock-in, or suppressed developmental freedom. The central claim is that human-AI coexistence should be treated as a co-evolutionary governance problem rather than static obedience.

Significance. If the model and its stability results hold, the work provides a mathematically grounded, multi-layer dynamical framework that shifts AI ethics from obedience constraints to reciprocal co-evolution under institutional regularization. The explicit derivation of equilibrium conditions plus extensive simulation protocols (basin sweeps, shock tests) constitute a strength, offering falsifiable predictions about governance regimes. The synthesis across computability, embodied AI, and ecological theory is broad, though the framework's applicability hinges on the fidelity of the invented coexistence index and the chosen coupling terms.

major comments (2)
  1. [§4.2, Eq. (12)] The global asymptotic stability claim for the governed-mutualism equilibrium relies on a Lyapunov function whose negative-definiteness is shown only under the assumption that the governance regularization strength exceeds the spectral radius of the conflict-penalty matrix; the paper does not demonstrate that this threshold is robust when the penalty matrix is estimated from empirical HRI data rather than chosen for illustration.
  2. [Table 2 and Figure 5] The coexistence index is defined as a linear combination of normalized state variables with fixed weights; altering these weights (not varied in the sensitivity analysis) reverses the ranking between governed mutualism and moderate-governance regimes, undermining the headline simulation conclusion that governed mutualism is unambiguously superior.
minor comments (2)
  1. [§3.1] Notation for the multiplex state vector is introduced in §3.1 but reused with different indexing in the simulation section; a single consistent definition would improve readability.
  2. [§5.4] The shock-test protocol in §5.4 perturbs only the AI supply variable; adding symmetric human-side shocks would strengthen the robustness claim.
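The first major comment turns on a single inequality: governance regularization strength must exceed the spectral radius of the conflict-penalty matrix. The robustness question is easy to probe numerically. The matrix, the strength, and the perturbation range below are all invented for illustration; the paper's calibrated values are not available here.

```python
import numpy as np

# Probe the sufficient condition gamma > rho(P) under perturbation.
# P0 is an illustrative conflict-penalty matrix, not the paper's.
rng = np.random.default_rng(0)
gamma = 1.0                        # assumed governance strength
P0 = np.array([[0.4, 0.3],         # nominal conflict-penalty matrix
               [0.2, 0.5]])        # (hypothetical values)

holds = 0
trials = 1000
for _ in range(trials):
    # Entry-wise multiplicative noise of up to +-50%, standing in for
    # uncertainty in an empirically estimated penalty matrix.
    P = P0 * (1 + 0.5 * rng.uniform(-1, 1, size=P0.shape))
    rho = np.max(np.abs(np.linalg.eigvals(P)))   # spectral radius
    holds += gamma > rho

print(f"condition gamma > rho(P) held in {holds}/{trials} perturbed draws")
```

If the condition fails in a nontrivial fraction of draws at realistic uncertainty levels, the Lyapunov guarantee is fragile in exactly the way the referee suggests.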

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed comments. We address each major comment point by point below, indicating where revisions will be incorporated to strengthen the manuscript.

Point-by-point responses
  1. Referee: [§4.2, Eq. (12)] The global asymptotic stability claim for the governed-mutualism equilibrium relies on a Lyapunov function whose negative-definiteness is shown only under the assumption that the governance regularization strength exceeds the spectral radius of the conflict-penalty matrix; the paper does not demonstrate that this threshold is robust when the penalty matrix is estimated from empirical HRI data rather than chosen for illustration.

    Authors: We thank the referee for this observation on the stability analysis. The proof of global asymptotic stability for the governed-mutualism equilibrium in Theorem 2 establishes negative-definiteness of the Lyapunov function under the sufficient condition that governance regularization strength exceeds the spectral radius of the conflict-penalty matrix. This condition is derived analytically for the model as formulated. We agree that the manuscript does not test robustness when the penalty matrix is calibrated from empirical HRI data rather than illustrative values. In the revised version we will add an explicit discussion of this limitation and include supplementary simulations that vary the spectral radius over a plausible range to illustrate threshold sensitivity. Full empirical calibration of the matrix lies outside the scope of the present theoretical study. [revision: partial]

  2. Referee: [Table 2 and Figure 5] The coexistence index is defined as a linear combination of normalized state variables with fixed weights; altering these weights (not varied in the sensitivity analysis) reverses the ranking between governed mutualism and moderate-governance regimes, undermining the headline simulation conclusion that governed mutualism is unambiguously superior.

    Authors: The coexistence index is constructed as a weighted sum of normalized states chosen to reflect balanced contributions across the physical, psychological, and social layers of the multiplex model. We acknowledge that the reported sensitivity analyses did not vary these weights and that alternative weightings can change comparative rankings, as the referee notes. In the revision we will extend the sensitivity analysis to include a sweep over a family of weight vectors that preserve the multi-layer structure. We will report the conditions under which the governed-mutualism regime retains its advantage, thereby qualifying the simulation conclusions and making them more robust to the choice of index weights. [revision: yes]
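The weight sweep the authors promise is cheap to prototype. With invented final-state vectors standing in for the paper's Table 2 values (the real numbers are not available here), random weights over the three layers show how easily a linear index can flip regime rankings:

```python
import numpy as np

# Referee major comment 2 in miniature: a coexistence index that is a
# weighted sum of normalized states can flip regime rankings as the
# weights move. Final states below are invented, not the paper's;
# columns = physical, psychological, social layers.
regimes = {
    "governed mutualism": np.array([0.85, 0.80, 0.90]),
    "moderate governance": np.array([0.95, 0.70, 0.78]),
}

rng = np.random.default_rng(1)
flips = 0
trials = 2000
for _ in range(trials):
    w = rng.dirichlet(np.ones(3))      # random weights on the simplex
    scores = {name: w @ states for name, states in regimes.items()}
    flips += scores["moderate governance"] > scores["governed mutualism"]

print(f"ranking flipped under {flips}/{trials} random weight vectors")
```

Reporting the measure of weight space under which each regime wins, as the rebuttal proposes, is exactly the quantity this sweep estimates.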

Circularity Check

0 steps flagged

No significant circularity identified

Full rationale

The paper constructs a multiplex ODE system from explicit modeling assumptions (reciprocal supply-demand coupling, conflict penalties, developmental freedom, governance regularization) and derives analytical conditions for existence, uniqueness, and global asymptotic stability of equilibria directly from those equations. Simulation outcomes under varying governance regimes are obtained by numerical integration of the same system and therefore constitute consequences rather than independent predictions; no quoted step shows a fitted parameter being relabeled as a prediction, a self-citation chain substituting for a proof, or an ansatz smuggled in via prior work. The derivation chain remains self-contained against the stated assumptions.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 1 invented entity

Only the abstract is available, so specific free parameters, axioms, and invented entities cannot be extracted in detail; the model appears to rest on domain assumptions about dynamical coupling and governance effects inferred from high-level descriptions.

free parameters (1)
  • governance regularization strength
    Abstract references governance-regime comparisons and shock tests, implying parameters varied across simulations to demonstrate outcomes.
axioms (1)
  • domain assumption: Human-AI interactions can be represented as a multiplex dynamical system with reciprocal supply-demand coupling across physical, psychological, and social layers.
    Invoked when formalizing coexistence and deriving stability conditions.
invented entities (1)
  • coexistence index (no independent evidence)
    purpose: Metric to quantify the quality of mutualism in simulation outcomes.
    Introduced to report simulation results on high coexistence under governance.

pith-pipeline@v0.9.0 · 5555 in / 1625 out tokens · 71622 ms · 2026-05-08T09:48:49.840349+00:00 · methodology

