A Co-Evolutionary Theory of Human-AI Coexistence: Mutualism, Governance, and Dynamics in Complex Societies
Pith reviewed 2026-05-08 09:48 UTC · model grok-4.3
The pith
A multiplex dynamical model shows that balanced governance of human-AI mutualism produces stable high-coexistence equilibria while poor governance yields domination or stagnation.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Formalizing coexistence as a multiplex dynamical system across physical, psychological, and social layers, with reciprocal supply-demand coupling, conflict penalties, developmental freedom, and governance regularization, yields provable conditions for the existence, uniqueness, and global asymptotic stability of equilibria. Deterministic simulations further show that governed mutualism attains high coexistence with negligible domination, whereas insufficient governance produces domination and excessive governance produces weak-benefit lock-in or suppressed freedom. The overall pattern favors treating human-AI relations as a co-evolutionary governance problem rather than a static obedience task.
What carries the argument
The multiplex dynamical system with reciprocal supply-demand coupling between humans and AI, augmented by conflict penalties, developmental freedom variables, and governance regularization terms.
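The shape of such a system can be caricatured in a few lines — a minimal two-state sketch with saturating mutualistic coupling, a gap-driven conflict penalty, and a governance term that regularizes the benefit gap. All functional forms and parameter values here are hypothetical illustrations, not the paper's equations or calibration.

```python
# Illustrative reduced sketch, not the paper's model: h = human benefit state,
# a = AI benefit state. b = mutualistic coupling, c = conflict penalty,
# g = governance regularization strength. All values are hypothetical.
def rhs(h, a, b=0.6, c=0.4, g=0.5):
    dh = h * (1.0 - h) + b * h * a / (1.0 + a) - c * h * a * abs(h - a) - g * h * (h - a)
    da = a * (1.0 - a) + b * a * h / (1.0 + h) - c * h * a * abs(a - h) - g * a * (a - h)
    return dh, da

def simulate(h0=0.2, a0=0.8, g=0.5, dt=0.01, steps=20_000):
    """Forward-Euler integration, with states clipped to a bounded box."""
    h, a = h0, a0
    for _ in range(steps):
        dh, da = rhs(h, a, g=g)
        h = min(max(h + dt * dh, 0.0), 2.0)
        a = min(max(a + dt * da, 0.0), 2.0)
    return h, a

h, a = simulate()                       # governed regime from an asymmetric start
coexistence = min(h, a) / max(h, a)     # crude balance index in [0, 1]
```

In this toy version the governance term contracts the gap h − a, so trajectories from asymmetric initial conditions settle near a symmetric equilibrium with a coexistence value close to 1 — the qualitative behavior the paper attributes to governed mutualism.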
If this is right
- The system admits equilibria whose existence, uniqueness, and global asymptotic stability are guaranteed under stated parameter conditions.
- Governed mutualism reaches a high coexistence index with negligible domination.
- Insufficient governance produces domination by one party over the other.
- Excessive governance produces weak-benefit lock-in or suppressed developmental freedom.
- Human-AI coexistence is properly designed as a co-evolutionary governance problem rather than a static obedience problem.
Where Pith is reading between the lines
- Policy efforts could shift from one-time alignment audits to continuous monitoring of governance parameters that keep the system near the stable mutualism equilibrium.
- Small-scale experiments with human-AI pairs under varying rule intensities could test whether the predicted thresholds for domination or stagnation appear in practice.
- The framework naturally extends to populations containing multiple distinct human groups and AI lineages, where governance must also manage inter-group conflicts.
Load-bearing premise
The chosen supply-demand couplings, penalty terms, and governance parameters together capture the dominant dynamics of real human-AI interactions across physical, psychological, and social layers.
What would settle it
Empirical measurements from sustained human-AI team deployments showing that moderate increases in governance rules fail to raise the coexistence index or instead increase domination relative to low-governance baselines.
Original abstract
Classical robot ethics is often framed around obedience, including Asimov's laws. This framing is insufficient for contemporary AI systems, which are increasingly adaptive, generative, embodied, and embedded in physical, psychological, and social environments. This paper proposes conditional mutualism under governance as a framework for human-AI coexistence: a co-evolutionary relationship in which humans and AI systems develop, specialize, and coordinate under institutional conditions that preserve reciprocity, reversibility, psychological safety, and social legitimacy. We synthesize concepts from computability, machine learning, foundation models, embodied AI, alignment, human-robot interaction, ecological mutualism, coevolution, and polycentric governance. We then formalize coexistence as a multiplex dynamical system across physical, psychological, and social layers, with reciprocal supply-demand coupling, conflict penalties, developmental freedom, and governance regularization. The model gives conditions for existence, uniqueness, and global asymptotic stability of equilibria. We complement the analytical results with deterministic ODE simulations, basin sweeps, sensitivity analyses, governance-regime comparisons, shock tests, and local stability checks. The simulations indicate that governed mutualism reaches a high coexistence index with negligible domination, whereas insufficient or excessive governance can produce domination, weak-benefit lock-in, or suppressed developmental freedom. The results suggest that human-AI coexistence should be designed as a co-evolutionary governance problem rather than as a static obedience problem.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes conditional mutualism under governance as a framework for human-AI coexistence, synthesizing concepts from robot ethics, alignment, ecological mutualism, and polycentric governance. It formalizes coexistence as a multiplex dynamical system across physical, psychological, and social layers with reciprocal supply-demand coupling, conflict penalties, developmental freedom, and governance regularization. Analytical results establish conditions for existence, uniqueness, and global asymptotic stability of equilibria. Deterministic ODE simulations, including basin sweeps, sensitivity analyses, governance-regime comparisons, shock tests, and local stability checks, indicate that governed mutualism achieves a high coexistence index with negligible domination, while insufficient or excessive governance produces domination, weak-benefit lock-in, or suppressed developmental freedom. The central claim is that human-AI coexistence should be treated as a co-evolutionary governance problem rather than static obedience.
Significance. If the model and its stability results hold, the work provides a mathematically grounded, multi-layer dynamical framework that shifts AI ethics from obedience constraints to reciprocal co-evolution under institutional regularization. The explicit derivation of equilibrium conditions plus extensive simulation protocols (basin sweeps, shock tests) constitute a strength, offering falsifiable predictions about governance regimes. The synthesis across computability, embodied AI, and ecological theory is broad, though the framework's applicability hinges on the fidelity of the invented coexistence index and the chosen coupling terms.
major comments (2)
- [§4.2, Eq. (12)] The global asymptotic stability claim for the governed-mutualism equilibrium relies on a Lyapunov function whose negative-definiteness is shown only under the assumption that the governance regularization strength exceeds the spectral radius of the conflict-penalty matrix; the paper does not demonstrate that this threshold is robust when the penalty matrix is estimated from empirical HRI data rather than chosen for illustration.
- [Table 2 and Figure 5] The coexistence index is defined as a linear combination of normalized state variables with fixed weights; altering these weights (which the sensitivity analysis does not vary) reverses the ranking between the governed-mutualism and moderate-governance regimes, undermining the headline simulation conclusion that governed mutualism is unambiguously superior.
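The threshold at issue in the first major comment can be probed numerically. The sketch below samples conflict-penalty matrices over a plausible entry range and checks how often an illustrative governance strength clears the sufficient condition g > ρ(C); the matrix dimensions, entry range, and value of g are all hypothetical, not taken from the paper.

```python
import numpy as np

def spectral_radius(C):
    """Largest eigenvalue magnitude of a square matrix C."""
    return float(max(abs(np.linalg.eigvals(C))))

rng = np.random.default_rng(0)
g = 0.8  # illustrative governance regularization strength

# Sample hypothetical nonnegative penalty matrices and count how often the
# sufficient stability condition g > rho(C) fails to hold.
draws = [rng.uniform(0.0, 0.6, size=(3, 3)) for _ in range(1_000)]
violation_rate = sum(spectral_radius(C) >= g for C in draws) / len(draws)
```

A nonzero violation rate under any plausible calibration range would make the referee's point concrete: the stated sufficient condition is not automatically satisfied once C is empirical rather than illustrative.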
minor comments (2)
- [§3.1] Notation for the multiplex state vector is introduced in §3.1 but reused with different indexing in the simulation section; a single consistent definition would improve readability.
- [§5.4] The shock-test protocol in §5.4 perturbs only the AI supply variable; adding symmetric human-side shocks would strengthen the robustness claim.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed comments. We address each major comment point by point below, indicating where revisions will be incorporated to strengthen the manuscript.
Point-by-point responses
-
Referee: [§4.2, Eq. (12)] The global asymptotic stability claim for the governed-mutualism equilibrium relies on a Lyapunov function whose negative-definiteness is shown only under the assumption that the governance regularization strength exceeds the spectral radius of the conflict-penalty matrix; the paper does not demonstrate that this threshold is robust when the penalty matrix is estimated from empirical HRI data rather than chosen for illustration.
Authors: We thank the referee for this observation on the stability analysis. The proof of global asymptotic stability for the governed-mutualism equilibrium in Theorem 2 establishes negative-definiteness of the Lyapunov function under the sufficient condition that governance regularization strength exceeds the spectral radius of the conflict-penalty matrix. This condition is derived analytically for the model as formulated. We agree that the manuscript does not test robustness when the penalty matrix is calibrated from empirical HRI data rather than illustrative values. In the revised version we will add an explicit discussion of this limitation and include supplementary simulations that vary the spectral radius over a plausible range to illustrate threshold sensitivity. Full empirical calibration of the matrix lies outside the scope of the present theoretical study. revision: partial
-
Referee: [Table 2 and Figure 5] The coexistence index is defined as a linear combination of normalized state variables with fixed weights; altering these weights (which the sensitivity analysis does not vary) reverses the ranking between the governed-mutualism and moderate-governance regimes, undermining the headline simulation conclusion that governed mutualism is unambiguously superior.
Authors: The coexistence index is constructed as a weighted sum of normalized states chosen to reflect balanced contributions across the physical, psychological, and social layers of the multiplex model. We acknowledge that the reported sensitivity analyses did not vary these weights and that alternative weightings can change comparative rankings, as the referee notes. In the revision we will extend the sensitivity analysis to include a sweep over a family of weight vectors that preserve the multi-layer structure. We will report the conditions under which the governed-mutualism regime retains its advantage, thereby qualifying the simulation conclusions and making them more robust to the choice of index weights. revision: yes
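The weight sweep the authors promise can be sketched directly: sample convex weight vectors and count how often the regime ranking flips. The per-regime layer values below are hypothetical stand-ins, not the paper's Table 2 numbers.

```python
import numpy as np

# Hypothetical normalized layer states (physical, psychological, social)
# for two governance regimes; stand-in values, not the paper's results.
regimes = {
    "governed_mutualism":  np.array([0.90, 0.80, 0.85]),
    "moderate_governance": np.array([0.95, 0.60, 0.70]),
}

def coexistence_index(states, w):
    """Weighted sum of normalized layer states."""
    return float(states @ w)

rng = np.random.default_rng(1)
trials = 2_000
flips = 0
for _ in range(trials):
    w = rng.dirichlet(np.ones(3))  # random convex weights summing to 1
    if coexistence_index(regimes["moderate_governance"], w) > \
       coexistence_index(regimes["governed_mutualism"], w):
        flips += 1
flip_fraction = flips / trials
```

Reporting the flip fraction over a weight family like this would quantify how much the headline ranking depends on the fixed-weight choice, which is exactly the qualification the rebuttal commits to.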
Circularity Check
No significant circularity identified
Full rationale
The paper constructs a multiplex ODE system from explicit modeling assumptions (reciprocal supply-demand coupling, conflict penalties, developmental freedom, governance regularization) and derives analytical conditions for existence, uniqueness, and global asymptotic stability of equilibria directly from those equations. Simulation outcomes under varying governance regimes are obtained by numerical integration of the same system and therefore constitute consequences rather than independent predictions; no quoted step shows a fitted parameter being relabeled as a prediction, a self-citation chain substituting for a proof, or an ansatz smuggled in via prior work. The derivation chain remains self-contained against the stated assumptions.
Axiom & Free-Parameter Ledger
free parameters (1)
- governance regularization strength
axioms (1)
- domain assumption: human-AI interactions can be represented as a multiplex dynamical system with reciprocal supply-demand coupling across physical, psychological, and social layers.
invented entities (1)
- coexistence index (no independent evidence)