Human-AI Co-Evolution and Epistemic Collapse: A Dynamical Systems Perspective
Pith reviewed 2026-05-08 07:05 UTC · model grok-4.3
The pith
Humans and AI form a coupled dynamical system whose feedback can drive it into a low-diversity, suboptimal state.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
When humans and language models are treated as a coupled dynamical system linked by a feedback loop of usage, generation, and retraining, the analysis reveals three regimes (co-evolutionary enhancement, fragile equilibrium, and degenerative convergence); the transition to the last is driven by increasing AI reliance and appears as an emergent information bottleneck in the loop.
What carries the argument
A minimal three-variable dynamical model consisting of human cognition, data quality, and model capability connected in a feedback loop.
Load-bearing premise
The minimal model with three variables and the assumed feedback loop structure is sufficient to capture the essential dynamics of human-AI co-evolution.
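The paper does not publish its governing equations. As a hedged illustration of what such a three-variable loop could look like, the sketch below couples human cognition h, data quality q, and model capability m through a single reliance parameter; every functional form and parameter (alpha, rho, source) is our assumption, not the authors' model.

```python
def simulate(reliance, steps=10_000, dt=0.01,
             alpha=0.8, rho=0.5, source=1.0):
    """Euler-integrate an illustrative three-variable human-AI loop.

    h: human cognition, q: data quality, m: model capability.
    All functional forms and parameters are assumptions made for
    illustration; the paper does not state its equations.
    """
    h = q = m = 1.0
    for _ in range(steps):
        # Cognition draws on a human baseline plus self-produced work,
        # which shrinks as reliance on the model grows.
        dh = source + (1 - reliance) * q - h
        # Data quality mixes human output with lossier AI output (rho < 1).
        dq = alpha * ((1 - reliance) * h + reliance * rho * m) - q
        # Model capability relaxes toward the quality of its training data.
        dm = q - m
        h, q, m = h + dt * dh, q + dt * dq, m + dt * dm
    return h, q, m
```

In this toy system the no-reliance equilibrium sits near (h, q, m) = (5, 4, 4), and pushing reliance toward 1 drags quality toward the floor set by the human source term, mirroring the low-diversity, suboptimal equilibrium the paper describes.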
What would settle it
Observing whether knowledge diversity and quality decline in real-world settings where AI usage has increased substantially over time, such as in online content creation or academic writing.
Original abstract
Large language models (LLMs) are reshaping how knowledge is produced, with increasing reliance on AI systems for generation, summarization, and reasoning. While prior work has studied cognitive offloading in humans and model collapse in recursive training, these effects are typically considered in isolation. We propose a unified perspective: humans and language models form a coupled dynamical system linked by a feedback loop of usage, generation, and retraining. We introduce a minimal model with three variables -- human cognition, data quality, and model capability -- and show that this feedback can give rise to distinct dynamical regimes. Our analysis identifies three regimes: co-evolutionary enhancement, fragile equilibrium, and degenerative convergence. Through a simple simulation, we demonstrate that increasing reliance on AI can induce a transition toward a low-diversity, suboptimal equilibrium. From an information-theoretic perspective, this transition corresponds to an emergent information bottleneck in the human-AI loop, where entropy reduction reflects loss of diversity and support under closed-loop feedback rather than beneficial compression. These results suggest that the trajectory of AI systems is shaped not only by model design, but by the dynamics of human-AI co-evolution.
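The abstract's information-bottleneck claim, that entropy reduction reflects loss of diversity and support rather than beneficial compression, can be illustrated with a toy closed loop over discrete content types. The mode-seeking regeneration step and all parameters below are our assumptions, not the paper's mechanism.

```python
import numpy as np

def closed_loop_entropy(reliance, rounds=50, k=50, sharpen=2.0, seed=0):
    """Illustrative closed loop over k content types.

    Each round, AI regeneration re-weights the current corpus toward its
    modes (p**sharpen, renormalized); the next corpus is a reliance-weighted
    mix of fresh human content (uniform) and that AI output. All modeling
    choices here are assumptions made for illustration.
    """
    rng = np.random.default_rng(seed)
    p = rng.dirichlet(np.ones(k))      # initial corpus distribution
    human = np.full(k, 1.0 / k)        # fresh human content: uniform
    for _ in range(rounds):
        ai = p ** sharpen
        ai /= ai.sum()                 # mode-seeking AI regeneration
        p = (1 - reliance) * human + reliance * ai
    entropy = -np.sum(p * np.log(np.clip(p, 1e-12, None)))
    effective_support = int(np.sum(p > 1e-2))
    return entropy, effective_support
```

At low reliance the uniform human inflow keeps the corpus near maximum entropy (log k); at high reliance the loop collapses onto a single dominant type, shrinking both entropy and effective support, which is diversity loss under feedback rather than useful compression.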
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript claims that humans and large language models form a coupled dynamical system through feedback loops of usage, generation, and retraining. Using a minimal three-variable model (human cognition, data quality, model capability), the authors identify three dynamical regimes: co-evolutionary enhancement, fragile equilibrium, and degenerative convergence. A simple simulation demonstrates that increasing AI reliance can drive the system toward a low-diversity, suboptimal equilibrium, which from an information-theoretic view corresponds to an emergent information bottleneck in the human-AI loop rather than beneficial compression.
Significance. If the simulation results hold under scrutiny, this work provides a novel unified framework for understanding the interplay between cognitive offloading and model collapse in AI systems. It highlights how human-AI co-evolution dynamics, rather than just model design, can shape trajectories toward epistemic issues. The approach is timely given rapid LLM adoption, but its significance is tempered by the absence of empirical validation or sensitivity analysis in the current presentation.
major comments (3)
- The minimal model is introduced but its governing equations, the specific forms of the feedback loops, parameter values (e.g., coupling rates and thresholds), and simulation implementation details are not provided. This omission is load-bearing because the three regimes and the transition to degenerative convergence are generated by these equations and parameters.
- No error analysis, sensitivity checks to parameter variations, or comparison to real-world data or benchmarks are described. This undermines confidence in the robustness of the claimed transition induced by increasing AI reliance.
- The information-bottleneck interpretation relies on internal entropy reduction within the closed-loop model; it would benefit from a concrete test or external data to confirm it reflects loss of diversity rather than other effects.
minor comments (1)
- The abstract could more clearly distinguish the proposed regimes from prior work on cognitive offloading and model collapse.
Simulated Author's Rebuttal
We thank the referee for their constructive and detailed feedback on our manuscript. The comments highlight important areas for strengthening the presentation of the model and its analysis. We will revise the paper to incorporate additional details and robustness checks where feasible. Our responses to each major comment are provided below.
Point-by-point responses
- Referee: The minimal model is introduced but its governing equations, the specific forms of the feedback loops, parameter values (e.g., coupling rates and thresholds), and simulation implementation details are not provided. This omission is load-bearing because the three regimes and the transition to degenerative convergence are generated by these equations and parameters.
Authors: We agree that the absence of these details limits reproducibility and clarity. In the revised manuscript, we will add a dedicated section presenting the full set of governing equations for the three-variable system (human cognition, data quality, model capability), the explicit functional forms of all feedback loops, the complete list of parameter values including coupling rates and thresholds, and a description of the numerical simulation implementation (including integration method, time steps, and initial conditions). This will allow full reproduction of the reported dynamical regimes. revision: yes
- Referee: No error analysis, sensitivity checks to parameter variations, or comparison to real-world data or benchmarks are described. This undermines confidence in the robustness of the claimed transition induced by increasing AI reliance.
Authors: We acknowledge the value of robustness analysis. We will add a new subsection performing sensitivity analysis by varying key parameters such as AI reliance rates and coupling strengths, and report the stability of the identified regimes and transition points. However, direct comparisons to real-world data or benchmarks are not possible at this stage, as the model is intentionally minimal and abstract; we will expand the discussion to outline potential empirical proxies and limitations. revision: partial
- Referee: The information-bottleneck interpretation relies on internal entropy reduction within the closed-loop model; it would benefit from a concrete test or external data to confirm it reflects loss of diversity rather than other effects.
Authors: The interpretation arises directly from the model's entropy measures on the human cognition variable. We will enhance the analysis by adding explicit calculations and visualizations of diversity metrics (e.g., state variance) to demonstrate the correspondence to loss of diversity. A concrete external test or dataset is not available for this specific closed-loop system, so we will present the result as a theoretical prediction and suggest directions for future empirical validation. revision: partial
- Not addressed: direct comparison to real-world data or benchmarks for the claimed transition and information-bottleneck effect, since no suitable external datasets exist for this coupled human-AI dynamical system.
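The sensitivity analysis the authors promise could take roughly this shape: sweep the reliance parameter across several coupling strengths and locate where equilibrium quality collapses. The closed-form equilibrium below comes from an illustrative linear stand-in model (our assumption), not from the paper's equations, and the transition criterion is arbitrary.

```python
import numpy as np

def equilibrium_quality(reliance, alpha=0.8, rho=0.5, source=1.0):
    """Equilibrium data quality for an illustrative linear human-AI loop
    (our stand-in model, not the paper's equations)."""
    denom = 1.0 - alpha * (1.0 - reliance) ** 2 - alpha * reliance * rho
    return alpha * (1.0 - reliance) * source / denom

# Sweep reliance and a coupling parameter to probe regime robustness.
reliances = np.linspace(0.0, 1.0, 101)
for alpha in (0.6, 0.8, 0.95):
    q = np.array([equilibrium_quality(r, alpha=alpha) for r in reliances])
    # "Transition" here: first reliance level where quality falls below
    # half its no-reliance value (an arbitrary illustrative criterion).
    idx = np.argmax(q < 0.5 * q[0])
    print(f"alpha={alpha}: transition near reliance={reliances[idx]:.2f}")
```

Reporting how the transition point moves as alpha (and similarly rho) varies is the kind of robustness evidence the referee asks for; in this stand-in, quality declines monotonically with reliance for all three coupling strengths.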
Circularity Check
No significant circularity detected
full rationale
The paper introduces a minimal three-variable dynamical model (human cognition, data quality, model capability) with an assumed feedback loop and analyzes its behaviors through simulation to identify three regimes and a transition to low-diversity equilibrium. These outcomes are direct consequences of the model's equations and parameters, as is standard and expected in theoretical modeling papers; the information-bottleneck interpretation is an additional post-hoc perspective applied to the simulation results rather than a reduction of independent claims to inputs. No self-citations, fitted parameters renamed as predictions, ansatzes smuggled via citation, or uniqueness theorems are referenced in the provided text. The derivation is self-contained as an exploration of the proposed model under its stated assumptions.
Axiom & Free-Parameter Ledger
free parameters (1)
- feedback coupling rates and thresholds
axioms (1)
- domain assumption: Human cognition, data quality, and model capability can be adequately represented as three coupled continuous dynamical variables.
invented entities (1)
- Co-evolutionary enhancement, fragile equilibrium, and degenerative convergence regimes (no independent evidence)