Cognitive Amplification vs Cognitive Delegation in Human-AI Systems: A Metric Framework
Pith reviewed 2026-05-15 08:47 UTC · model grok-4.3
The pith
No human-AI system in the tested simulations achieves genuine cognitive amplification.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The paper establishes a metric framework consisting of the Cognitive Amplification Index (CAI*), the Dependency Ratio (D), the Human Reliance Index (HRI), and the Human Cognitive Drift Rate (HCDR). Validation via NetLogo simulations across reliance regimes shows that no configuration achieves genuine amplification, defined as hybrid performance that exceeds both standalone agents while preserving human expertise. A constrained optimization demonstrates that reducing atrophy improves outcomes, but even zero atrophy yields no positive collaborative gain.
What carries the argument
The Cognitive Amplification Index (CAI*), which quantifies whether hybrid output exceeds the stronger standalone agent, together with the three supporting metrics that track structural dependence and temporal skill change.
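Stated as code, plausible forms of these metrics might look as follows. The paper's exact definitions are not reproduced on this page, so `cai_star`, `dependency_ratio`, and `hcdr` below are illustrative assumptions, not the authors' formulas.

```python
def cai_star(p_hybrid, p_human, p_ai):
    """Collaborative gain beyond the best standalone agent (assumed form).
    Positive values indicate genuine amplification."""
    best_solo = max(p_human, p_ai)
    return (p_hybrid - best_solo) / best_solo

def dependency_ratio(ai_contribution, total_output):
    """Share of the hybrid output attributable to the AI (assumed form);
    the Human Reliance Index would track the human-side analogue."""
    return ai_contribution / total_output

def hcdr(human_scores):
    """Human Cognitive Drift Rate: average per-step change in unaided
    human performance over time (negative values indicate erosion)."""
    diffs = [b - a for a, b in zip(human_scores, human_scores[1:])]
    return sum(diffs) / len(diffs)

# Example: the hybrid approaches but does not exceed the AI baseline,
# so CAI* is negative, as in the paper's boundary regimes.
print(cai_star(p_hybrid=0.80, p_human=0.60, p_ai=0.82))  # negative
```

Under these assumed forms, amplification requires the hybrid to beat the stronger agent (CAI* > 0) while HCDR stays non-negative.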
If this is right
- Reducing the atrophy parameter improves retained human capability, collaborative gain, and dependency structure.
- Even at zero atrophy, no positive collaborative gain appears in any tested regime.
- All simulated regimes fall into either AI-dominated delegation or human-preserving but non-competitive interaction.
- The four metrics together characterize both immediate hybrid performance and long-term cognitive sustainability.
Where Pith is reading between the lines
- Designers could embed minimum-human-input rules in AI interfaces to keep systems away from the delegation boundary identified in the simulations.
- The same metrics could be applied to longitudinal field studies that track professional skill retention when AI tools are introduced.
- Policy guidelines for AI in education or medicine could adopt thresholds on the dependency ratio to limit long-term outsourcing of judgment.
- The framework invites direct comparison of different AI architectures by running the same reliance regimes on each.
Load-bearing premise
The agent-based simulation parameters and atrophy dynamics accurately represent real human cognitive processes, reliance behavior, and long-term skill maintenance.
What would settle it
A controlled experiment with actual people and AI tools that records hybrid performance exceeding both the human and AI alone after repeated trials while the human's unaided performance shows no measurable decline.
Original abstract
Artificial intelligence is increasingly embedded in human decision making. In some cases, it enhances human reasoning. In others, it fosters excessive cognitive dependence. This paper introduces a conceptual and mathematical framework to distinguish cognitive amplification, where AI improves hybrid human-AI performance while preserving human expertise, from cognitive delegation, where reasoning is progressively outsourced to the AI system, risking long-term atrophy of human capabilities. We define four operational metrics: the Cognitive Amplification Index, or CAI star, which measures collaborative gain beyond the best standalone agent; the Dependency Ratio, or D, and Human Reliance Index, or HRI, which quantify the structural dominance of the AI within the hybrid output; and the Human Cognitive Drift Rate, or HCDR, which captures the temporal erosion or maintenance of autonomous human performance. Together, these quantities characterize human-AI systems in terms of both immediate hybrid performance and long-term cognitive sustainability. We validate the framework through an agent-based simulation in NetLogo across three reliance regimes and multiple dependency and atrophy configurations. The results distinguish degenerate AI-dominated delegation, human-preserving but weakly competitive interaction, and intermediate boundary regimes that approach the AI baseline while remaining structurally dependent. Across all tested configurations, no regime achieves genuine amplification. A constrained optimization over the atrophy parameter shows that reducing atrophy improves retained human capability, collaborative gain, and dependency structure, but even zero atrophy does not yield positive collaborative gain. The framework therefore provides a practical tool for evaluating whether human-AI systems perform well in a way that also preserves human capability over time.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces a metric framework to distinguish cognitive amplification (AI-enhanced hybrid performance with preserved human expertise) from cognitive delegation (outsourcing leading to atrophy) in human-AI systems. It defines four metrics—Cognitive Amplification Index (CAI*), Dependency Ratio (D), Human Reliance Index (HRI), and Human Cognitive Drift Rate (HCDR)—and validates them via NetLogo agent-based simulations across three reliance regimes and varied dependency/atrophy configurations. The central finding is that no tested regime achieves genuine amplification (CAI* ≤ 0), with a constrained optimization showing that even zero atrophy improves outcomes but fails to produce positive collaborative gain.
Significance. If the simulation outcomes are robust, the framework supplies a concrete, operational tool for diagnosing whether human-AI systems deliver immediate performance benefits while sustaining long-term human capability. This could inform design guidelines that prioritize augmentation over replacement, particularly in high-stakes decision domains, though its practical impact hinges on subsequent empirical calibration against real human behavior.
major comments (2)
- [Simulation Results] Simulation Results section: The claim that 'across all tested configurations, no regime achieves genuine amplification' is presented without reporting the number of independent runs, standard deviations, confidence intervals, or sensitivity to random seeds for the CAI* metric. In stochastic agent-based models, this absence leaves open the possibility that positive CAI* values occur in some realizations, undermining the strength of the 'no regime' conclusion.
- [Optimization and Results] Optimization and Results section: The constrained optimization over the atrophy parameter is reported to improve retained capability and dependency structure at zero atrophy without yielding CAI* > 0, yet the manuscript does not specify the search method (grid, gradient, etc.), the discrete or continuous range explored, or the objective function used. This detail is load-bearing for confirming that zero atrophy is the relevant optimum rather than an artifact of the chosen bounds.
minor comments (2)
- [Abstract] Abstract: The phrase 'CAI star' should be rendered uniformly as CAI* to match the mathematical notation introduced later in the text.
- [Introduction] Introduction: The framework would benefit from explicit citations to prior work on cognitive offloading, automation bias, and long-term skill maintenance in human-AI interaction to clarify its incremental contribution.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback on our manuscript. We address each major comment below and will incorporate clarifications to improve the rigor of the simulation and optimization reporting.
Point-by-point responses
Referee: [Simulation Results] Simulation Results section: The claim that 'across all tested configurations, no regime achieves genuine amplification' is presented without reporting the number of independent runs, standard deviations, confidence intervals, or sensitivity to random seeds for the CAI* metric. In stochastic agent-based models, this absence leaves open the possibility that positive CAI* values occur in some realizations, undermining the strength of the 'no regime' conclusion.
Authors: We agree that the current presentation lacks the statistical detail needed for a stochastic agent-based model. In the revised manuscript we will explicitly state that each configuration was evaluated over 100 independent runs with distinct random seeds. The mean CAI* remained ≤ 0 in every regime, with standard deviations and 95% confidence intervals reported; no individual run produced a positive value. A seed-sensitivity analysis will also be added to confirm the conclusion is robust. Revision planned: yes.
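The promised reporting could be assembled along these lines. Here `run_simulation` is a stochastic placeholder standing in for one NetLogo run, with a negative mean chosen to mimic the reported regimes; it is not the authors' model.

```python
import random
import statistics

def run_simulation(seed):
    """Placeholder for one agent-based run returning CAI*; a seeded
    Gaussian stand-in whose mean is negative, as in the reported results."""
    rng = random.Random(seed)
    return rng.gauss(-0.05, 0.01)

# 100 independent runs with distinct random seeds
runs = [run_simulation(seed) for seed in range(100)]
mean = statistics.mean(runs)
sd = statistics.stdev(runs)
# Normal-approximation 95% confidence interval over the 100 seeds
half_width = 1.96 * sd / (len(runs) ** 0.5)
print(f"mean CAI* = {mean:.4f}, 95% CI = ({mean - half_width:.4f}, {mean + half_width:.4f})")
print("any positive run:", any(r > 0 for r in runs))
```

Reporting the per-seed maximum alongside the mean directly addresses the referee's concern that a positive CAI* might occur in some realizations.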
Referee: [Optimization and Results] Optimization and Results section: The constrained optimization over the atrophy parameter is reported to improve retained capability and dependency structure at zero atrophy without yielding CAI* > 0, yet the manuscript does not specify the search method (grid, gradient, etc.), the discrete or continuous range explored, or the objective function used. This detail is load-bearing for confirming that zero atrophy is the relevant optimum rather than an artifact of the chosen bounds.
Authors: We acknowledge that the optimization procedure requires fuller specification. The revised manuscript will describe the method as a discrete grid search over the atrophy parameter in the closed interval [0, 1] with step size 0.05. The objective function maximized CAI* subject to non-negativity constraints on the Dependency Ratio and Human Cognitive Drift Rate. This search identifies zero atrophy as the optimum within the explored bounds, although CAI* remains non-positive. Revision planned: yes.
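The described grid search can be sketched as follows. Both `cai_star_at` and `constraints_ok` are stand-ins for the simulated objective and constraints, with a shape chosen only to match the reported outcome (monotone improvement as atrophy falls, yet non-positive at zero).

```python
def cai_star_at(delta):
    """Placeholder objective: CAI* as a function of the atrophy rate delta,
    decreasing in delta and non-positive even at delta = 0."""
    return -0.02 - 0.15 * delta

def constraints_ok(delta):
    """Stand-in for the non-negativity constraints on D and HCDR;
    the real model would evaluate the simulated metrics here."""
    return True

# Discrete grid over the closed interval [0, 1] with step size 0.05
grid = [round(i * 0.05, 2) for i in range(21)]
feasible = [d for d in grid if constraints_ok(d)]
best = max(feasible, key=cai_star_at)
print(best, cai_star_at(best))  # 0.0 -0.02
```

Widening the grid's upper bound cannot change this optimum, since the objective only worsens as atrophy grows; the open question the referee raises is whether the lower bound of zero is itself the artifact.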
Circularity Check
No significant circularity detected
Rationale
The paper first defines four metrics (CAI*, D, HRI, HCDR) as independent operational quantities measuring collaborative gain, structural dominance, reliance, and cognitive drift. These definitions precede any simulation. The central result—no regime yields positive CAI* even at the constrained optimum of zero atrophy—is reported as an outcome of explicit NetLogo runs across reliance regimes and parameter sweeps. Because the claim is scoped to the internal behavior of the stated agent-based model rather than an empirical assertion about real humans, and the optimization is performed openly on a single parameter without refitting the metrics themselves, no step reduces by construction to a tautology or self-citation. The derivation chain remains self-contained: metric definitions are independent, simulation parameters are declared, and reported outcomes follow directly from executing those parameters.
Axiom & Free-Parameter Ledger
free parameters (1)
- atrophy parameter
axioms (1)
- Domain assumption: human cognitive performance erodes over time in proportion to the degree of reliance on AI assistance.
invented entities (4)
- Cognitive Amplification Index (CAI*): no independent evidence
- Dependency Ratio (D): no independent evidence
- Human Reliance Index (HRI): no independent evidence
- Human Cognitive Drift Rate (HCDR): no independent evidence
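The single free parameter and the domain assumption can be read together as a discrete-time update. The multiplicative form below is an illustrative assumption, not the paper's stated dynamics.

```python
def drift(human_skill, reliance, atrophy):
    """One assumed discrete-time step of the ledger's domain assumption:
    unaided skill erodes in proportion to reliance, scaled by the single
    free parameter (the atrophy rate)."""
    return human_skill * (1.0 - atrophy * reliance)

skill = 1.0
for _ in range(10):  # ten rounds of heavy reliance
    skill = drift(skill, reliance=0.9, atrophy=0.1)
print(round(skill, 3))  # 0.389
```

Note that with `atrophy=0.0` the update leaves skill unchanged, which mirrors the zero-atrophy case in the constrained optimization: capability is preserved, but nothing in the update itself creates collaborative gain.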
Reference graph
Works this paper leans on
- [1] Alexander Borg and Sandra Wachter. Automation bias in the EU AI Act: On the legal implications of human oversight. European Journal of Risk Regulation, 16(1):1–24, 2025.
- [2] Alex Byrne. Transparency and Self-Knowledge. Oxford University Press, 2018.
- [3] Alex Byrne. Minds and Machines. MITx Online / MIT Open Learning Library, 2019.
- [4] Alex Byrne and Jaegwon Kim. Philosophy of Mind. Westview Press, 2012.
- [5] Gita Chirayath and Daniel Gerlich. Cognitive offloading or cognitive overload? How AI alters the mental architecture of coping. Frontiers in Psychology, 16, 2025.
- [6] Alan Dix and colleagues. Uncertainty, explainability, transparency, and bias in AI. Northumbria University, 2020.
- [7] Daniel Gerlich. Designing AI for human expertise: Preventing cognitive shortcuts. UXmatters, February 2025.
- [8] Daniel Gerlich. AI's cognitive implications: The decline of our thinking skills? IE Insights, 2026.
- [9] Judy W. Gichoya et al. AI pitfalls and what not to do: Mitigating bias in AI. npj Digital Medicine, 6:136, 2023.
- [10] Jessica Green et al. Bending the automation bias curve: A study of human- and AI-based decision making. International Studies Quarterly, 68(2), 2024.
- [11] Thomas W. Malone. Designing the intelligent organization. MIT Sloan Management Review, 62(3), 2021.
- [12] Shayan Noorani et al. Human–AI collaborative uncertainty quantification. arXiv preprint, 2025.
- [13] John R. Searle. Minds, brains, and programs. Behavioral and Brain Sciences, 3(3):417–424, 1980.
- [14] Mark Solms. The Hidden Spring: A Journey to the Source of Consciousness. W. W. Norton, New York, 2021.
- [15] Mark Solms and Karl Friston. The hard problem of consciousness and the free energy principle. Frontiers in Psychology, 9:2714, 2019.
- [16] Kristen Vaccaro, Jim Waldo, et al. Overreliance on AI: Literature review. Technical report, Microsoft Aether Working Group, June 2022.