pith. machine review for the scientific record.

arxiv: 2603.21735 · v2 · submitted 2026-03-23 · 💻 cs.HC · cs.AI

Recognition: 2 Lean theorem links

Cognitive Agency Surrender: Defending Epistemic Sovereignty via Scaffolded AI Friction

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 00:53 UTC · model grok-4.3

classification 💻 cs.HC cs.AI
keywords cognitive agency surrender · epistemic sovereignty · scaffolded cognitive friction · multi-agent systems · automation bias · AI governance · cognitive offloading

The pith

Intentionally designed friction in AI interfaces is a technical requirement to prevent cognitive agency surrender and preserve epistemic sovereignty.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper claims that zero-friction designs in generative AI encourage users to offload thinking, leading to automation bias and loss of cognitive control. By examining trends in over 1,200 AI-HCI papers, it documents a recent decline in research supporting human sovereignty in favor of autonomous agents. The authors introduce Scaffolded Cognitive Friction as a solution, where multi-agent systems act as forcing functions to create beneficial epistemic tension. They also propose using physiological measures like eye tracking and brain imaging to quantify this cognitive effort separately from decision results. If this holds, AI systems must incorporate such friction as a core governance feature to sustain societal thinking capacity.

Core claim

The analysis of AI-HCI literature from 2023 to early 2026 reveals an agentic takeover where defenses of human epistemic sovereignty dropped while optimization for machine agents rose, with frictionless usability dominating. To counter the resulting cognitive agency surrender, the paper theorizes Scaffolded Cognitive Friction that repurposes multi-agent systems to serve as computational Devil's Advocates, injecting germane epistemic tension. This is paired with a multimodal phenotyping approach using gaze entropy, pupillometry, fNIRS, and drift diffusion modeling to separate cognitive effort from outcomes.
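The classification step can be made concrete. A minimal sketch of how a confidence threshold like the paper's τ=0.7 could gate zero-shot label scores into "high-confidence" category shares, assuming an upstream model has already produced per-category scores in [0, 1] (the papers, labels, and scores below are illustrative, not the paper's data):

```python
# Sketch: gate zero-shot label scores with a confidence threshold tau,
# then report category shares over the retained "high-confidence" papers.
# Scores and category names are illustrative, not the paper's data.

TAU = 0.7  # confidence threshold, mirroring the paper's tau = 0.7

def categorize(papers, tau=TAU):
    """papers: list of dicts mapping category -> zero-shot score in [0, 1].
    This sketch keeps only the argmax label, and only when it clears tau;
    a multi-label variant would keep every label above tau."""
    kept = []
    for scores in papers:
        label, best = max(scores.items(), key=lambda kv: kv[1])
        if best >= tau:
            kept.append(label)
    total = len(kept)
    return {lab: kept.count(lab) / total for lab in set(kept)}, total

papers = [
    {"sovereignty": 0.91, "agentic": 0.05},
    {"sovereignty": 0.40, "agentic": 0.55},   # best score below tau: dropped
    {"agentic": 0.83, "usability": 0.12},
    {"usability": 0.76, "agentic": 0.20},
]
shares, n = categorize(papers)
# n == 3; shares are fractions of the retained high-confidence set
```

The referee's objection below applies exactly here: nothing in this step validates that the scores mean what the labels say.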

What carries the argument

Scaffolded Cognitive Friction, which uses Multi-Agent Systems as explicit cognitive forcing functions such as computational Devil's Advocates to deliberately introduce epistemic tension and disrupt premature cognitive closure.
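The paper specifies this mechanism only at the conceptual level. One way such a forcing function could be wired, with the agents stubbed out (a sketch under assumptions, not the authors' implementation; real systems would back both roles with LLM calls):

```python
# Sketch: a critic agent as a cognitive forcing function. The responder's
# draft is withheld until the user has acknowledged at least one objection,
# injecting friction before premature cognitive closure.
# Both agents are stubs; names and messages are illustrative.

def responder(question):
    return f"Draft answer to: {question}"

def devils_advocate(draft):
    # A real critic would generate a substantive counter-argument.
    return f"Objection: what evidence would falsify '{draft}'?"

def scaffolded_answer(question, acknowledge):
    """acknowledge: callback the user must invoke on the objection
    before the draft is released (the 'friction' step)."""
    draft = responder(question)
    objection = devils_advocate(draft)
    if not acknowledge(objection):
        raise RuntimeError("objection not addressed; answer withheld")
    return draft, objection

answer, objection = scaffolded_answer(
    "Is friction beneficial?", acknowledge=lambda obj: len(obj) > 0)
```

The design choice is that the objection sits on the critical path: the interface cannot degrade into zero-friction delivery without removing the critic.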

Load-bearing premise

Trends observed in published research papers on AI-HCI directly reflect and influence the actual cognitive behaviors of everyday users interacting with generative AI systems.

What would settle it

A controlled experiment comparing user cognitive outcomes, such as decision quality and bias levels, between zero-friction AI interfaces and those with added scaffolded friction. A finding of no difference in cognitive agency surrender would undermine the claim; a reliable advantage for the friction condition would support it.
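The primary analysis of such an experiment could be sketched as a permutation test on a per-participant decision-quality score. All numbers below are synthetic placeholders, not experimental data:

```python
# Sketch: permutation test for a difference in mean decision quality
# between a zero-friction and a scaffolded-friction condition.
# Scores are synthetic illustration only.
import random

zero_friction = [0.62, 0.58, 0.71, 0.55, 0.60, 0.64]
scaffolded    = [0.70, 0.74, 0.69, 0.77, 0.72, 0.68]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(scaffolded) - mean(zero_friction)

def permutation_p(a, b, n_perm=10_000, seed=0):
    """One-sided p-value: how often a random relabeling of the pooled
    scores produces a difference at least as large as the observed one."""
    rng = random.Random(seed)
    pooled, k = a + b, len(a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if mean(pooled[k:]) - mean(pooled[:k]) >= observed:
            count += 1
    return count / n_perm

p = permutation_p(zero_friction, scaffolded)
# A small p favors a friction benefit; p near 0.5 or above favors the null.
```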

read the original abstract

The proliferation of Generative Artificial Intelligence has transformed benign cognitive offloading into a systemic risk of cognitive agency surrender. Driven by the commercial dogma of "zero-friction" design, highly fluent AI interfaces actively exploit human cognitive miserliness, prematurely satisfying the need for cognitive closure and inducing severe automation bias. To empirically quantify this epistemic erosion, we deployed a zero-shot semantic classification pipeline ($\tau=0.7$) on 1,223 high-confidence AI-HCI papers from 2023 to early 2026. Our analysis reveals an escalating "agentic takeover": a brief 2025 surge in research defending human epistemic sovereignty (19.1%) was abruptly suppressed in early 2026 (13.1%) by an explosive shift toward optimizing autonomous machine agents (19.6%), while frictionless usability maintained a structural hegemony (67.3%). To dismantle this trap, we theorize "Scaffolded Cognitive Friction," repurposing Multi-Agent Systems (MAS) as explicit cognitive forcing functions (e.g., computational Devil's Advocates) to inject germane epistemic tension and disrupt heuristic execution. Furthermore, we outline a multimodal computational phenotyping agenda -- integrating gaze transition entropy, task-evoked pupillometry, fNIRS, and Hierarchical Drift Diffusion Modeling (HDDM) -- to mathematically decouple decision outcomes from cognitive effort. Ultimately, intentionally designed friction is not merely a psychological intervention, but a foundational technical prerequisite for enforcing global AI governance and preserving societal cognitive resilience.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper claims that generative AI promotes cognitive agency surrender through zero-friction interfaces, empirically quantifies an 'agentic takeover' in AI-HCI research via zero-shot classification of 1,223 papers showing declining sovereignty defense and rising autonomous agents, and proposes 'Scaffolded Cognitive Friction' using multi-agent systems as cognitive forcing functions, along with a multimodal phenotyping agenda, arguing that designed friction is essential for global AI governance.

Significance. If the connection between publication trends and real-world user cognitive surrender is established, the work could inform critical discussions on AI interface design and societal resilience. However, the current analysis remains largely theoretical and correlational without direct behavioral evidence.

major comments (2)
  1. [Abstract / Empirical Analysis] The zero-shot semantic classification pipeline (τ=0.7) on 1,223 papers is presented without validation metrics, inter-rater reliability checks, or robustness tests, making the reported percentages (19.1% → 13.1% for sovereignty defense; 19.6% for autonomous agents; 67.3% for frictionless usability) unreliable as evidence for 'epistemic erosion'.
  2. [Abstract] The central claim that shifts in AI-HCI publication topics directly indicate and drive real-world cognitive agency surrender among users lacks supporting user studies, behavioral data, or outcome measures linking research output to automation bias or decision-making in deployed systems.
minor comments (1)
  1. [Abstract] The notation for the threshold τ=0.7 is introduced without prior definition or explanation of its selection.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments, which help clarify the scope and evidentiary basis of our work. We address each major point below and will incorporate revisions to strengthen validation and temper causal language while preserving the core theoretical contribution.

read point-by-point responses
  1. Referee: [Abstract / Empirical Analysis] The zero-shot semantic classification pipeline (τ=0.7) on 1,223 papers is presented without validation metrics, inter-rater reliability checks, or robustness tests, making the reported percentages (19.1% → 13.1% for sovereignty defense; 19.6% for autonomous agents; 67.3% for frictionless usability) unreliable as evidence for 'epistemic erosion'.

    Authors: We acknowledge the absence of validation metrics in the submitted manuscript. In revision, we will add a dedicated validation subsection reporting results from a manually annotated subset of 150 papers (with inter-rater reliability via Cohen's kappa), precision/recall/F1 scores for each category, and robustness analyses varying τ and prompt phrasing. These additions will be placed in the Methods section to support the reported trends. revision: yes

  2. Referee: [Abstract] The central claim that shifts in AI-HCI publication topics directly indicate and drive real-world cognitive agency surrender among users lacks supporting user studies, behavioral data, or outcome measures linking research output to automation bias or decision-making in deployed systems.

    Authors: We agree that the current analysis is correlational and does not include direct behavioral or user studies. We will revise the abstract, introduction, and discussion to explicitly frame the publication trends as indicators of shifting research priorities rather than direct evidence of user-level cognitive surrender. A new limitations section will note the lack of outcome measures and outline planned future work using the proposed multimodal phenotyping methods to test behavioral effects. revision: partial
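The validation the authors promise (Cohen's kappa on a hand-annotated subset, plus per-category precision/recall/F1) is standard machinery. A minimal sketch with illustrative labels, not the paper's annotated subset:

```python
# Sketch: inter-rater agreement (Cohen's kappa) and per-category F1
# against a hand-annotated subset. Labels below are illustrative.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """(observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum(ca[lab] * cb[lab] for lab in ca) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

def f1(gold, pred, label):
    tp = sum(g == p == label for g, p in zip(gold, pred))
    fp = sum(p == label != g for g, p in zip(gold, pred))
    fn = sum(g == label != p for g, p in zip(gold, pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = ["sov", "agent", "usab", "sov", "agent", "usab"]  # human annotator
pred = ["sov", "agent", "usab", "agent", "agent", "usab"]  # pipeline
kappa = cohens_kappa(gold, pred)
f1_sov = f1(gold, pred, "sov")
```

Reporting these per category, alongside a sweep over τ, is what the rebuttal commits to; without them the trend percentages remain unanchored.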

Circularity Check

0 steps flagged

No significant circularity: the derivation uses literature classification as interpretive evidence rather than reducing to self-referential inputs.

full rationale

The paper's chain starts from a conceptual premise about GenAI offloading risks, applies a zero-shot semantic classification (τ=0.7) to 1,223 papers to report topic percentages (19.1%→13.1% sovereignty defense, 19.6% autonomous agents, 67.3% frictionless usability), and then proposes Scaffolded Cognitive Friction via MAS as a countermeasure, plus a phenotyping agenda. This does not reduce by construction to the inputs: the percentages are outputs of the classification pipeline and serve as motivation for an independent theoretical proposal involving computational forcing functions and multimodal measures (gaze entropy, pupillometry, fNIRS, HDDM). No equations equate the proposed governance prerequisite to the classification results, no parameters are fitted then renamed as predictions, and no self-citations or ansatzes are invoked. The central claim is an argumentative extension, not a definitional or fitted tautology, making the derivation self-contained against external benchmarks.
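Of the four proposed measures, gaze transition entropy is the most self-contained: one simple variant takes the Shannon entropy of first-order fixation transitions between areas of interest (AOIs). The fixation sequences below are illustrative, not eye-tracking data, and the literature also uses conditional-entropy variants:

```python
# Sketch: gaze transition entropy over areas of interest (AOIs).
# Higher entropy = less stereotyped scanning, read here as a proxy for
# active information search rather than fixation on the AI's answer.
from collections import Counter
from math import log2

def transition_entropy(fixations):
    """Shannon entropy (bits) of the first-order transition distribution."""
    pairs = list(zip(fixations, fixations[1:]))
    counts = Counter(pairs)
    total = len(pairs)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Stereotyped scanning: the gaze bounces between two AOIs only.
low = transition_entropy(["answer", "accept"] * 10)
# Distributed scanning across the answer, sources, and a critic's objection.
high = transition_entropy(
    ["answer", "source", "critic", "answer", "critic", "source",
     "answer", "source", "answer", "critic"])
# low < high: the distributed scan carries more transition entropy.
```

This is the sense in which the measure decouples effort from outcome: two users can accept the same answer while producing very different scan-path entropies.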

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 1 invented entity

The central claim rests on untested assumptions about human cognitive miserliness and the interpretation of publication trends as evidence of societal epistemic erosion; the new concept of scaffolded cognitive friction is introduced without independent empirical grounding.

free parameters (1)
  • tau = 0.7
    Threshold value of 0.7 used in the zero-shot semantic classification pipeline to select high-confidence papers.
axioms (1)
  • domain assumption Generative AI interfaces exploit human cognitive miserliness and the need for cognitive closure
    Invoked in the abstract to explain automation bias and agency surrender.
invented entities (1)
  • Scaffolded Cognitive Friction no independent evidence
    purpose: Repurposing multi-agent systems as explicit cognitive forcing functions to inject epistemic tension
    Newly theorized mechanism presented as a technical solution to the identified risk.

pith-pipeline@v0.9.0 · 5575 in / 1413 out tokens · 55284 ms · 2026-05-15T00:53:25.700074+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches: The paper's claim is directly supported by a theorem in the formal canon.
supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses: The paper appears to rely on the theorem as machinery.
contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

105 extracted references · 105 canonical work pages · 3 internal anchors
