Cognitive Agency Surrender: Defending Epistemic Sovereignty via Scaffolded AI Friction
Recognition: 2 Lean theorem links
Pith reviewed 2026-05-15 00:53 UTC · model grok-4.3
The pith
Intentionally designed friction in AI interfaces is a technical requirement to prevent cognitive agency surrender and preserve epistemic sovereignty.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The analysis of AI-HCI literature from 2023 to early 2026 reveals an agentic takeover where defenses of human epistemic sovereignty dropped while optimization for machine agents rose, with frictionless usability dominating. To counter the resulting cognitive agency surrender, the paper theorizes Scaffolded Cognitive Friction that repurposes multi-agent systems to serve as computational Devil's Advocates, injecting germane epistemic tension. This is paired with a multimodal phenotyping approach using gaze entropy, pupillometry, fNIRS, and drift diffusion modeling to separate cognitive effort from outcomes.
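The gaze-entropy component of the phenotyping agenda can be sketched as the Shannon entropy of first-order transitions between areas of interest (AOIs), weighted by each AOI's visit frequency. A minimal stdlib sketch; the function name and AOI labels are illustrative, not taken from the paper's pipeline:

```python
# Sketch of gaze transition entropy: Shannon entropy of the first-order
# transition matrix between areas of interest (AOIs), weighted by each
# AOI's empirical visit frequency. Labels and names are illustrative.
from collections import Counter
from math import log2

def gaze_transition_entropy(aoi_sequence):
    transitions = Counter(zip(aoi_sequence, aoi_sequence[1:]))
    outgoing = Counter(src for src, _ in transitions.elements())
    total = sum(transitions.values())
    entropy = 0.0
    for (src, dst), n in transitions.items():
        p_src = outgoing[src] / total        # stationary weight of source AOI
        p_cond = n / outgoing[src]           # P(next AOI = dst | current = src)
        entropy -= p_src * p_cond * log2(p_cond)
    return entropy

# A rigid A-B-A-B scanpath is fully predictable (entropy 0); a mixed
# scanpath over three AOIs yields strictly higher entropy.
rigid = ["A", "B"] * 20
mixed = ["A", "B", "C", "A", "C", "B", "B", "A", "C", "C"] * 4
```

Lower entropy would indicate a rigid, possibly automation-anchored scanpath; higher entropy indicates more exploratory attention allocation.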
What carries the argument
Scaffolded Cognitive Friction, which uses Multi-Agent Systems as explicit cognitive forcing functions such as computational Devil's Advocates to deliberately introduce epistemic tension and disrupt premature cognitive closure.
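The forcing-function idea above can be sketched as an orchestration constraint: an answer is released only alongside at least one objection. This is a hedged sketch with stub agents standing in for real multi-agent LLM calls; all names are illustrative, not the paper's implementation:

```python
# Hedged sketch of a "computational Devil's Advocate" forcing function.
# The agent functions are stubs; a real system would back them with
# LLM calls inside a multi-agent framework.

def proposer(question: str) -> str:
    # Stub for a fluent assistant agent.
    return f"Draft answer to: {question}"

def devils_advocate(question: str, draft: str) -> list[str]:
    # Stub for a critic agent that must generate epistemic tension.
    return [f"What evidence would falsify the draft for '{question}'?"]

def scaffolded_answer(question: str) -> dict:
    """Release an answer only together with at least one objection,
    so the interface cannot offer frictionless premature closure."""
    draft = proposer(question)
    objections = devils_advocate(question, draft)
    if not objections:
        raise RuntimeError("scaffold requires at least one objection")
    return {"draft": draft, "objections": objections}
```

The design choice is that friction lives in the orchestration layer, not in the individual agents: even a maximally fluent proposer cannot bypass the objection requirement.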
Load-bearing premise
Trends observed in published research papers on AI-HCI directly reflect and influence the actual cognitive behaviors of everyday users interacting with generative AI systems.
What would settle it
A controlled experiment comparing user cognitive outcomes, such as decision quality and bias levels, between zero-friction AI interfaces and those with added scaffolded friction, showing no difference in cognitive agency surrender.
Original abstract
The proliferation of Generative Artificial Intelligence has transformed benign cognitive offloading into a systemic risk of cognitive agency surrender. Driven by the commercial dogma of "zero-friction" design, highly fluent AI interfaces actively exploit human cognitive miserliness, prematurely satisfying the need for cognitive closure and inducing severe automation bias. To empirically quantify this epistemic erosion, we deployed a zero-shot semantic classification pipeline ($\tau=0.7$) on 1,223 high-confidence AI-HCI papers from 2023 to early 2026. Our analysis reveals an escalating "agentic takeover": a brief 2025 surge in research defending human epistemic sovereignty (19.1%) was abruptly suppressed in early 2026 (13.1%) by an explosive shift toward optimizing autonomous machine agents (19.6%), while frictionless usability maintained a structural hegemony (67.3%). To dismantle this trap, we theorize "Scaffolded Cognitive Friction," repurposing Multi-Agent Systems (MAS) as explicit cognitive forcing functions (e.g., computational Devil's Advocates) to inject germane epistemic tension and disrupt heuristic execution. Furthermore, we outline a multimodal computational phenotyping agenda -- integrating gaze transition entropy, task-evoked pupillometry, fNIRS, and Hierarchical Drift Diffusion Modeling (HDDM) -- to mathematically decouple decision outcomes from cognitive effort. Ultimately, intentionally designed friction is not merely a psychological intervention, but a foundational technical prerequisite for enforcing global AI governance and preserving societal cognitive resilience.
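The thresholded labeling step the abstract describes ($\tau=0.7$) can be sketched as multi-label zero-shot classification with abstention. The scoring stub below stands in for whatever semantic model the actual pipeline uses, and the category names paraphrase the abstract's three trends; all of this is an assumption-laden illustration, not the authors' code:

```python
# Illustrative sketch of thresholded zero-shot labeling (tau = 0.7).
# score() is a stub; a real pipeline would return a model confidence
# in [0, 1] for each (abstract, category) pair.
TAU = 0.7

CATEGORIES = ["epistemic sovereignty", "autonomous machine agents",
              "frictionless usability"]

def score(abstract: str, category: str) -> float:
    # Stub keyword match standing in for an NLI/embedding model.
    return 1.0 if category.split()[0] in abstract.lower() else 0.0

def classify(abstract: str, tau: float = TAU) -> list[str]:
    """Keep every category whose confidence clears the threshold;
    multi-label by construction, abstains when nothing clears tau."""
    return [c for c in CATEGORIES if score(abstract, c) >= tau]
```

Raising τ trades coverage for precision, which is why the referee's request for robustness checks across τ values matters.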
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper claims that generative AI promotes cognitive agency surrender through zero-friction interfaces, empirically quantifies an 'agentic takeover' in AI-HCI research via zero-shot classification of 1,223 papers showing declining sovereignty defense and rising autonomous agents, and proposes 'Scaffolded Cognitive Friction' using multi-agent systems as cognitive forcing functions, along with a multimodal phenotyping agenda, arguing that designed friction is essential for global AI governance.
Significance. If the connection between publication trends and real-world user cognitive surrender is established, the work could inform critical discussions on AI interface design and societal resilience. However, the current analysis remains largely theoretical and correlational without direct behavioral evidence.
major comments (2)
- [Abstract / Empirical Analysis] The zero-shot semantic classification pipeline (τ=0.7) on 1,223 papers is presented without validation metrics, inter-rater reliability checks, or robustness tests, making the reported percentages (19.1% → 13.1% for sovereignty defense; 19.6% for autonomous agents; 67.3% for frictionless usability) unreliable as evidence for 'epistemic erosion'.
- [Abstract] The central claim that shifts in AI-HCI publication topics directly indicate and drive real-world cognitive agency surrender among users lacks supporting user studies, behavioral data, or outcome measures linking research output to automation bias or decision-making in deployed systems.
minor comments (1)
- [Abstract] The notation for the threshold τ=0.7 is introduced without prior definition or explanation of its selection.
Simulated Author's Rebuttal
We thank the referee for the constructive comments, which help clarify the scope and evidentiary basis of our work. We address each major point below and will incorporate revisions to strengthen validation and temper causal language while preserving the core theoretical contribution.
Point-by-point responses
-
Referee: [Abstract / Empirical Analysis] The zero-shot semantic classification pipeline (τ=0.7) on 1,223 papers is presented without validation metrics, inter-rater reliability checks, or robustness tests, making the reported percentages (19.1% → 13.1% for sovereignty defense; 19.6% for autonomous agents; 67.3% for frictionless usability) unreliable as evidence for 'epistemic erosion'.
Authors: We acknowledge the absence of validation metrics in the submitted manuscript. In revision, we will add a dedicated validation subsection reporting results from a manually annotated subset of 150 papers (with inter-rater reliability via Cohen's kappa), precision/recall/F1 scores for each category, and robustness analyses varying τ and prompt phrasing. These additions will be placed in the Methods section to support the reported trends. revision: yes
-
Referee: [Abstract] The central claim that shifts in AI-HCI publication topics directly indicate and drive real-world cognitive agency surrender among users lacks supporting user studies, behavioral data, or outcome measures linking research output to automation bias or decision-making in deployed systems.
Authors: We agree that the current analysis is correlational and does not include direct behavioral or user studies. We will revise the abstract, introduction, and discussion to explicitly frame the publication trends as indicators of shifting research priorities rather than direct evidence of user-level cognitive surrender. A new limitations section will note the lack of outcome measures and outline planned future work using the proposed multimodal phenotyping methods to test behavioral effects. revision: partial
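The validation the rebuttal promises, Cohen's kappa between annotators plus per-category precision/recall/F1 against a gold subset, can be computed with stdlib code. A sketch with illustrative labels:

```python
# Sketch of the promised validation metrics: Cohen's kappa for
# inter-rater reliability and per-category precision/recall/F1.
from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two annotators' label lists."""
    n = len(a)
    agree = sum(x == y for x, y in zip(a, b)) / n
    pa, pb = Counter(a), Counter(b)
    chance = sum(pa[k] * pb[k] for k in pa) / n**2
    return (agree - chance) / (1 - chance)

def precision_recall_f1(gold, pred, positive):
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Reporting these per category, rather than overall accuracy alone, would directly address the referee's concern that the headline percentages are unvalidated.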
Circularity Check
No significant circularity; derivation uses literature classification as interpretive evidence rather than reducing to self-referential inputs
Full rationale
The paper's chain starts from a conceptual premise about GenAI offloading risks, applies a zero-shot semantic classification (τ=0.7) to 1,223 papers to report topic percentages (19.1%→13.1% sovereignty defense, 19.6% autonomous agents, 67.3% frictionless usability), and then proposes Scaffolded Cognitive Friction via MAS as a countermeasure, plus a phenotyping agenda. This does not reduce by construction to the inputs: the percentages are outputs of the classification pipeline and serve as motivation for an independent theoretical proposal involving computational forcing functions and multimodal measures (gaze entropy, pupillometry, fNIRS, HDDM). No equations equate the proposed governance prerequisite to the classification results, no parameters are fitted then renamed as predictions, and no self-citations or ansatzes are invoked. The central claim is an argumentative extension, not a definitional or fitted tautology, making the derivation self-contained against external benchmarks.
Axiom & Free-Parameter Ledger
free parameters (1)
- τ = 0.7
axioms (1)
- domain assumption: Generative AI interfaces exploit human cognitive miserliness and the need for cognitive closure
invented entities (1)
- Scaffolded Cognitive Friction (no independent evidence)
Lean theorems connected to this paper
- IndisputableMonolith/Foundation/ArithmeticFromLogic.lean · reality_from_one_distinction · unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "Scaffolded Cognitive Friction... MAS architectures must be radically repurposed... as explicit cognitive forcing functions, deploying engineered 'devil's advocate' mechanisms"
- IndisputableMonolith/Foundation/AlexanderDuality.lean · alexander_duality_circle_linking · unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "Hierarchical Drift Diffusion Models (HDDM)... starting-point bias (z) and the Drift Rate (v)"
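The HDDM passage separates starting-point bias (z) from drift rate (v). A minimal simulation, assuming standard DDM conventions rather than the paper's actual model, shows how z alone shifts choice proportions even when drift is zero:

```python
# Minimal drift-diffusion simulation: starting-point bias z (relative
# position between boundaries) vs. drift rate v (evidence quality).
# Standard DDM conventions assumed; not the paper's fitted model.
import random

def ddm_trial(v, z, a=1.0, dt=1e-3, sigma=1.0, rng=random):
    """Return (choice, rt): choice 1 = upper boundary, 0 = lower."""
    x, t = z * a, 0.0                      # start at relative point z in (0, 1)
    while 0.0 < x < a:
        x += v * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= a else 0), t

def upper_rate(v, z, trials=2000, seed=0):
    rng = random.Random(seed)
    return sum(ddm_trial(v, z, rng=rng)[0] for _ in range(trials)) / trials

# With zero drift, a biased starting point alone favors the upper
# boundary: compare upper_rate(0.0, 0.8) against upper_rate(0.0, 0.2).
```

This decoupling is what lets the phenotyping agenda attribute automation bias to a prior shift (z) rather than degraded evidence accumulation (v).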
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
- [1] Clark, A., Chalmers, D.: The extended mind. Analysis 58(1), 7–19 (1998). https://doi.org/10.1093/analys/58.1.7
- [2] Risko, E.F., et al.: Cognitive offloading. Trends in Cognitive Sciences (2016). https://doi.org/10.1016/j.tics.2016.07.002
- [3] Chiriatti, M., Ganapini, M., Panai, E., Ubiali, M., Riva, G.: The case for human–AI interaction as system 0 thinking. Nature Human Behaviour (2024). https://doi.org/10.1038/s41562-024-01995-5
- [4] Chiriatti, M., Ganapini, M.B., Panai, E., Wiederhold, B.K., Riva, G.: System 0: Transforming artificial intelligence into a cognitive extension. Cyberpsychology, Behavior, and Social Networking 28(7), 534–542 (2025). https://doi.org/10.1089/cyber.2025.0201
- [5] Evans, J.S., Stanovich, K.E.: Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science 8(3), 223–241 (2013). https://doi.org/10.1177/1745691612460685
- [6] Stanovich, K.E.: Miserliness in human cognition: The interaction of detection, override and mindware. Thinking & Reasoning 24(4), 423–444 (2018). https://doi.org/10.1080/13546783.2018.1459314
- [7] Yin, W., Hay, J., Roth, D.: Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 3914–
- [8] Benchmarking Zero-shot Text Classification: Datasets, Evaluation and Entailment Approach. Association for Computational Linguistics, Hong Kong, China (2019). https://doi.org/10.18653/v1/D19-1404. https://aclanthology.org/D19-1404/
- [9] Vidal-Piñeiro, D., Sørensen, Ø., Strømstad, M., Amlien, I.K., Baaré, W.F.C., Bartrés-Faz, D., Brandmaier, A.M., Cattaneo, G., Düzel, S., Ghisletta, P., Henson, R.N., Kühn, S., Lindenberger, U., Mowinckel, A.M., Nyberg, L., Pascual-Leone, Á., Roe, J.M., Solana-Sánchez, J., Solé-Padullés, C., Watne, L.O., Wolfers, T., et al.: Nature Communications 16 (2025). https://doi.org/10.1038/s41467-025-66354-y
- [10]
- [11] Huang, L., Zhu, Q.: Cognitive Security: A System-Scientific Approach. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-30709-6
- [12] Tilbury, J., Flowerday, S.: Automation bias and complacency in security operation centers. Computers 13(7), 165 (2024). https://doi.org/10.3390/computers13070165
- [13] Skitka, L.J., Mosier, K.L., Burdick, M.: Does automation bias decision-making? International Journal of Human-Computer Studies 51(5), 991–1006 (1999). https://doi.org/10.1006/ijhc.1999.0252
- [14] Sweller, J.: Cognitive load during problem solving: Effects on learning. Cognitive Science 12(2), 257–285 (1988). https://doi.org/10.1207/s15516709cog1202_4
- [15] Bjork, E.L., Bjork, R.A.: Making Things Hard on Yourself, but in a Good Way: Creating Desirable Difficulties to Enhance Learning, pp. 56–64. Worth Publishers, NY (2011)
- [16] Liu, Y., Cao, J., Li, Z., He, R., Tan, T.: Breaking mental set to improve reasoning through diverse multi-agent debate. In: The Thirteenth International Conference on Learning Representations (2025)
- [17] Nemeth, C.J.: Differential contributions of majority and minority influence. Psychological Review 93(1), 23–32 (1986). https://doi.org/10.1037/0033-295X.93.1.23
- [18] Schurr, R., Reznik, D., Hillman, H., et al.: Dynamic computational phenotyping of human cognition. Nature Human Behaviour 8, 917–931 (2024). https://doi.org/10.1038/s41562-024-01814-x
- [19] Cui, Z., Sato, T., Jackson, A., et al.: Gaze transition entropy as a measure of attention allocation in a dynamic workspace involving automation. Scientific Reports 14, 23405 (2024). https://doi.org/10.1038/s41598-024-74244-4
- [20] Sánchez Pacheco, T., Nolte, D., König, S.U., Pipa, G., König, P.: Beyond the first glance: How human presence enhances visual entropy and promotes spatial learning. PLoS Computational Biology 22(1), 1013173 (2026). https://doi.org/10.1371/journal.pcbi.1013173
- [21] Steyvers, M., Tejeda, H., Kerrigan, G., Smyth, P.: Bayesian modeling of human–AI complementarity. Proc. Natl. Acad. Sci. U.S.A. 119(11), e2111547119 (2022). https://doi.org/10.1073/pnas.2111547119
- [22] Lee, M.D., Wagenmakers, E.-J.: Bayesian Cognitive Modeling: A Practical Course. Cambridge University Press, NY (2014)
- [23] Parasuraman, R., Riley, V.: Humans and automation: Use, misuse, disuse, abuse. Human Factors 39(2), 230–253 (1997). https://doi.org/10.1518/001872097778543886
- [24] Kahneman, D.: Thinking, Fast and Slow. Farrar, Straus and Giroux, NY (2011)
- [25] Alter, A.L., Oppenheimer, D.M.: Uniting the tribes of fluency to form a metacognitive nation. Personality and Social Psychology Review, 219–235 (2009). https://doi.org/10.1177/1088868309341564
- [26] Ackerman, R., Thompson, V.A.: Meta-reasoning: Monitoring and control of thinking and reasoning. Trends in Cognitive Sciences, 607–617 (2017). https://doi.org/10.1016/j.tics.2017.05.004
- [27] Kaiser, C., Kaiser, J., Schallner, R., Schneider, S.: A new era of online search? A large-scale study of user behavior and personal preferences during practical search tasks with generative AI versus traditional search engines. In: Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, pp. 1–7. ACM, Yokohama, ...
- [28] Klein, C.R., Klein, R.: The extended hollowed mind: why foundational knowledge is indispensable in the age of AI. Front. Artif. Intell. 8, 1719019 (2025). https://doi.org/10.3389/frai.2025.1719019
- [29] Zhai, N., Ma, X., Ding, X.: Unpacking AI chatbot dependency: A dual-path model of cognitive and affective mechanisms. Information 16(12) (2025). https://doi.org/10.3390/info16121025
- [30] Chromik, M., Eiband, M., Buchner, F., Krüger, A., Butz, A.: I think I get your point, AI! The illusion of explanatory depth in explainable AI. In: 26th International Conference on Intelligent User Interfaces, pp. 307–317. Association for Computing Machinery, USA (2021). https://doi.org/10.1145/3397481.3450644
- [31] Rozenblit, L., Keil, F.: The misunderstood limits of folk science: an illusion of explanatory depth. Cognitive Science 26(5), 521–562 (2002). https://doi.org/10.1207/s15516709cog2605_1
- [32] Kruglanski, A.W., Webster, D.M.: Motivated closing of the mind: "seizing" and "freezing". Psychological Review, 263–283 (1996). https://doi.org/10.1037/0033-295x.103.2.263
- [33] Lombrozo, T.: The structure and function of explanations. Trends in Cognitive Sciences, 464–470 (2006). https://doi.org/10.1016/j.tics.2006.08.004
- [34] Du, Y., Li, S., Torralba, A., Tenenbaum, J.B., Mordatch, I.: Improving factuality and reasoning in language models through multiagent debate. In: ICML'24: Proceedings of the 41st International Conference on Machine Learning (2024)
- [35] Liang, T., He, Z., Jiao, W., Wang, X., Wang, Y., Wang, R., Yang, Y., Shi, S., Tu, Z.: Encouraging divergent thinking in large language models through multi-agent debate. In: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pp. 17889–17904 (2024). https://doi.org/10.18653/v1/2024.emnlp-main.992
- [36] Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie, A., Mirhoseini, A., McKinnon, C., Chen, C., Olsson, C., Olah, C., Hernandez, D., Drain, D., Ganguli, D., Li, D., Tran-Johnson, E., Perez, E., Kerr, J., Mueller, J., Ladish, J., Landau, J., Ndousse, K., Lukošiūtė, K., Lovitt, L., Sellitto, M., Elhage, N., Schiefer, N., et al.: Constitutional AI: Harmlessness from AI Feedback (2022). https://doi.org/10.48550/arXiv.2212.08073
- [37] Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E.H., Zhou, D.: Self-Consistency Improves Chain of Thought Reasoning in Language Models (2022). https://doi.org/10.48550/arXiv.2203.11171
- [38] Schulz-Hardt, S., Frey, D., Lüthgens, C., Moscovici, S.: Biased information search in group decision making. Journal of Personality and Social Psychology, 655–669 (2000). https://doi.org/10.1037/0022-3514.78.4.655
- [39] Sharma, M., Tong, M., Korbak, T., Duvenaud, D., Askell, A., Bowman, S.R., Cheng, N., Durmus, E., Hatfield-Dodds, Z., Johnston, S.R., Kravec, S., Maxwell, T., McCandlish, S., Ndousse, K., Rausch, O., Schiefer, N., Yan, D., Zhang, M., Perez, E.: Towards Understanding Sycophancy in Language Models (2025). https://doi.org/10.48550/arXiv.2310.13548
- [40] Perez, E., Ringer, S., Lukosiute, K., Nguyen, K., Chen, E., Heiner, S., Pettit, C., Olsson, C., Kundu, S., Kadavath, S., Jones, A., Chen, A., Mann, B., Israel, B., Seethor, B., McKinnon, C., Olah, C., Yan, D., Amodei, D., Amodei, D., Drain, D., Li, D., Tran-Johnson, E., Khundadze, G., Kernion, J., Landis, J., Kerr, J., Mueller, J., Hyun, J., Landau, J., et al.: In: Findings of the Association for Computational Linguistics: ACL 2023
- [41] Shumailov, I., Shumaylov, Z., Zhao, Y., Papernot, N., Anderson, R., Gal, Y.: AI models collapse when trained on recursively generated data. Nature (2024). https://doi.org/10.1038/s41586-024-07566-y
- [42] Parasuraman, R., Sheridan, T.B., Wickens, C.D.: A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans 30(3), 286–297 (2000). https://doi.org/10.1109/3468.844354
- [43] Inzlicht, M., Shenhav, A., Olivola, C.Y.: The effort paradox: Effort is both costly and valued. Trends in Cognitive Sciences (2018). https://doi.org/10.1016/j.tics.2018.01.007
- [44] Santoni de Sio, F., van den Hoven, J.: Meaningful human control over autonomous systems: A philosophical account. Frontiers in Robotics and AI 5, 15 (2018). https://doi.org/10.3389/frobt.2018.00015
- [45] Kaesberg, L.B., Becker, J., Wahle, J.P., Ruas, T., Gipp, B.: Voting or consensus? Decision-making in multi-agent debate. In: Findings of the Association for Computational Linguistics: ACL 2025 (2025). https://doi.org/10.18653/v1/2025.findings-acl.606
- [46] Katsenou, R., Kotsidis, K., Papadopoulou, A., Anastasiadis, P., Deliyannis, I.: Beyond assistance: Embracing AI as a collaborative co-agent in education. Education Sciences 15(8), 1006 (2025). https://doi.org/10.3390/educsci15081006
- [47] Hao, X., Demir, E., Eyers, D.: Beyond human-in-the-loop: Sensemaking between artificial intelligence and human intelligence collaboration. Sustainable Futures 10, 101152 (2025). https://doi.org/10.1016/j.sftr.2025.101152
- [48] Lee, S., Hwang, S., Kim, D., Lee, K.: Conversational agents as catalysts for critical thinking: Challenging social influence in group decision-making. In: Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, pp. 1–12. ACM, Yokohama, Japan (2025). https://doi.org/10.1145/3706599.3719792
- [49] Friston, K.: The free-energy principle: a unified brain theory? Nature Reviews Neuroscience 11(2), 127 (2010). https://doi.org/10.1038/nrn2787
- [50] Festinger, L.: A Theory of Cognitive Dissonance. Stanford University Press, Stanford, California (1957). https://doi.org/10.1515/9781503620766
- [51] Botvinick, M.M., Braver, T.S., Barch, D.M., Carter, C.S., Cohen, J.D.: Conflict monitoring and cognitive control. Psychological Review 108, 624–652 (2001). https://doi.org/10.1037/0033-295X.108.3.624
- [52] Gerlich, M.: From offloading to engagement: An experimental study on structured prompting and critical reasoning with generative AI. Data 10, 172 (2025). https://doi.org/10.3390/data10110172
- [53] Zhu, Z., Yu, J., Luo, Y.: Scaffolding metacognition with GenAI: Exploring design opportunities to support task management for university students with ADHD. In: Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems (CHI '26), Barcelona, Spain (2026). https://doi.org/10.1145/3772318.3790697
- [54] Parasuraman, R., Manzey, D.H.: Complacency and bias in human use of automation: an attentional integration. Human Factors, 381–410 (2010). https://doi.org/10.1177/0018720810376055
- [55] Beatty, J.: Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychological Bulletin, 276–292 (1982). https://doi.org/10.1037/0033-2909.91.2.276
- [56] Joshi, S., Li, Y., Kalwani, R.M., Gold, J.I.: Relationships between pupil diameter and neuronal activity in the locus coeruleus, colliculi, and cingulate cortex. Neuron, 221–234 (2015). https://doi.org/10.1016/j.neuron.2015.11.028
- [57] Ratcliff, R., McKoon, G.: The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation, 873–922 (2009). https://doi.org/10.1162/neco.2008.12-06-420
- [58] Wiecki, T.V., Sofer, I., Frank, M.J.: HDDM: Hierarchical Bayesian estimation of the drift-diffusion model in Python. Frontiers in Neuroinformatics (2013). https://doi.org/10.3389/fninf.2013.00014
- [59] Eltető, N., Nemeth, D., Janacsek, K., Dayan, P.: Tracking human skill learning with a hierarchical Bayesian sequence model. PLoS Comput Biol 18(11), 1009866 (2022). https://doi.org/10.1371/journal.pcbi.1009866
- [60] Voss, A., Rothermund, K., Voss, J.: Interpreting the parameters of the diffusion model: An empirical validation. Memory & Cognition 32(7), 1206–1220 (2004). https://doi.org/10.3758/BF03196893
- [61] Mulder, M.J., Wagenmakers, E.-J., Ratcliff, R., Boekel, W., Forstmann, B.U.: Bias in the brain: A diffusion model analysis of prior probability and potential payoff. The Journal of Neuroscience 32(7), 2335–2343 (2012). https://doi.org/10.1523/JNEUROSCI.4156-11.2012
- [62] Dutilh, G., Vandekerckhove, J., Tuerlinckx, F., Wagenmakers, E.-J.: A diffusion model decomposition of the practice effect. Psychonomic Bulletin & Review 16, 1026–1036 (2009). https://doi.org/10.3758/16.6.1026
- [63] Byrne, E.A., Parasuraman, R.: Psychophysiology and adaptive automation. Biological Psychology 42(3), 249–268 (1996). https://doi.org/10.1016/0301-0511(95)05161-9
- [64] Aricò, P., Borghini, G., Di Flumeri, G., Colosimo, A., Bonelli, S., Golfetti, A., Pozzi, S., Imbert, J.-P., Granger, G., Benhacene, R.: Adaptive automation triggered by EEG-based mental workload index: A passive brain-computer interface application in realistic air traffic control environment. Frontiers in Human Neuroscience 10 (2016). https://doi.org/10.3389/fnhum.2016.00539
- [65] Reid, A., O'Callaghan, S., Carroll, L., Caetano, T.: Risk Analysis Techniques for Governed LLM-based Multi-Agent Systems (2025). https://doi.org/10.48550/arXiv.2508.05687
- [66] Liu, W., Qin, J., Huang, X., Zeng, X., Xi, Y., Lin, J., Wu, C., Wang, Y., Shang, L., Tang, R., Lian, D., Yu, Y., Zhang, W.: Position: The Real Barrier to LLM Agent Usability is Agentic ROI (2026). https://doi.org/10.48550/arXiv.2505.17767
- [67] OECD: Survey of Adult Skills 2023 technical report. Technical report, OECD Publishing, Paris (2025). https://doi.org/10.1787/80d9f692-en
- [68] Bleher, H., Braun, M.: Diffused responsibility: attributions of responsibility in the use of AI-driven clinical decision support systems. AI Ethics 2(4), 747–761 (2022). https://doi.org/10.1007/s43681-022-00135-x
- [69] Faas, C., Uth, R., Sterz, S., Langer, M., Feit, A.M.: Don't blame me: How Intelligent Support Affects Moral Responsibility in Human Oversight (2026). https://doi.org/10.48550/arXiv.2602.10701
- [70] Elish, M.C.: Moral crumple zones: Cautionary tales in human-robot interaction. Engaging Science, Technology, and Society (2019)
- [71] Jiang, J., Naumov, P.: Responsibility Gap and Diffusion in Sequential Decision-Making Mechanisms (2025). https://doi.org/10.48550/arXiv.2507.02582
- [72] Alam, S., Altiparmak, Z.: XAI-CF: examining the role of explainable artificial intelligence in cyber forensics. Engineering Applications of Artificial Intelligence 167(3), 113892 (2026). https://doi.org/10.1016/j.engappai.2026.113892
- [73] Vaccari, C., Chadwick, A.: Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society 6(1) (2020). https://doi.org/10.1177/2056305120903408
- [74] Pennycook, G., Rand, D.G.: The psychology of fake news. Trends in Cognitive Sciences 25(5), 388–402 (2021). https://doi.org/10.1016/j.tics.2021.02.007
- [75] Endsley, M.R.: From here to autonomy: Lessons learned from human–automation research. Human Factors 59(1), 5–27 (2017). https://doi.org/10.1177/0018720816681350
- [76] Saadeh, M.I., Janhonen, J., Beer, E., et al.: Automation complacency: risks of abdicating medical decision making. AI Ethics 5, 5783–5793 (2025). https://doi.org/10.1007/s43681-025-00825-2
- [77] Vicente, L., Matute, H., Fregosi, C., et al.: Machine learning systems as mentors in human learning: A user study on machine bias transmission in medical training. International Journal of Human-Computer Studies 198, 103474 (2025). https://doi.org/10.1016/j.ijhcs.2025.103474
- [78] Metikoš, L., Van Domselaar, I.: Procedural justice and judicial AI: Substantiating explainability rights with values of contestation. Journal of Human-Technology Relations 3(1), 1–34 (2025). https://doi.org/10.59490/jhtr.2025.3.8163
- [79] Proctor, R.N., Schiebinger, L. (eds.): Agnotology: The Making and Unmaking of Ignorance. Stanford University Press, Stanford, California (2008)
- [80] Zipf, G.K.: Human Behavior and the Principle of Least Effort. Addison-Wesley Press, MA (1949)
discussion (0)