Brainrot: Deskilling and Addiction are Overlooked AI Risks
Pith reviewed 2026-05-07 13:25 UTC · model grok-4.3
The pith
AI safety research has largely ignored how generative AI can deskill users through cognitive offloading and foster addiction through dependence.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that two families of risk receive little or no attention in the AI safety and alignment literature: deskilling, driven by cognitive offloading and the atrophy of critical thinking, and addiction, driven by attachment to and dependence on GenAI systems. This neglect persists even though public conversation already treats threats to cognition, mental health, and overall welfare as major concerns. The paper quantifies the discrepancy and offers initial ideas for bringing cognitive and mental health issues into safety work, including information campaigns and regulation as mitigation tools.
What carries the argument
The measured discrepancy between the narrow scope of existing AI safety literature (discrimination, harmful content, information hazards, malicious uses) and the wider public focus on cognitive offloading and GenAI dependence.
If this is right
- Safety and alignment research should expand its scope to include measurement and mitigation of cognitive offloading effects.
- Information campaigns could raise awareness of dependence risks and encourage balanced use.
- Regulation could set standards for design features that reduce attachment or offloading harms.
- Addressing these issues would require new methods for tracking real-world cognitive and mental health impacts.
Where Pith is reading between the lines
- If these risks are real, user interfaces might need built-in prompts that encourage independent thinking rather than automatic acceptance of outputs.
- The gap suggests AI safety could benefit from closer collaboration with fields that already study technology effects on attention and skill retention.
- Quantifying the discrepancy might lead to updated risk taxonomies that treat individual capability erosion as comparable to other harms.
Load-bearing premise
Deskilling through cognitive offloading and addiction through attachment to generative AI are real, measurable harms that belong inside the proper scope of AI safety research.
What would settle it
A systematic survey of recent AI safety papers that finds substantial coverage of cognitive deskilling or addiction risks, or controlled studies showing that typical GenAI use produces neither measurable loss of critical thinking nor dependence.
Original abstract
The scope of AI safety and alignment work in generative artificial intelligence (GenAI) has so far mostly been limited to harms related to: (a) discrimination and hate speech, (b) harmful/inappropriate (violent, sexual, illegal) content, (c) information hazards, and (d) use cases related to malicious actors, such as cybersecurity, child abuse, and chemical, biological, radiological, and nuclear threats. The public conversation around AI, on the other hand, has also been focusing on threats to our cognition, mental health, and welfare at large, related to over-relying on new technologies, most recently, those related to GenAI. Examples include deskilling associated with cognitive offloading and the atrophy of critical thinking as a result of over-reliance on GenAI systems, and addiction associated with attachment and dependence on GenAI systems. Such risks are rarely addressed, if at all, in the AI safety and alignment literature. In this paper, we highlight and quantify this discrepancy and discuss some initial thoughts on how safety and alignment work could address cognitive and mental health concerns. Finally, we discuss how information campaigns and regulation can be used to mitigate such prominent risks.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper claims that AI safety and alignment research on generative AI has focused narrowly on discrimination/hate speech, harmful content, information hazards, and malicious actor use cases, while largely ignoring public concerns around cognitive deskilling (via over-reliance and atrophy of critical thinking) and addiction/dependence on GenAI systems. It asserts and aims to quantify this discrepancy, then offers initial thoughts on incorporating cognitive/mental-health risks into safety work plus mitigation via information campaigns and regulation.
Significance. If the discrepancy claim holds under systematic scrutiny, the paper could usefully expand AI safety priorities to include measurable societal harms to cognition and mental health that scale with GenAI adoption. The mitigation discussion provides a starting point for interdisciplinary work, though its value depends on first establishing the gap with reproducible evidence.
major comments (2)
- [Abstract] The assertion that deskilling and addiction risks 'are rarely addressed, if at all' supplies no counts, search methodology, inclusion criteria, or data, so the central discrepancy claim rests on an unshown analysis.
- [Quantification / literature-scope discussion] The high-level contrast between standard AI safety topics (discrimination, misuse, etc.) and public concerns does not constitute a reproducible literature review; explicit keyword counts, citation analysis, or a defined scope of 'AI safety and alignment literature' are required to support the rarity quantification.
minor comments (2)
- [Title] The title term 'brainrot' is informal and should be defined or replaced with a more precise academic descriptor for the target audience.
- [Mitigation discussion] Suggestions for regulation and information campaigns would benefit from references to analogous efforts in other domains (e.g., social media addiction research) to strengthen the practical recommendations.
Simulated Author's Rebuttal
We thank the referee for their constructive comments, which identify important ways to strengthen the empirical support for our central claim. We agree that the discrepancy between AI safety literature and public concerns on cognitive risks requires more explicit, reproducible evidence, and we will revise the manuscript to address this.
Point-by-point responses
-
Referee: [Abstract] The assertion that deskilling and addiction risks 'are rarely addressed, if at all' supplies no counts, search methodology, inclusion criteria, or data, so the central discrepancy claim rests on an unshown analysis.
Authors: We acknowledge that the abstract states the claim without detailing the underlying analysis. The body of the manuscript discusses the scope of AI safety work by contrasting it with public discourse on GenAI risks, based on our review of key publications. To address this concern directly, we will revise the abstract to briefly outline the literature scope examined (e.g., major AI safety venues and reports from 2020 onward), note the search approach for terms related to cognitive deskilling and addiction, and reference the approximate counts or absence of coverage that support the 'rarely addressed' assertion. revision: yes
-
Referee: [Quantification / literature-scope discussion] The high-level contrast between standard AI safety topics (discrimination, misuse, etc.) and public concerns does not constitute a reproducible literature review; explicit keyword counts, citation analysis, or a defined scope of 'AI safety and alignment literature' are required to support the rarity quantification.
Authors: We agree that the current high-level contrast falls short of a fully reproducible literature review and that explicit metrics are needed to substantiate the quantification. The manuscript intends to highlight the discrepancy through a focused examination of AI safety priorities versus emerging public concerns, but we recognize the value of greater rigor. We will revise the relevant section to define the scope of 'AI safety and alignment literature' (specifying venues, time period, and inclusion criteria), report keyword counts from systematic searches (e.g., occurrences of cognitive/mental health risk terms versus established topics like discrimination or misuse), and incorporate citation analysis where feasible to demonstrate relative attention. revision: yes
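To make the proposed quantification concrete, here is a minimal sketch of the kind of keyword tally the rebuttal commits to. The term lists, the abstracts.json corpus file, and the implied venue scope are illustrative assumptions, not the authors' actual protocol.

```python
import json
from collections import Counter

# Hypothetical term lists; the paper does not publish its exact keywords.
COGNITIVE_TERMS = ["deskilling", "cognitive offloading", "critical thinking",
                   "addiction", "dependence"]
ESTABLISHED_TERMS = ["discrimination", "hate speech", "toxicity",
                     "misuse", "information hazard"]

def count_papers_mentioning(abstracts, terms):
    """Return, for each term, how many abstracts mention it at least once."""
    hits = Counter()
    for text in abstracts:
        lowered = text.lower()
        for term in terms:
            if term in lowered:
                hits[term] += 1
    return hits

if __name__ == "__main__":
    # abstracts.json (hypothetical file): a JSON list of abstract strings
    # collected from the defined safety venues and time window.
    with open("abstracts.json") as f:
        abstracts = json.load(f)
    print("cognitive/mental-health terms:",
          dict(count_papers_mentioning(abstracts, COGNITIVE_TERMS)))
    print("established safety terms:",
          dict(count_papers_mentioning(abstracts, ESTABLISHED_TERMS)))
```

Counting documents that mention a term, rather than raw occurrences, keeps a few term-heavy papers from inflating the totals; substring matching over lowercased text is the simplest defensible baseline before moving to citation analysis.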
Circularity Check
No circularity; literature-scope claim without derivation or self-referential inputs
Full rationale
The paper advances the position that deskilling and addiction risks are rarely addressed in AI safety literature, supported by a high-level contrast between standard topics (discrimination, misuse, etc.) and public concerns. There are no equations, fitted parameters, predictions, ansatzes, or derivation chains. The rarity claim is an empirical assertion about the scope of a literature; it is externally falsifiable and does not reduce to self-definition, load-bearing self-citation, or renaming of inputs. As a non-technical position paper, it argues against external evidence and exhibits none of the enumerated circularity patterns.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: AI safety and alignment work has so far mostly been limited to discrimination, harmful content, information hazards, and malicious-actor use cases.
Reference graph
Works this paper leans on
-
[1]
Mohamed Abdalla and Moustafa Abdalla. 2021. The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (Virtual Event, USA) (AIES ’21). Association for Computing Machinery, New York, NY, USA, 287–297
2021
-
[2]
CJ Adams, Jeffrey Sorensen, Julia Elliott, Lucas Dixon, Mark McDonald, and Will Cukierski. 2017. Toxic Comment Classification Challenge. Kaggle
2017
-
[3]
Richard Adams. 2025. Pupils fear AI is eroding their ability to study, research finds. The Guardian (15 October 2025)
2025
-
[4]
Nur Ahmed, Muntasir Wahed, and Neil C Thompson. 2023. The growing influence of industry in AI research. Science 379, 6635 (2023), 884–886
2023
-
[5]
John Alford and Brian W Head. 2017. Wicked and less wicked problems: a typology and a contingency framework. Policy and Society 36, 3 (2017), 397–413
2017
-
[6]
Zeeshan Ali, Jayaprakash Janarthanan, and Prasanna Mohan. 2024. Understanding digital dementia and cognitive impact in the current era of the internet: a review. Cureus 16, 9 (2024)
2024
-
[7]
Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. 2016. Concrete Problems in AI Safety. arXiv:1606.06565 [cs.AI]
arXiv 2016
-
[8]
Anthropic. 2025. Claude Opus 4 and 4.1 can now end a rare subset of conversations. anthropic.com
2025
-
[9]
Anthropic. 2025. Exploring model welfare. anthropic.com
2025
-
[10]
Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Jackson Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, and Jared Kaplan. 2021. A General Language Assistant as a ...
arXiv 2021
-
[11]
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073 (2022)
arXiv 2022
-
[12]
Yatan Pal Singh Balhara. 2024. When the law decides the psychiatric diagnosis: a unique scenario in context of addictive behaviors. Asian Journal of Psychiatry 101 (2024), 104238
2024
-
[13]
Jascha Bareis and Christian Katzenbach. 2022. Talking AI into being: The narratives and imaginaries of national AI strategies and their performative politics. Science, Technology, & Human Values 47, 5 (2022), 855–881
2022
-
[14]
George Miller Beard. 1881. American Nervousness, Its Causes and Consequences: A Supplement to Nervous Exhaustion (Neurasthenia). Putnam
1881
-
[15]
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Virtual Event, Canada) (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 610–623
2021
-
[16]
Kean Birch and Kelly Bronson. 2022. Big tech. Science as Culture 31, 1 (2022), 1–14
2022
-
[17]
Steven Bird. 2025. Big AI is accelerating the metacrisis: What can we do? arXiv:2512.24863 [cs.CL]
arXiv 2025
-
[18]
Krzysztof Budzyń, Marcin Romańczyk, Diana Kitala, Paweł Kołodziej, Marek Bugajski, Hans O Adami, Johannes Blom, Marek Buszkiewicz, Natalie Halvorsen, Cesare Hassan, et al. 2025. Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study. The Lancet Gastroenterology & Hepatology (2025)
2025
-
[19]
Nicholas A Caputo. 2024. Alignment as jurisprudence. Yale Journal of Law and Technology (forthcoming) (2024)
2024
- [20]
-
[21]
Mohit Chandra, Suchismita Naik, Denae Ford, Ebele Okoli, Munmun De Choudhury, Mahsa Ershadi, Gonzalo Ramos, Javier Hernandez, Ananya Bhattacharjee, Shahed Warreth, and Jina Suh. 2025. From Lived Experience to Insight: Unpacking the Psychological Risks of Using AI Conversational Agents. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, ...
2025
-
[22]
Aaron Chatterji, Thomas Cunningham, David J Deming, Zoe Hitzig, Christopher Ong, Carl Yan Shan, and Kevin Wadman. 2025. How People Use ChatGPT? Technical Report. National Bureau of Economic Research
2025
-
[23]
Jung-Seok Choi, Daniel Luke King, and Young-Chul Jung. 2019. Neurobiological perspectives in behavioral addiction. 3 pages
2019
-
[24]
Andy Clark and David Chalmers. 1998. The extended mind. Analysis 58, 1 (1998), 7–19
1998
-
[25]
Cyberspace Administration of China (CAC). 2026. Interim Measures on the Administration of Human-like Interactive Artificial Intelligence Services
2026
-
[26]
Ernesto Dal Bó. 2006. Regulatory Capture: A Review. Oxford Review of Economic Policy 22, 2 (07 2006), 203–225
2006
-
[27]
Virginia Dignum. 2019. Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Vol. 2156. Springer
2019
-
[28]
Thomas Donaldson. 1982. Corporations and Morality. Englewood Cliffs, NJ: Prentice-Hall
1982
-
[29]
Ethan Du-Crow, Susan M Astley, and Johan Hulleman. 2020. Is there a safety-net effect with computer-aided detection? Journal of Medical Imaging 7, 2 (2020), 022405–022405
2020
-
[30]
Ronald Dworkin. 1988. Law’s Empire. Harvard University Press
1988
-
[31]
European Commission. 2025. Proposal for a Regulation of the European Parliament and of the Council amending Regulations (EU) 2024/1689 and (EU) 2018/1139 as regards the simplification of the implementation of harmonised rules on artificial intelligence (Digital Omnibus on AI)
2025
-
[32]
European Union. 2016. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)
2016
-
[33]
European Union. 2024. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)
2024
-
[34]
Cathy Mengying Fang, Auren R Liu, Valdemar Danry, Eunhae Lee, Samantha WT Chan, Pat Pataranutaporn, Pattie Maes, Jason Phang, Michael Lampe, Lama Ahmad, et al. 2025. How AI and Human Behaviors Shape Psychosocial Effects of Extended Chatbot Use: A Longitudinal Randomized Controlled Study. arXiv preprint arXiv:2503.17473 (2025)
2025
-
[35]
Avigail Ferdman. 2025. AI deskilling is a structural problem. AI & Society (2025), 1–13
2025
-
[36]
Iason Gabriel. 2020. Artificial intelligence, values, and alignment. Minds and Machines 30, 3 (2020), 411–437
2020
-
[37]
Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, Andy Jones, Sam Bowman, Anna Chen, Tom Conerly, Nova DasSarma, Dawn Drain, Nelson Elhage, Sheer El-Showk, Stanislav Fort, Zac Hatfield-Dodds, Tom Henighan, Danny Hernandez, Tristan Hume, Josh Jacobson, Scott Joh...
arXiv 2022
-
[38]
Michael Gerlich. 2025. AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies 15, 1 (2025), 6
2025
-
[39]
Daniel Gilbert. 2024. Despite uncertain risks, many turn to AI like ChatGPT for mental health. The Washington Post (26 October 2024)
2024
-
[40]
Catalina Goanta, Nikolaos Aletras, Ilias Chalkidis, Sofia Ranchordás, and Gerasimos Spanakis. 2023. Regulation and NLP (RegNLP): Taming Large Language Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Houda Bouamor, Juan Pino, and Kalika Bali (Eds.). Association for Computational Linguistics, Singapore, 8712...
2023
-
[41]
Aviel Goodman. 1990. Addiction: definition and implications. British Journal of Addiction 85, 11 (1990), 1403–1408
1990
- [42]
-
[43]
Rachell Hall. 2025. ‘Sliding into an abyss’: experts warn over rising use of AI for mental health support. The Guardian (30 August 2025)
2025
-
[44]
Dione Hutchinson. 2026. A Clinician-Led Governance Framework for Evaluating Behavioral-Health AI Communication Safety. (2026)
2026
-
[45]
Robert A Kagan and John T Scholz. 1984. The “criminology of the corporation” and regulatory enforcement strategies. Enforcing Regulation (1984), 67–95
1984
-
[46]
Lasha Kavtaradze. 2024. Challenges of automating fact-checking: A technographic case study. Emerging Media 2, 2 (2024), 236–258
2024
-
[47]
Simon Kemp. 2025. Digital 2026: Global Overview Report. We Are Social (15 October 2025)
2025
-
[48]
Brian Kennedy, Eileen Yam, Emma Kikuchi, Isabelle Pula, and Javier Fuentes. 2025. How Americans View AI and Its Impact on People and Society: Views of AI’s Impact on Society and Human Abilities. Technical Report. Pew Research Center
2025
-
[49]
Raad Khraishi, Cristovão Iglesias Jr, Devesh Batra, Peter Gostev, Giulio Pelosio, Ramin Okhrati, and Greig A Cowan. 2025. Real-Time Hyper-Personalized Generative AI Should Be Regulated to Prevent the Rise of “Digital Heroin”. In The Thirty-Ninth Annual Conference on Neural Information Processing Systems Position Paper Track
2025
-
[50]
Celeste Kidd and Abeba Birhane. 2023. How AI can distort human beliefs. Science 380, 6651 (2023), 1222–1223
2023
-
[51]
J.W. Kingdon. 1995. Agendas, Alternatives, and Public Policies. HarperCollins College Publishers
1995
-
[52]
Noam Kolt, Nicholas Caputo, Jack Boeglin, Cullen O’Keefe, Rishi Bommasani, Stephen Casper, Mariano-Florentino Cuéllar, Noah Feldman, Iason Gabriel, Gillian K. Hadfield, Lewis Hammond, Peter Henderson, Atoosa Kasirzadeh, Seth Lazar, Anka Reuel, Kevin L. Wei, and Jonathan Zittrain. 2026. Legal Alignment for Safe and Ethical AI. arXiv:2601.04175 [cs.CY]
-
[53]
Chokri Kooli, Youssef Kooli, and Eya Kooli. 2025. Generative artificial intelligence addiction syndrome: A new behavioral disorder? Asian Journal of Psychiatry 107 (2025), 104476
2025
- [54]
-
[55]
Eleanor Lawrie. 2025. Can AI therapists really be an alternative to human help? BBC News (20 May 2025)
2025
-
[56]
Seth Lazar and Alondra Nelson. 2023. AI safety on whose terms? Science 381, 6654 (2023), 138–138
2023
-
[57]
Hao-Ping Lee, Advait Sarkar, Lev Tankelevitch, Ian Drosos, Sean Rintel, Richard Banks, and Nicholas Wilson. 2025. The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–22
2025
-
[58]
John D Lee and Katrina A See. 2004. Trust in automation: Designing for appropriate reliance. Human Factors 46, 1 (2004), 50–80
2004
-
[59]
Jan Leike, David Krueger, Tom Everitt, Miljan Martic, Vishal Maini, and Shane Legg. 2018. Scalable agent alignment via reward modeling: a research direction. CoRR abs/1811.07871 (2018). arXiv:1811.07871
arXiv 2018
-
[60]
Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A. Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, and Shane Legg. 2017. AI Safety Gridworlds. arXiv:1711.09883 [cs.LG]
arXiv 2017
-
[61]
Alva Markelius, Connor Wright, Joahna Kuiper, Natalie Delille, and Yu-Ting Kuo. 2024. The mechanisms of AI hype and its planetary and social costs. AI and Ethics 4, 3 (2024), 727–742
2024
-
[62]
Hannah R Marriott and Valentina Pitardi. 2024. One is the loneliest number… Two can be as bad as one. The influence of AI Friendship Apps on users’ well-being and addiction. Psychology & Marketing 41, 1 (2024), 86–101
2024
-
[63]
Jared Moore, Declan Grabb, William Agnew, Kevin Klyman, Stevie Chancellor, Desmond C. Ong, and Nick Haber. 2025. Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25). Association for Computing Machinery, ...
2025
-
[64]
Vincent C Müller. 2020. Ethics of artificial intelligence and robotics. Stanford Encyclopedia of Philosophy (2020)
2020
-
[65]
Tejas N Narechania and Ganesh Sitaraman. 2024. An antimonopoly approach to governing artificial intelligence. Yale Law & Policy Review 43, 1 (2024), 95
2024
-
[66]
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35 (2022), 27730–27744
2022
-
[67]
Jessica L Pallant, Janneke Blijlevens, Alexander Campbell, and Ryan Jopp. 2025. Mastering knowledge: The impact of generative AI on student learning outcomes. Studies in Higher Education (2025), 1–22
2025
-
[68]
Michael Park, Erin Leahey, and Russell J Funk. 2023. Papers and patents are becoming less disruptive over time. Nature 613, 7942 (2023), 138–144
2023
-
[69]
John Pavlopoulos, Jeffrey Sorensen, Léo Laugier, and Ion Androutsopoulos. 2021. SemEval-2021 Task 5: Toxic Spans Detection. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), Alexis Palmer, Nathan Schneider, Natalie Schluter, Guy Emerson, Aurelie Herbelot, and Xiaodan Zhu (Eds.). Association for Computational Linguist...
2021
-
[70]
A Mitchell Polinsky and Steven Shavell. 2000. The economic theory of public enforcement of law. Journal of Economic Literature 38, 1 (2000), 45–76
2000
-
[71]
Sara Riam, N Baabouchi, O Belakbir, Z Elmaataoui, and H Kisra. 2025. Adolescent Addiction to Conversational AI: An overview. Sch J Med Case Rep 11 (2025), 2790–2794
2025
-
[72]
Evan F Risko and Sam J Gilbert. 2016. Cognitive offloading. Trends in Cognitive Sciences 20, 9 (2016), 676–688
2016
-
[73]
Horst W. J. Rittel and Melvin M. Webber. 1973. Dilemmas in a general theory of planning. Policy Sciences 4, 2 (June 1973), 155–169
1973
-
[74]
Eden Saig and Nir Rosenfeld. 2023. Learning to suggest breaks: sustainable optimization of long-term user engagement. In Proceedings of the 40th International Conference on Machine Learning (Honolulu, Hawaii, USA) (ICML’23). JMLR.org, Article 1232, 26 pages
2023
-
[75]
Beatriz Garcia Santa Cruz, Carlos Vega, Philip Santangelo, and Venkata Satagopam. 2025. Beyond Engagement: A Multidimensional Framework to Evaluate the Safe Development of Agentic AI in Mental Health. In International Workshop on Agentic AI for Medicine. Springer, 74–84
2025
-
[76]
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Yang Wu, et al. 2024. DeepSeekMath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300 (2024)
arXiv 2024
-
[78]
Divya Sharma, Shakila Meshkat, Argyrios Perivolaris, Mohammad Amin Kamaleddin, Bazen Gashaw Teferra, Alice Rueda, Reza Samavi, Rakesh Jetly, Vijay Mago, Yuqi Wu, et al. 2026. Reimagining psychiatric care with agentic AI: promise, challenges, and a roadmap forward. npj Digital Medicine (2026)
2026
- [79]
-
[80]
Scott Singer and Matt Sheehan. 2026. China Is Worried About AI Companions. Here’s What It’s Doing About Them. Carnegie Endowment for International Peace (26 February 2026)
2026