What if AI systems weren't chatbots?
Pith reviewed 2026-05-11 03:18 UTC · model grok-4.3
The pith
Treating AI primarily as chatbots produces structural failures in complex tasks, deskilling, and concentrated economic power.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The chatbot paradigm is a dominant sociotechnical configuration whose widespread adoption reshapes social, economic, legal, and environmental systems. Chatbot-based AI fails to meet user needs in complex or high-stakes contexts while projecting confidence; it alters work and learning patterns in ways that produce deskilling and knowledge homogenization; and it generates labor displacement, concentration of economic power, and increased environmental costs driven by large-scale infrastructures. This trajectory reflects value choices that prioritize conversational generality over domain specificity, accountability, and long-term social sustainability.
What carries the argument
The chatbot paradigm: the choice to treat AI primarily as conversational assistants, reshaping interaction patterns across work, learning, and decision-making.
If this is right
- Chatbot systems often fail to support complex or high-stakes needs while still appearing authoritative.
- Normalized chatbot use changes work and learning so that deskilling and homogenized knowledge become common.
- Sustained investment in chatbot infrastructures increases environmental costs and concentrates economic power.
- Moving toward pluralistic designs and task-specific tools could reduce the described harms.
Where Pith is reading between the lines
- Developers could run side-by-side trials of chatbot versus domain-specific interfaces on the same professional tasks to track skill retention.
- Regulators might require interface impact statements for large AI deployments similar to those used for other technologies with broad social reach.
- Organizations could measure changes in employee expertise and output diversity after introducing general chat tools.
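The last suggestion, measuring "output diversity," can be made concrete. One simple proxy (not from the paper, offered here as an illustrative sketch) is Shannon entropy over the categories of approach workers use for the same task: a drop in entropy after a general chat tool is introduced would be consistent with the homogenization the paper describes. The labels below are hypothetical.

```python
from collections import Counter
from math import log2

def shannon_entropy(labels):
    """Shannon entropy (bits) of a list of categorical labels.
    Higher entropy means more diverse outputs."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Hypothetical solution-approach labels for the same task,
# recorded before and after a general chat tool is introduced.
before = ["regex", "parser", "sql", "regex", "manual", "sql"]
after = ["llm", "llm", "llm", "regex", "llm", "llm"]

drop = shannon_entropy(before) - shannon_entropy(after)
print(f"diversity drop: {drop:.2f} bits")
```

This is only a first-pass proxy; a real study would also need to control for task mix and rater agreement on the category labels.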
Load-bearing premise
Widespread adoption of the chatbot paradigm necessarily produces the listed negative effects on skills, knowledge, labor, power, and the environment without needing further causal proof.
What would settle it
A study tracking workers or students before and after chatbot AI adoption that finds no measurable decline in specialized skills, no reduction in knowledge diversity, stable employment levels, and no net rise in infrastructure-related emissions.
Original abstract
The rapid convergence of artificial intelligence (AI) toward conversational chatbot interfaces marks a critical moment for the industry. This paper argues that the chatbot paradigm is not a neutral interface choice, but a dominant sociotechnical configuration whose widespread adoption reshapes social, economic, legal, and environmental systems. We examine how treating AI primarily as conversational assistants has extensive structural downsides. We show how chatbot-based systems often fail to adequately meet user needs, particularly in complex or high-stakes contexts, while projecting confidence and authority. We further analyze how the normalization of chatbot-mediated interaction alters patterns of work, learning, and decision-making, contributing to deskilling, homogenization of knowledge, and shifting expectations of expertise. Finally, we examine broader societal effects, including labor displacement, concentration of economic power, and increased environmental costs driven by sustained investment in large-scale chatbot infrastructures. While acknowledging legitimate benefits, we argue that the current trajectory of AI development reflects specific value choices that prioritize conversational generality over domain specificity, accountability, and long-term social sustainability. We conclude by outlining alternative directions for AI development and governance that move beyond one-size-fits-all chatbots, emphasizing pluralistic system design, task-specific tools, and institutional safeguards to mitigate social and economic harm.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper argues that the dominant chatbot interface for AI represents a non-neutral sociotechnical configuration whose adoption produces structural harms, including failure to meet needs in complex/high-stakes contexts, deskilling through altered work and learning patterns, knowledge homogenization, labor displacement, concentration of economic power, and elevated environmental costs from large-scale infrastructures. It attributes these outcomes to value choices favoring conversational generality over domain specificity and accountability, while outlining alternative pluralistic and task-specific design directions.
Significance. If the causal attributions hold, the analysis could usefully inform AI governance and design debates by linking interface choices to systemic social and environmental effects. The paper's broad scope connecting technical form to societal outcomes is a strength for a Computers and Society venue, though its interpretive character limits immediate applicability without additional grounding.
major comments (3)
- [analysis of normalization of chatbot-mediated interaction] The section analyzing normalization of chatbot-mediated interaction asserts that it contributes to deskilling, homogenization of knowledge, and shifting expertise expectations, yet supplies no mechanisms, before/after comparisons to non-chatbot AI deployments (e.g., domain-specific APIs), or cited empirical studies isolating the conversational form as the driver.
- [broader societal effects] The examination of broader societal effects states that labor displacement, power concentration, and environmental costs are driven by sustained investment in chatbot infrastructures, but provides no comparative evidence or references demonstrating that these outcomes are specific to the chatbot paradigm rather than general properties of high-capability AI or deployment incentives.
- [abstract and conclusion] The abstract and concluding argument treat the listed harms as direct consequences of prioritizing conversational generality, creating a circular structure in which the harms are invoked both to diagnose the paradigm and to justify alternatives, without independent falsifiable tests or counterexamples.
minor comments (1)
- [introduction] The term 'chatbot paradigm' is used throughout without an early, precise definition distinguishing the interface from underlying model capabilities or training regimes.
Simulated Author's Rebuttal
We thank the referee for the detailed and constructive comments. These highlight opportunities to improve the grounding and clarity of our critical analysis. We address each major comment below and indicate the revisions we will undertake.
Point-by-point responses
Referee: The section analyzing normalization of chatbot-mediated interaction asserts that it contributes to deskilling, homogenization of knowledge, and shifting expertise expectations, yet supplies no mechanisms, before/after comparisons to non-chatbot AI deployments (e.g., domain-specific APIs), or cited empirical studies isolating the conversational form as the driver.
Authors: We agree that explicit mechanisms and additional references would strengthen the section. The current argument draws on established STS and labor studies literature regarding automation's effects on expertise. In revision, we will outline specific mechanisms (e.g., how fluid conversational interfaces reduce opportunities for deliberate practice and verification) and add citations to empirical work on AI in education and professional workflows. Direct before-and-after comparisons isolating conversational form remain limited in existing research due to the paradigm's rapid dominance; we will note this limitation and reference available contrasts with API-based or domain-specific tools. revision: partial
Referee: The examination of broader societal effects states that labor displacement, power concentration, and environmental costs are driven by sustained investment in chatbot infrastructures, but provides no comparative evidence or references demonstrating that these outcomes are specific to the chatbot paradigm rather than general properties of high-capability AI or deployment incentives.
Authors: We accept that clearer differentiation is needed. The paper's core claim is that chatbot generality requires large-scale, general-purpose infrastructure, unlike narrower systems. Revisions will incorporate references to environmental assessments of LLM training and economic analyses of AI labor impacts, plus examples of specialized AI deployments (e.g., in scientific computing) with distinct investment patterns and lower concentration effects. This will better isolate paradigm-specific drivers while acknowledging overlaps with broader AI trends. revision: partial
Referee: The abstract and concluding argument treat the listed harms as direct consequences of prioritizing conversational generality, creating a circular structure in which the harms are invoked both to diagnose the paradigm and to justify alternatives, without independent falsifiable tests or counterexamples.
Authors: The manuscript is an interpretive sociotechnical analysis rather than an empirical study, so it does not furnish falsifiable tests. To reduce circularity, we will revise the abstract and conclusion to separate diagnostic observations from prescriptive recommendations. We will also introduce counterexamples of effective non-chatbot systems, such as task-specific models in biology and diagnostics. This preserves the argumentative intent while improving logical flow. revision: partial
- The revision will not include independent falsifiable tests or new empirical evidence isolating the chatbot paradigm's causal role in the identified harms, as these would require original data collection beyond the scope of the current conceptual paper.
Circularity Check
No significant circularity in argumentative structure
Full rationale
This is a sociotechnical critique paper without equations, fitted parameters, or predictive derivations. The central claims about harms from the chatbot paradigm are presented as analytical examinations of existing systems rather than reductions to self-definitions or inputs by construction. No self-citation load-bearing steps, uniqueness theorems, or ansatz smuggling appear in the abstract or described structure. The paper acknowledges benefits and proposes alternatives, indicating an open argumentative form grounded in external observation rather than tautological loops.
Axiom & Free-Parameter Ledger
axioms (2)
- Domain assumption: The chatbot paradigm is a dominant sociotechnical configuration.
- Ad hoc to paper: Widespread chatbot adoption necessarily produces deskilling, homogenization, labor displacement, power concentration, and environmental costs.
Reference graph
Works this paper leans on
-
[1]
Alaa A Abd-Alrazaq, Mohannad Alajlani, Ali Abdallah Alalwan, Bridgette M Bewick, Peter Gardner, and Mowafa Househ. 2019. An overview of the features of chatbots in mental health: A scoping review.International journal of medical informatics132 (2019), 103978
work page 2019
-
[2]
Daron Acemoglu and Pascual Restrepo. 2018. Artificial intelligence, automation, and work. InThe economics of artificial intelligence: An agenda. University of Chicago Press, 197–236
work page 2018
-
[3]
Tarek Ait Baha, Mohamed El Hajji, Youssef Es-Saady, and Hammou Fadili. 2024. The impact of educational chatbot on student learning experience.Education and Information Technologies29, 8 (2024), 10153–10176
work page 2024
-
[4]
Luca Ambrosio, Jordy Schol, Vincenzo Amedeo La Pietra, Fabrizio Russo, Gianluca Vadalà, and Daisuke Sakai. 2023. Threats and opportunities of using ChatGPT in scientific writing—The risk of getting spineless.JOR spine7, 1 (2023), e1296
work page 2023
-
[5]
Anthropic. 2024. Decomposing Language Models Into Understandable Components. https://www.anthropic.com/research/decomposing-language-models-into-understandable-components
work page 2024
-
[6]
Anthropic. 2026. Introducing Claude Design by Anthropic Labs. https://www.anthropic.com/news/claude-design-anthropic-labs. Accessed: 2026-04-28
work page 2026
-
[7]
Luke Balcombe. 2023. AI chatbots in digital mental health. InInformatics, Vol. 10. MDPI, 82
work page 2023
-
[8]
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, and Daniel Weld. 2021. Does the whole exceed its parts? the effect of ai explanations on complementary team performance. InProceedings of the 2021 CHI conference on human factors in computing systems. 1–16
work page 2021
-
[9]
BBC News. 2025. ‘A predator in your home’: Mothers say chatbots encouraged their sons to kill themselves. https://www.bbc.com/news/articles/ce3xgwyywe4o Accessed January 13, 2026
work page 2025
- [10]
-
[11]
Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big?. InProceedings of the 2021 ACM conference on fairness, accountability, and transparency. 610–623
work page 2021
-
[12]
Michael Blackhurst, Cameron Wade, Joe DeCarolis, Anderson de Queiroz, Jeremiah Johnson, and Paulina Jaramillo. 2025.Data Center Growth Could Increase Electricity Bills 8% Nationally and as Much as 25% in Some Regional Markets. Carnegie Mellon University. https://www.cmu.edu/work-that-matters/energy-innovation/data-center-growth-could-increase-electricity-bills
work page 2025
-
[13]
Borhane Blili-Hamelin, Christopher Graziul, Leif Hancox-Li, Hananel Hazan, El-Mahdi El-Mhamdi, Avijit Ghosh, Katherine A Heller, Jacob Metcalf, Fabricio Murai, Eryk Salvaggio, Andrew Smart, Todd Snider, Mariame Tighanimine, Talia Ringer, Margaret Mitchell, and Shiri Dori-Hacohen. 2025. Position: Stop treating ‘AGI’ as the north-star goal of AI research. I...
work page 2025
-
[14]
Ryan L Boyd and David M Markowitz. 2025. Artificial Intelligence and the Psychology of Human Connection.Preprint10 (2025)
work page 2025
-
[15]
Petter Bae Brandtzaeg and Asbjørn Følstad. 2018. Chatbots: changing user needs and motivations.interactions25, 5 (2018), 38–43
work page 2018
-
[16]
Natalie Grace Brigham, Miranda Wei, Tadayoshi Kohno, and Elissa M Redmiles. 2024. " Violation of my {body:}" Perceptions of {AI-generated}non-consensual (intimate) imagery. InTwentieth Symposium on Usable Privacy and Security (SOUPS 2024). 373–392
work page 2024
-
[17]
Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond. 2023.Generative AI at Work. Working Paper 31161. National Bureau of Economic Research (NBER). https://www.nber.org/system/files/working_papers/w31161/w31161.pdf Accessed 2026-01-03
work page 2023
-
[18]
Krzysztof Budzyń, Marcin Romańczyk, Diana Kitala, Paweł Kołodziej, Marek Bugajski, Hans O Adami, Johannes Blom, Marek Buszkiewicz, Natalie Halvorsen, Cesare Hassan, et al . 2025. Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study.The Lancet Gastroenterology & Hepatology10, 10 (2025), 896–903
work page 2025
-
[19]
David Cahn. 2024. AI’s $600B Question. Sequoia Capital. https://sequoiacap.com/article/ais-600b-question/ Accessed 2026-01-03
work page 2024
-
[20]
Julie Y Cai and Marybeth J Mattingly. 2025. Unstable Work Schedules and Racial Earnings Disparities Among US Workers.RSF: The Russell Sage Foundation Journal of the Social Sciences11, 1 (2025), 201–223
work page 2025
-
[21]
Samuel Carvalho. 2024. Data centers: Just one part of the African digital infrastructure investment equation. Data Center Dynamics. https://www.datacenterdynamics.com/en/opinions/data-centers-just-one-part-of-the-african-digital-infrastructure-investment-equation/
work page 2024
-
[22]
Mauro Cazzaniga, Carlo Pizzinelli, Emma J Rockall, and Ms Marina Mendes Tavares. 2024. Exposure to artificial intelligence and occupational mobility: A cross-country analysis.International Monetary Fund(2024). Issue 116
work page 2024
- [23]
- [24]
-
[25]
Minyang Chow and Olivia Ng. 2025. Beyond chatbots: Moving toward multistep modular AI agents in medical education.JMIR Medical Education11 (2025), e76661
work page 2025
-
[26]
Nilesh Christopher. 2024. How AI is resurrecting dead Indian politicians as election looms.Al Jazeera(2024). What if AI systems weren’t chatbots? FAccT ’26, June 25–28, 2026, Montreal, QC, Canada
work page 2024
-
[27]
Nilesh Christopher. 2024. Indian Voters Are Being Bombarded With Millions of Deepfakes. Political Candidates Approve. https://www.nileshchristopher.net/ai-india-elections-deepfakes/indian-elections-ai-deepfakes
work page 2024
-
[28]
Benjamin Cohen-Wang, Harshay Shah, Kristian Georgiev, and Aleksander Mądry. 2024. ContextCite: Attributing model generation to context.Advances in Neural Information Processing Systems37 (2024), 95764–95807
work page 2024
-
[29]
Michelle Cohn, Mahima Pushkarna, Gbolahan O Olanubi, Joseph M Moran, Daniel Padgett, Zion Mengesha, and Courtney Heldreth
-
[30]
InExtended Abstracts of the CHI Conference on Human Factors in Computing Systems
Believing anthropomorphism: examining the role of anthropomorphic cues on trust in large language models. InExtended Abstracts of the CHI Conference on Human Factors in Computing Systems. 1–15
-
[31]
Sean Patrick Cooper. 2024. Data Centers Are Draining Water and Generating Smog in Oregon. Rolling Stone. https://www.rollingstone.com/culture/culture-features/data-center-water-pollution-amazon-oregon-1235466613/
work page 2024
- [32]
-
[33]
Giulio Cornelli, Jon Frost, and Saurabh Mishra. 2023. Artificial intelligence, services globalisation and income inequality. Technical Report. Bank for International Settlements
work page 2023
-
[34]
Luca Costabello, Alberto Bernardi, Adrianna Janik, Aldan Creo, Sumit Pai, Chan Le Van, Rory McGrath, Nicholas McCarthy, and Pedro Tabacof. 2019. AmpliGraph: a Library for Representation Learning on Knowledge Graphs. doi:10.5281/zenodo.2595043
-
[35]
Micholo Cucio and Tristan Hennig. 2025.Artificial Intelligence and the Philippine Labor Market: Mapping Occupational Exposure and Complementarity. Technical Report. The International Monetary Fund (IMF)
work page 2025
-
[36]
Aniruddha Das. 2023. AI Chatbots may be fun, but they have a drinking problem.Foundry journal26, 9 (2023), 1–4
work page 2023
-
[37]
Julian De Freitas, Zeliha Oğuz-Uğuralp, Ahmet Kaan Uğuralp, and Stefano Puntoni. 2025. AI companions reduce loneliness.Journal of Consumer Research(2025), ucaf040
work page 2025
-
[38]
Sarah E Dempsey. 2021. Racialized and gendered constructions of the “ideal server”: Contesting historical occupational discourses of restaurant service.Frontiers in Sustainable Food Systems5 (2021), 727473
work page 2021
-
[39]
Kerstin Denecke, Alaa Abd-Alrazaq, and Mowafa Househ. 2021. Artificial intelligence for chatbots in mental health: opportunities and challenges.Multiple perspectives on artificial intelligence in healthcare: Opportunities and challenges(2021), 115–128
work page 2021
-
[40]
Abhay Deshpande, Maya Guru, Rose Hendrix, Snehal Jauhri, Ainaz Eftekhar, Rohun Tripathi, Max Argus, Jordi Salvador, Haoquan Fang, Matthew Wallingford, Wilbert Pumacay, Yejin Kim, Quinn Pfeifer, Ying-Chun Lee, Piper Wolters, Omar Rayyan, Mingtong Zhang, Jiafei Duan, Karen Farley, Winson Han, Eli Vanderbilt, Dieter Fox, Ali Farhadi, Georgia Chalvatzaki, Dhr...
-
[41]
Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A Smith, Nicole DeCario, and Will Buchanan. 2022. Measuring the carbon intensity of AI in cloud instances. InProceedings of the 2022 ACM conference on fairness, accountability, and transparency. 1877–1894
work page 2022
-
[42]
Barry Elad. 2025. Claude AI Statistics. (2025). https://sqmagazine.co.uk/claude-ai-statistics/
work page 2025
-
[43]
Madeleine Clare Elish. 2025. Moral crumple zones: cautionary tales in human–robot interaction. InRobot Law: Volume II. Edward Elgar Publishing, 83–105
work page 2025
-
[44]
Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock. 2024. GPTs are GPTs: Labor market impact potential of LLMs. Science384, 6702 (2024), 1306–1308
work page 2024
-
[45]
Nanna Inie and Emily M. Bender. 2026. We Need to Talk About How We Talk About ’AI’ | TechPolicy.Press — techpolicy.press. https://www.techpolicy.press/we-need-to-talk-about-how-we-talk-about-ai/. [Accessed 08-01-2026]
work page 2026
-
[46]
Robin Emsley. 2023. ChatGPT: these are not hallucinations–they’re fabrications and falsifications.Schizophrenia9, 1 (2023), 52
work page 2023
-
[47]
Daniel Evanko and Michael Di Natale. 2024. Quantifying and Assessing the Use of Generative AI by Authors and Reviewers in the Cancer Research Field.International Congress on Peer Review and Scientific Publication(2024)
work page 2024
-
[48]
Cathy Mengying Fang, Auren R Liu, Valdemar Danry, Eunhae Lee, Samantha WT Chan, Pat Pataranutaporn, Pattie Maes, Jason Phang, Michael Lampe, Lama Ahmad, et al. 2025. How AI and Human Behaviors Shape Psychosocial Effects of Extended Chatbot Use: A Longitudinal Randomized Controlled Study.arXiv preprint arXiv:2503.17473(2025)
- [49]
-
[50]
Kathleen Kara Fitzpatrick, Alison Darcy, and Molly Vierhile. 2017. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial.JMIR mental health4, 2 (2017), e7785
work page 2017
-
[51]
Agentic AI Foundation. [n. d.]. Tools - Model Context Protocol — modelcontextprotocol.io. https://modelcontextprotocol.io/specification/2025-06-18/server/tools. [Accessed 30-12-2025]
work page 2025
-
[52]
Foxglove. 2024. Open letter to President Biden from tech workers in Kenya. https://www.foxglove.org.uk/open-letter-to-president-biden-from-tech-workers-in-kenya/
work page 2024
-
[53]
Michael Gerlich. 2025. AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies 15, 1 (2025), 6
work page 2025
-
[54]
Sourojit Ghosh. 2024. Interpretations, Representations, and Stereotypes of Caste within Text-to-Image Generators. InProceedings of the AAAI/ACM Conference on AI, Ethics, and Society, Vol. 7. 490–502
work page 2024
-
[55]
Sourojit Ghosh and Aylin Caliskan. 2023. ‘Person’ == Light-skinned, Western Man, and Sexualization of Women of Color: Stereotypes in Stable Diffusion. InFindings of the Association for Computational Linguistics: EMNLP 2023. Association for Computational Linguistics, 6971–6985
work page 2023
-
[56]
Sourojit Ghosh, Nina Lutz, and Aylin Caliskan. 2024. “I Don’t See Myself Represented Here at All”: User Experiences of Stable Diffusion Outputs Containing Representational Harms across Gender Identities and Nationalities. In Proceedings of the AAAI/ACM conference on AI, ethics, and society, Vol. 7. 463–475
work page 2024
-
[57]
Elizabeth Gibney. 2025. AI bots wrote and reviewed all papers at this conference
work page 2025
-
[58]
Cassidy Gibson, Daniel Olszewski, Natalie Grace Brigham, Anna Crowder, Kevin RB Butler, Patrick Traynor, Elissa M Redmiles, and Tadayoshi Kohno. 2025. Analyzing the {AI} Nudification Application Ecosystem. In34th USENIX Security Symposium (USENIX Security 25). 1–20
work page 2025
-
[59]
Trystan S Goetze. 2024. AI art is theft: Labour, extraction, and exploitation: Or, on the dangers of stochastic Pollocks. InProceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. 186–196
work page 2024
-
[60]
Goldman Sachs Research. 2024. Gen AI: Too Much Spend, Too Little Benefit? Goldman Sachs Insights. https://www.goldmansachs.com/insights/top-of-mind/gen-ai-too-much-spend-too-little-benefit Accessed 2026-01-03
work page 2024
-
[61]
Google. 2026. NotebookLM. https://notebooklm.google.com. Accessed: 2026-04-28
work page 2026
-
[62]
Mary L Gray and Siddharth Suri. 2019. Ghost work: How to stop Silicon Valley from building a new global underclass. Harper Business
work page 2019
-
[63]
Ben Green and Salomé Viljoen. 2020. Algorithmic realism: expanding the boundaries of algorithmic thought. InProceedings of the 2020 conference on fairness, accountability, and transparency. 19–31
work page 2020
-
[64]
Cobus Greyling. 2025. How ComfyUI-R1 & ComfyUI Transform Unstructured Input into Structured Workflows — cobusgreyling.substack.com. https://cobusgreyling.substack.com/p/how-comfyui-r1-and-comfyui-transform. [Accessed 08-01-2026]
work page 2025
-
[65]
Oliver L Haimson, Samuel Reiji Mayworm, Alexis Shore Ingber, and Nazanin Andalibi. 2025. AI Attitudes Among Marginalized Populations in the US: Nonbinary, Transgender, and Disabled Individuals Report More Negative AI Attitudes. InProceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency. 1224–1237
work page 2025
-
[66]
Fauzia Zahira Munirul Hakim, Lia Maulia Indrayani, and Rosaria Mita Amalia. 2019. A dialogic analysis of compliment strategies employed by replika chatbot. InThird International conference of arts, language and culture (ICALC 2018). Atlantis Press, 266–271
work page 2019
- [67]
-
[68]
Adam W Hanley, Alia R Warner, Vincent M Dehili, Angela I Canto, and Eric L Garland. 2015. Washing dishes to wash the dishes: brief instruction in an informal mindfulness practice.Mindfulness6, 5 (2015), 1095–1103
work page 2015
-
[69]
Claire Hao. 2025. A winter freeze could be coming to Houston. Are CenterPoint, ERCOT ready? Houston Chronicle. https://www.houstonchronicle.com/business/energy/article/houston-freeze-power-outages-21239664.php
work page 2025
-
[70]
Eleanor Hawkins. 2025. Anthropic launches first brand campaign for Claude — axios.com. https://www.axios.com/2025/09/18/anthropic- brand-campaign-claude. [Accessed 07-01-2026]
work page 2025
-
[71]
Will Hawkins, Brent Mittelstadt, and Chris Russell. 2025. Deepfakes on Demand: The rise of accessible non-consensual deepfake image generators: The rise of accessible non-consensual deepfake image generators. InProceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency. 1602–1614
work page 2025
-
[72]
Kashmir Hill. 2025. A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?smid=nytcore-ios-share Accessed January 10, 2026
work page 2025
-
[73]
Robert R Hoffman, Shane T Mueller, Gary Klein, and Jordan Litman. 2018. Metrics for explainable AI: Challenges and prospects.arXiv preprint arXiv:1812.04608(2018)
work page 2018
-
[74]
Xiang Hui, Oren Reshef, and Luofeng Zhou. 2024. The short-term effects of generative artificial intelligence on employment: Evidence from an online labor market.Organization Science35, 6 (2024), 1977–1989
work page 2024
-
[75]
Anders Humlum and Emilie Vestergaard. 2025. Large language models, small labor market effects. Technical Report. National Bureau of Economic Research
work page 2025
-
[76]
Nikhil Imandar. 2024. India’s data centre boom confronts a looming water challenge. BBC News. https://www.bbc.com/news/articles/cgr417pwek7o
work page 2024
-
[77]
International Labour Organization. 2025. Future of Work Issue Briefs. Technical Report. International Labour Organization
work page 2025
-
[78]
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation.ACM computing surveys55, 12 (2023), 1–38
work page 2023
-
[79]
Harry H Jiang, Lauren Brown, Jessica Cheng, Mehtab Khan, Abhishek Gupta, Deja Workman, Alex Hanna, Johnathan Flowers, and Timnit Gebru. 2023. AI Art and its Impact on Artists. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society. 363–374
work page 2023
- [80]