pith. machine review for the scientific record.

arxiv: 2605.07896 · v1 · submitted 2026-05-08 · 💻 cs.CY · cs.AI

Recognition: no theorem link

What if AI systems weren't chatbots?

Authors on Pith · no claims yet

Pith reviewed 2026-05-11 03:18 UTC · model grok-4.3

classification 💻 cs.CY cs.AI
keywords AI chatbots · sociotechnical systems · deskilling · labor displacement · environmental costs · knowledge homogenization · AI interfaces · AI governance

The pith

Treating AI primarily as chatbots produces structural failures in complex tasks, deskilling, and concentrated economic power.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper contends that building AI around conversational chatbot interfaces is a loaded design choice rather than a neutral default. It traces how this form often falls short for users facing intricate or high-stakes problems yet still projects authority, how it changes daily work and learning so that skills erode and knowledge becomes more uniform, and how it drives larger shifts including job losses, tighter control of economic gains, and heavier environmental loads from the required infrastructure. Readers might care because these patterns affect professional competence, the distribution of technology's benefits, and the resources consumed by digital systems. The authors acknowledge some upsides but call for other development paths that favor targeted tools and checks against broad harm.

Core claim

The chatbot paradigm is a dominant sociotechnical configuration whose widespread adoption reshapes social, economic, legal, and environmental systems. Chatbot-based AI fails to meet user needs in complex or high-stakes contexts while projecting confidence; it alters work and learning patterns, producing deskilling and knowledge homogenization; and it generates labor displacement, concentration of economic power, and increased environmental costs driven by large-scale infrastructures. This trajectory reflects value choices that prioritize conversational generality over domain specificity, accountability, and long-term social sustainability.

What carries the argument

The chatbot paradigm: the choice to treat AI primarily as conversational assistants, reshaping interaction patterns across work, learning, and decision-making.

If this is right

  • Chatbot systems often fail to support complex or high-stakes needs while still appearing authoritative.
  • Normalized chatbot use changes work and learning so that deskilling and homogenized knowledge become common.
  • Sustained investment in chatbot infrastructures increases environmental costs and concentrates economic power.
  • Moving toward pluralistic designs and task-specific tools could reduce the described harms.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Developers could run side-by-side trials of chatbot versus domain-specific interfaces on the same professional tasks to track skill retention.
  • Regulators might require interface impact statements for large AI deployments similar to those used for other technologies with broad social reach.
  • Organizations could measure changes in employee expertise and output diversity after introducing general chat tools; a first-pass diversity metric is sketched after this list.
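
The output-diversity idea lends itself to a simple first-pass metric. The sketch below is a minimal illustration, not a method from the paper: it compares the mean pairwise lexical distance between documents produced before and after a chat tool is introduced, using only the Python standard library. The toy documents and function names are hypothetical; a real audit would use stronger representations (embeddings, topic models) and controlled sampling.

```python
# Minimal sketch: comparing lexical diversity of work outputs before and
# after adoption of a general chat tool. All data and names are hypothetical.
from itertools import combinations


def jaccard_distance(a: set[str], b: set[str]) -> float:
    """1 - |A ∩ B| / |A ∪ B|; 0 for identical token sets, 1 for disjoint."""
    union = a | b
    if not union:
        return 0.0
    return 1.0 - len(a & b) / len(union)


def mean_pairwise_diversity(texts: list[str]) -> float:
    """Average Jaccard distance over all pairs of documents."""
    token_sets = [set(t.lower().split()) for t in texts]
    pairs = list(combinations(token_sets, 2))
    return sum(jaccard_distance(a, b) for a, b in pairs) / len(pairs)


before = [
    "supplier risk memo citing regional logistics data",
    "incident postmortem with a custom timeline reconstruction",
    "quarterly forecast built on the team's own demand model",
]
after = [
    "summary memo outlining key risks and recommended next steps",
    "summary report outlining key findings and recommended next steps",
    "summary forecast outlining key trends and recommended next steps",
]

print(f"diversity before: {mean_pairwise_diversity(before):.3f}")
print(f"diversity after:  {mean_pairwise_diversity(after):.3f}")
# A sustained drop in this score after adoption would be consistent with,
# though not proof of, the homogenization the paper describes.
```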

Load-bearing premise

Widespread adoption of the chatbot paradigm necessarily produces the listed negative effects on skills, knowledge, labor, power, and the environment without needing further causal proof.

What would settle it

A study tracking workers or students before and after chatbot AI adoption that finds no measurable decline in specialized skills, no reduction in knowledge diversity, stable employment levels, and no net rise in infrastructure-related emissions.
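
As a concrete picture of what such a study's primary analysis might look like, here is a minimal sketch in standard-library Python. The paired scores, sample size, and bootstrap settings are illustrative assumptions, not data from the paper; a real study would also need controls for task difficulty and secular trends.

```python
# Minimal sketch: paired pre/post comparison of specialized-skill scores
# around chatbot adoption, with a percentile bootstrap confidence interval.
# All scores are hypothetical illustrations, not data from the paper.
import random

random.seed(0)  # reproducible illustration


def bootstrap_mean_diff_ci(before, after, n_boot=10_000, alpha=0.05):
    """Mean paired difference (after - before) with a percentile bootstrap CI."""
    diffs = [a - b for a, b in zip(after, before)]
    boot_means = []
    for _ in range(n_boot):
        resample = [random.choice(diffs) for _ in diffs]
        boot_means.append(sum(resample) / len(resample))
    boot_means.sort()
    lo = boot_means[int((alpha / 2) * n_boot)]
    hi = boot_means[int((1 - alpha / 2) * n_boot)]
    return sum(diffs) / len(diffs), (lo, hi)


# Hypothetical skill-assessment scores for the same eight workers.
before = [72, 68, 81, 75, 90, 66, 79, 84]
after = [70, 69, 78, 71, 88, 60, 77, 80]

mean_diff, (lo, hi) = bootstrap_mean_diff_ci(before, after)
print(f"mean change: {mean_diff:+.2f} points, 95% CI [{lo:.2f}, {hi:.2f}]")
# A CI entirely below zero would indicate measurable skill decline; a CI
# straddling zero would count as evidence against the load-bearing premise.
```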

Figures

Figures reproduced from arXiv: 2605.07896 by Avijit Ghosh, Pranav Narayanan Venkit, Sanjana Gautam, Sourojit Ghosh.

Figure 1. Causal chain of harms arising from the AI chatbot paradigm. Chatbot design choices, including single authoritative … · view at source ↗
Figure 2. Harms arising from AI systems can be understood across three layers: … · view at source ↗
read the original abstract

The rapid convergence of artificial intelligence (AI) toward conversational chatbot interfaces marks a critical moment for the industry. This paper argues that the chatbot paradigm is not a neutral interface choice, but a dominant sociotechnical configuration whose widespread adoption reshapes social, economic, legal, and environmental systems. We examine how treating AI primarily as conversational assistants has extensive structural downsides. We show how chatbot-based systems often fail to adequately meet user needs, particularly in complex or high-stakes contexts, while projecting confidence and authority. We further analyze how the normalization of chatbot-mediated interaction alters patterns of work, learning, and decision-making, contributing to deskilling, homogenization of knowledge, and shifting expectations of expertise. Finally, we examine broader societal effects, including labor displacement, concentration of economic power, and increased environmental costs driven by sustained investment in large-scale chatbot infrastructures. While acknowledging legitimate benefits, we argue that the current trajectory of AI development reflects specific value choices that prioritize conversational generality over domain specificity, accountability, and long-term social sustainability. We conclude by outlining alternative directions for AI development and governance that move beyond one-size-fits-all chatbots, emphasizing pluralistic system design, task-specific tools, and institutional safeguards to mitigate social and economic harm.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it: the pith above is the substance; this is the friction.

Referee Report

3 major / 1 minor

Summary. The paper argues that the dominant chatbot interface for AI represents a non-neutral sociotechnical configuration whose adoption produces structural harms, including failure to meet needs in complex/high-stakes contexts, deskilling through altered work and learning patterns, knowledge homogenization, labor displacement, concentration of economic power, and elevated environmental costs from large-scale infrastructures. It attributes these outcomes to value choices favoring conversational generality over domain specificity and accountability, while outlining alternative pluralistic and task-specific design directions.

Significance. If the causal attributions hold, the analysis could usefully inform AI governance and design debates by linking interface choices to systemic social and environmental effects. The paper's broad scope connecting technical form to societal outcomes is a strength for a Computers and Society venue, though its interpretive character limits immediate applicability without additional grounding.

major comments (3)
  1. [analysis of normalization of chatbot-mediated interaction] The section analyzing normalization of chatbot-mediated interaction asserts that it contributes to deskilling, homogenization of knowledge, and shifting expertise expectations, yet supplies no mechanisms, before/after comparisons to non-chatbot AI deployments (e.g., domain-specific APIs), or cited empirical studies isolating the conversational form as the driver.
  2. [broader societal effects] The examination of broader societal effects states that labor displacement, power concentration, and environmental costs are driven by sustained investment in chatbot infrastructures, but provides no comparative evidence or references demonstrating that these outcomes are specific to the chatbot paradigm rather than general properties of high-capability AI or deployment incentives.
  3. [abstract and conclusion] The abstract and concluding argument treat the listed harms as direct consequences of prioritizing conversational generality, creating a circular structure in which the harms are invoked both to diagnose the paradigm and to justify alternatives, without independent falsifiable tests or counterexamples.
minor comments (1)
  1. [introduction] The term 'chatbot paradigm' is used throughout without an early, precise definition distinguishing the interface from underlying model capabilities or training regimes.

Simulated Author's Rebuttal

3 responses · 1 unresolved

We thank the referee for the detailed and constructive comments. These highlight opportunities to improve the grounding and clarity of our critical analysis. We address each major comment below and indicate the revisions we will undertake.

read point-by-point responses
  1. Referee: The section analyzing normalization of chatbot-mediated interaction asserts that it contributes to deskilling, homogenization of knowledge, and shifting expertise expectations, yet supplies no mechanisms, before/after comparisons to non-chatbot AI deployments (e.g., domain-specific APIs), or cited empirical studies isolating the conversational form as the driver.

    Authors: We agree that explicit mechanisms and additional references would strengthen the section. The current argument draws on established STS and labor studies literature regarding automation's effects on expertise. In revision, we will outline specific mechanisms (e.g., how fluid conversational interfaces reduce opportunities for deliberate practice and verification) and add citations to empirical work on AI in education and professional workflows. Direct before-and-after comparisons isolating conversational form remain limited in existing research due to the paradigm's rapid dominance; we will note this limitation and reference available contrasts with API-based or domain-specific tools. revision: partial

  2. Referee: The examination of broader societal effects states that labor displacement, power concentration, and environmental costs are driven by sustained investment in chatbot infrastructures, but provides no comparative evidence or references demonstrating that these outcomes are specific to the chatbot paradigm rather than general properties of high-capability AI or deployment incentives.

    Authors: We accept that clearer differentiation is needed. The paper's core claim is that chatbot generality requires large-scale, general-purpose infrastructure, unlike narrower systems. Revisions will incorporate references to environmental assessments of LLM training and economic analyses of AI labor impacts, plus examples of specialized AI deployments (e.g., in scientific computing) with distinct investment patterns and lower concentration effects. This will better isolate paradigm-specific drivers while acknowledging overlaps with broader AI trends. revision: partial

  3. Referee: The abstract and concluding argument treat the listed harms as direct consequences of prioritizing conversational generality, creating a circular structure in which the harms are invoked both to diagnose the paradigm and to justify alternatives, without independent falsifiable tests or counterexamples.

    Authors: The manuscript is an interpretive sociotechnical analysis rather than an empirical study, so it does not furnish falsifiable tests. To reduce circularity, we will revise the abstract and conclusion to separate diagnostic observations from prescriptive recommendations. We will also introduce counterexamples of effective non-chatbot systems, such as task-specific models in biology and diagnostics. This preserves the argumentative intent while improving logical flow. revision: partial

standing simulated objections (not resolved)
  • Independent falsifiable tests or new empirical evidence isolating the chatbot paradigm's causal role in the identified harms, as these would require original data collection beyond the scope of the current conceptual paper.

Circularity Check

0 steps flagged

No significant circularity in argumentative structure

full rationale

This is a sociotechnical critique paper without equations, fitted parameters, or predictive derivations. The central claims about harms from the chatbot paradigm are presented as analytical examinations of existing systems rather than reductions to self-definitions or inputs by construction. No self-citation load-bearing steps, uniqueness theorems, or ansatz smuggling appear in the abstract or described structure. The paper acknowledges benefits and proposes alternatives, indicating an open argumentative form grounded in external observation rather than tautological loops.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The central claim rests on domain assumptions about technology-society interactions and ad-hoc interpretations of harms; no free parameters or invented entities are introduced.

axioms (2)
  • domain assumption: The chatbot paradigm is a dominant sociotechnical configuration.
    Invoked in the opening sentences as the premise for all subsequent analysis.
  • ad hoc to paper: Widespread chatbot adoption necessarily produces deskilling, homogenization, labor displacement, power concentration, and environmental costs.
    Treated as established structural effects without independent derivation or cited evidence in the abstract.

pith-pipeline@v0.9.0 · 5520 in / 1390 out tokens · 41050 ms · 2026-05-11T03:18:18.837727+00:00 · methodology

