pith. machine review for the scientific record.

arxiv: 2604.21043 · v1 · submitted 2026-04-22 · 💻 cs.CY · cs.AI · cs.LG

Recognition: unknown

Strategic Polysemy in AI Discourse: A Philosophical Analysis of Language, Hype, and Power

Authors on Pith · no claims yet

Pith reviewed 2026-05-09 22:42 UTC · model grok-4.3

classification 💻 cs.CY · cs.AI · cs.LG
keywords strategic polysemy · glosslighting · AI discourse · AI hype cycles · anthropomorphism in AI · language and power · sociotechnical mechanisms · philosophy of AI

The pith

AI discourse employs terms with simultaneous technical and anthropomorphic meanings to sustain hype and shape institutional support.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper argues that common AI terms such as hallucination, chain-of-thought, introspection, alignment, and agent sustain multiple interpretations at once, pairing narrow technical definitions with broader everyday or human-like associations. This flexibility lets researchers and developers draw on the intuitive appeal of familiar language while retreating to precise meanings when challenged. The authors introduce glosslighting as the mechanism that enables this dual use, producing effects like accelerated hype cycles, easier mobilization of investment, and influence over public and policy views of AI. At the same time, the practice tends to reduce pressure for rigorous examination of what the systems actually do and whether their development raises ethical concerns. In this view, language functions as an active sociotechnical force in how AI is built, funded, and governed.

Core claim

Many terms in contemporary AI research and deployment sustain multiple interpretations simultaneously by combining narrow technical definitions with broader anthropomorphic or common-sense associations. This semantic flexibility, produced through the practice of glosslighting, allows actors to benefit from the persuasive force of familiar language while preserving plausible deniability via restricted technical definitions. The result is a set of significant institutional and discursive effects: contributions to AI hype cycles, facilitation of investment and institutional support, shaping of researcher, public, and policymaker perceptions, and deflection of epistemic and ethical scrutiny. Language thus functions as a sociotechnical mechanism shaping how AI is developed and governed.

What carries the argument

Glosslighting: the practice of using technically redefined terms to evoke intuitive associations while preserving plausible deniability through restricted technical definitions.

If this is right

  • Hype cycles in AI are partly maintained by the ability to evoke human-like capabilities through everyday language while falling back on narrow definitions.
  • Investment and institutional support flow more readily when terms carry both technical precision and intuitive appeal.
  • Public and policymaker perceptions of AI systems are shaped by the broader associations that remain active alongside technical ones.
  • Epistemic and ethical scrutiny is reduced because challenges can be met by retreating to the restricted technical meaning.
  • Language itself operates as a sociotechnical mechanism that influences the trajectory of AI development and its governance.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If the mechanism holds, interventions aimed at clearer terminology might alter the speed of AI adoption and the nature of public debate.
  • The same pattern could be examined in other rapidly developing technical domains where new tools borrow familiar words.
  • Quantifying glosslighting through discourse analysis of specific AI subfields would provide a direct test of its prevalence and effects.
  • Governance efforts might gain from requiring explicit separation of technical and colloquial senses in high-stakes communications.

Load-bearing premise

The polysemous usage is deployed strategically by actors to achieve institutional effects rather than arising from ordinary linguistic evolution or convenience in a fast-moving field.

What would settle it

A large-scale corpus study of AI papers and public statements showing consistent avoidance of broader associations by technical users, with no measurable correlation between such usage and funding levels, media attention, or policy influence.
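
As a rough illustration only, the sketch below shows one way such a corpus test could be operationalised: count occurrences of the flagged terms, classify each occurrence as technically or anthropomorphically framed by nearby cue words, and check whether the anthropomorphic-usage rate tracks an attention proxy. Every term list, cue word, and score here is a hypothetical placeholder, and keyword matching stands in for the careful discourse annotation a real study would require; this is not a method described in the paper.

```python
import re

# Hypothetical lists; a real study would derive these from annotation, not intuition.
FLAGGED_TERMS = {"hallucination", "alignment", "agent", "introspection"}
TECHNICAL_CUES = {"benchmark", "loss", "token", "dataset", "probability"}
ANTHRO_CUES = {"believes", "wants", "understands", "thinks", "aware"}

def anthropomorphic_rate(text):
    """Fraction of flagged-term mentions whose sentence carries an anthropomorphic cue
    and no technical cue (a crude proxy for the 'broad' reading)."""
    sentences = re.split(r"[.!?]", text.lower())
    mentions, anthro = 0, 0
    for sentence in sentences:
        words = set(re.findall(r"[a-z]+", sentence))
        if words & FLAGGED_TERMS:
            mentions += 1
            if (words & ANTHRO_CUES) and not (words & TECHNICAL_CUES):
                anthro += 1
    return anthro / mentions if mentions else 0.0

def pearson(xs, ys):
    """Plain Pearson correlation, no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Synthetic stand-ins for (document text, attention score); a real study would pair
# papers, press releases, and policy statements with media coverage or funding data.
corpus = [
    ("The agent understands the user and believes the plan will work.", 90.0),
    ("Hallucination rate was measured on the benchmark dataset per token.", 15.0),
    ("The model thinks and is aware during introspection.", 75.0),
    ("Alignment here denotes a loss term over the preference dataset.", 10.0),
]

rates = [anthropomorphic_rate(text) for text, _ in corpus]
attention = [score for _, score in corpus]
print("anthropomorphic usage rates:", rates)
print("correlation with attention proxy: %.2f" % pearson(rates, attention))
```

Under the paper's thesis one would expect a positive correlation on real data; the refuting pattern described above would instead show anthropomorphic-usage rates near zero in technical sources and no measurable correlation with attention, funding, or policy influence.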

read the original abstract

This paper examines the strategic use of language in contemporary artificial intelligence (AI) discourse, focusing on the widespread adoption of metaphorical or colloquial terms like "hallucination", "chain-of-thought", "introspection", "language model", "alignment", and "agent". We argue that many such terms exhibit strategic polysemy: they sustain multiple interpretations simultaneously, combining narrow technical definitions with broader anthropomorphic or common-sense associations. In contemporary AI research and deployment contexts, this semantic flexibility produces significant institutional and discursive effects, shaping how AI systems are understood by researchers, policymakers, funders, and the public. To analyse this phenomenon, we introduce the concept of glosslighting: the practice of using technically redefined terms to evoke intuitive -- often anthropomorphic or misleading -- associations while preserving plausible deniability through restricted technical definitions. Glosslighting enables actors to benefit from the persuasive force of familiar language while maintaining the ability to retreat to narrower definitions when challenged. We argue that this practice contributes to AI hype cycles, facilitates the mobilisation of investment and institutional support, and influences public and policy perceptions of AI systems, while often deflecting epistemic and ethical scrutiny. By examining the linguistic dynamics of glosslighting and strategic polysemy, the paper highlights how language itself functions as a sociotechnical mechanism shaping the development and governance of AI.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper claims that terms in AI discourse such as 'hallucination', 'chain-of-thought', 'introspection', 'alignment', and 'agent' exhibit strategic polysemy by sustaining both narrow technical definitions and broader anthropomorphic associations. It introduces the concept of 'glosslighting' as the practice of leveraging this flexibility for persuasive effects while retaining deniability through technical retreat, and argues that this mechanism drives AI hype cycles, mobilizes investment and institutional support, shapes public and policy perceptions, and deflects epistemic and ethical scrutiny.

Significance. If the interpretive framework is substantiated, the paper contributes a novel conceptual tool for analyzing language as a sociotechnical mechanism in AI governance and development. The introduction of 'glosslighting' offers a potentially useful lens for examining how semantic flexibility influences funding, policy, and public understanding, extending philosophical analysis of metaphor and framing into contemporary AI contexts.

major comments (2)
  1. [Abstract and §3] Abstract and §3 (definition of glosslighting): The concept is defined in terms of the persuasive and evasive effects it produces ('evoke intuitive associations while preserving plausible deniability'), which creates a risk of circularity; the central claim that glosslighting 'contributes to AI hype cycles' and 'facilitates the mobilisation of investment' then rests on the same effects used to define the term, without independent metrics or observable indicators to identify instances.
  2. [§4] §4 (institutional effects): The attribution of strategic intent and causal effects on hype, funding, and scrutiny deflection is presented as interpretive inference from term usage but lacks criteria or evidence to distinguish deliberate strategic deployment from standard semantic drift, borrowing, or communicative convenience in a rapidly evolving technical field; this distinction is load-bearing for the claim that polysemy is 'strategic' rather than emergent.
minor comments (2)
  1. [Introduction] The paper would benefit from a brief comparison of 'glosslighting' to related concepts such as framing, metaphor in science communication, or euphemism to clarify its novelty and avoid overlap.
  2. [§2] Examples of terms (e.g., 'hallucination', 'alignment') are listed but not systematically analyzed with usage data or timelines; adding even illustrative corpus references would strengthen the interpretive claims.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive and incisive comments on our manuscript. We address each major comment below, indicating where we will make revisions to strengthen the clarity and rigor of our arguments.

read point-by-point responses
  1. Referee: [Abstract and §3] Abstract and §3 (definition of glosslighting): The concept is defined in terms of the persuasive and evasive effects it produces ('evoke intuitive associations while preserving plausible deniability'), which creates a risk of circularity; the central claim that glosslighting 'contributes to AI hype cycles' and 'facilitates the mobilisation of investment' then rests on the same effects used to define the term, without independent metrics or observable indicators to identify instances.

    Authors: We agree that the initial formulation risks appearing circular if the effects are not clearly separated from the mechanism. The core definition of glosslighting identifies the linguistic practice of maintaining polysemous terms that permit both evocative and technical readings. The claims regarding contributions to hype cycles and investment mobilisation are supported by the paper's case analyses of specific term usages (e.g., in research papers, press releases, and policy documents), where the pattern of initial broad invocation followed by technical retreat is observable. We will revise the abstract and §3 to foreground this separation explicitly, adding a brief set of identification criteria based on recurring patterns of usage and subsequent clarification rather than introducing quantitative metrics, which would exceed the paper's philosophical scope. revision: partial

  2. Referee: [§4] §4 (institutional effects): The attribution of strategic intent and causal effects on hype, funding, and scrutiny deflection is presented as interpretive inference from term usage but lacks criteria or evidence to distinguish deliberate strategic deployment from standard semantic drift, borrowing, or communicative convenience in a rapidly evolving technical field; this distinction is load-bearing for the claim that polysemy is 'strategic' rather than emergent.

    Authors: The comment correctly identifies a limitation in the current presentation: our analysis relies on interpretive inference from discourse patterns rather than direct evidence of intent or controlled comparison with non-strategic cases. We do not claim to demonstrate individual actors' deliberate strategies or strict causation; instead, 'strategic' denotes the functional affordances of the polysemous structure in institutional contexts. We will revise §4 to include an explicit discussion of this distinction, acknowledging that semantic drift and convenience are plausible alternative explanations in some instances, while arguing that the consistent cross-actor deployment and the benefits accrued (e.g., in funding narratives) support treating the polysemy as strategically consequential even if not always intentionally engineered. This addition will preserve the paper's interpretive character without overstating empirical claims. revision: partial

Circularity Check

1 step flagged

Definition of glosslighting incorporates claimed persuasive effects and deniability mechanism, making attribution of hype and institutional impacts self-reinforcing by construction

specific steps
  1. self-definitional [Abstract]
    "To analyse this phenomenon, we introduce the concept of glosslighting: the practice of using technically redefined terms to evoke intuitive -- often anthropomorphic or misleading -- associations while preserving plausible deniability through restricted technical definitions. Glosslighting enables actors to benefit from the persuasive force of familiar language while maintaining the ability to retreat to narrower definitions when challenged. We argue that this practice contributes to AI hype cycles, facilitates the mobilisation of investment and institutional support, and influences public and "

    The definition already encodes the mechanism (evoking associations + preserving deniability) that produces the listed benefits and effects; the subsequent argument that the practice 'contributes to AI hype cycles' and 'facilitates the mobilisation of investment' therefore follows tautologically from the definition rather than from separate evidence or differentiation from non-strategic linguistic processes.

full rationale

The paper introduces glosslighting as an analytical tool but defines it explicitly in terms of the evasive and benefit-producing mechanisms that are then asserted to drive hype cycles and resource mobilisation. This structure means the central sociotechnical claim reduces directly to the definitional premises rather than being supported by independent criteria for identifying strategic intent or measuring effects. No equations, predictions, or self-citation chains are present; the circularity is limited to this conceptual step and does not extend to the entire analysis.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 1 invented entity

The analysis rests on interpretive assumptions about how language functions in technical communities and its downstream effects on institutions; the primary addition is the coined term glosslighting.

axioms (2)
  • domain assumption Technical terms in emerging fields can simultaneously carry precise operational definitions and broader colloquial or anthropomorphic associations
    Invoked throughout the abstract in the definition of strategic polysemy
  • domain assumption Such dual usage can be leveraged to influence perceptions, funding, and policy while preserving deniability
    Central premise underlying the claimed institutional effects of glosslighting
invented entities (1)
  • glosslighting · no independent evidence
    purpose: To label the practice of using polysemous terms to evoke intuitive associations while retaining technical retreat
    Newly introduced concept that organizes the paper's analysis of AI discourse

pith-pipeline@v0.9.0 · 5543 in / 1331 out tokens · 42656 ms · 2026-05-09T22:42:36.569230+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

125 extracted references · 23 canonical work pages · 6 internal anchors
