pith. machine review for the scientific record.

arxiv: 2605.05419 · v1 · submitted 2026-05-06 · 💻 cs.CY

Recognition: unknown

LLMorphism: When humans come to see themselves as language models

Authors on Pith · no claims yet

Pith reviewed 2026-05-08 15:34 UTC · model grok-4.3

classification 💻 cs.CY
keywords LLMorphism · human cognition · AI perception · analogical reasoning · metaphors of mind · societal impacts of AI · self-understanding

The pith

Conversational LLMs may lead people to mistakenly believe human cognition works like language model text generation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper defines LLMorphism as the biased view that human minds operate similarly to large language models. It claims that as AI systems produce fluent, human-like language, people may draw an incorrect reverse inference that their own thinking must follow the same pattern-matching process. This belief spreads because linguistic similarity makes the analogy psychologically available, even though output resemblance reveals nothing about internal cognitive structure. The author traces consequences across domains such as education, work, creativity, responsibility, and self-perception. The core point is that public discussion focuses too much on over-attributing mind to machines and too little on the risk of under-attributing it to humans.

Core claim

LLMorphism is the biased belief that human cognition works like a large language model. The rise of conversational LLMs may make this bias increasingly psychologically available through two mechanisms: analogical transfer, whereby features of LLMs are projected onto humans, and metaphorical availability, whereby LLM vocabulary becomes a culturally salient way to describe thought. The bias carries implications for work, education, responsibility, healthcare, communication, creativity, and human dignity.

What carries the argument

LLMorphism itself, the biased belief that human cognition works like a large language model. The concept carries the argument by licensing a reverse inference from similarity in linguistic output to similarity in cognitive architecture.

If this is right

  • In education, students and teachers may devalue human critical thinking and originality in favor of viewing learning as pattern completion.
  • Workplace evaluations could shift toward treating employee contributions as interchangeable outputs rather than expressions of distinct minds.
  • Legal and moral responsibility might be reassessed if actions are seen as automatic continuations rather than deliberate choices.
  • Healthcare practices could change if mental health is framed as optimizing predictive models instead of addressing subjective experience.
  • Creativity and communication may be perceived as less uniquely human, affecting how dignity and personhood are understood.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Designers of AI interfaces might deliberately highlight differences in internal processes to reduce the pull of the analogy.
  • Public education campaigns could emphasize empirical findings from cognitive science that separate human reasoning from next-token prediction.
  • The same mechanisms of analogical transfer might operate in reverse if future AI systems are presented with explicitly non-human cognitive architectures.

Load-bearing premise

That exposure to conversational LLMs will make the analogy between human thought and model output psychologically available and dominant enough for the listed societal effects to occur.

What would settle it

A controlled study measuring whether participants exposed to extended interaction with a conversational LLM show increased endorsement of statements like 'human thinking is best described as predicting the next word' compared with control participants.
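To make the proposed test concrete, here is a minimal analysis sketch in Python, assuming hypothetical data: the group sizes, rating scale, effect size, and the choice of Welch's t-test are illustrative assumptions, not anything the paper or this review specifies.

```python
# Minimal sketch: compare endorsement of LLM-like descriptions of human
# thought between an LLM-exposure group and a control group.
# All numbers below are placeholder assumptions, not real data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 1-7 Likert ratings of statements such as
# "human thinking is best described as predicting the next word".
control = np.clip(rng.normal(loc=3.2, scale=1.1, size=120), 1, 7)
exposed = np.clip(rng.normal(loc=3.8, scale=1.1, size=120), 1, 7)

# Welch's t-test: do exposed participants endorse the analogy more strongly?
t_stat, p_value = stats.ttest_ind(exposed, control, equal_var=False)

# Cohen's d as a rough effect-size estimate.
pooled_sd = np.sqrt((control.var(ddof=1) + exposed.var(ddof=1)) / 2)
cohens_d = (exposed.mean() - control.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```

A real study would add pre-registration, a pre-exposure baseline, and validated scale items; the sketch only shows the shape of the comparison.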

read the original abstract

LLMorphism is the biased belief that human cognition works like a large language model. I argue that the rise of conversational LLMs may make this bias increasingly psychologically available. When artificial systems produce human-like language, people may draw a reverse inference: if LLMs can speak like humans, perhaps humans think like LLMs. This inference is biased because similarity at the level of linguistic output does not imply similarity in cognitive architecture. Yet, LLMorphism may spread through two mechanisms: analogical transfer, whereby features of LLMs are projected onto humans, and metaphorical availability, whereby LLM vocabulary becomes a culturally salient vocabulary for describing thought. I distinguish LLMorphism from mechanomorphism, anthropomorphism, computationalism, dehumanization, objectification, and predictive-processing theories of mind. I outline its implications for work, education, responsibility, healthcare, communication, creativity, and human dignity, while also discussing boundary conditions and forms of resistance. I conclude that the public debate may be missing half of the problem: the issue is not only whether we are attributing too much mind to machines, but also whether we are beginning to attribute too little mind to humans.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated author's rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

0 major / 3 minor

Summary. The paper introduces LLMorphism as the biased belief that human cognition functions like a large language model. It argues that conversational LLMs may render this bias psychologically available via analogical transfer (projecting LLM features onto humans) and metaphorical availability (adopting LLM vocabulary for describing thought). The manuscript distinguishes LLMorphism from mechanomorphism, anthropomorphism, computationalism, dehumanization, objectification, and predictive-processing theories; outlines implications for domains including work, education, responsibility, healthcare, communication, creativity, and human dignity; acknowledges boundary conditions and resistance; and concludes that AI discourse overlooks the risk of under-attributing mind to humans.

Significance. If the proposed mechanisms of psychological availability prove operative, the framework could usefully expand AI ethics and human-AI interaction research by identifying a complementary bias to anthropomorphism. The manuscript earns credit for its explicit definitional distinctions, acknowledgment of boundary conditions, and framing of the central claim as a possibility rather than an empirical assertion, providing a coherent conceptual scaffold for subsequent falsifiable work.

minor comments (3)
  1. [Mechanisms of spread] The mechanisms section would benefit from a brief note on how analogical transfer is distinguished from standard analogical reasoning in cognitive psychology (e.g., citing Gentner or Holyoak), to sharpen the novelty claim without altering the argument.
  2. [Implications] In the implications for healthcare, the discussion of responsibility attribution could reference existing empirical work on automation bias in clinical decision support to illustrate potential interactions, increasing practical grounding.
  3. [Conclusion] The conclusion's phrasing that the debate 'may be missing half of the problem' is clear but could be tempered with an explicit statement that the paper offers no prevalence estimate, preserving the speculative tone.

Simulated Author's Rebuttal

0 responses · 0 unresolved

We thank the referee for their positive and constructive review. We appreciate the recognition of the manuscript's definitional distinctions, boundary conditions, and framing of LLMorphism as a conceptual possibility rather than an empirical claim. The recommendation for minor revision is noted, and we will use the opportunity to refine clarity and presentation where appropriate.

Circularity Check

0 steps flagged

No significant circularity; conceptual proposal is self-contained

full rationale

The paper introduces LLMorphism as a new conceptual bias, distinguishes it explicitly from mechanomorphism, anthropomorphism, computationalism, dehumanization, objectification, and predictive-processing theories, and reasons about its spread via analogical transfer and metaphorical availability plus listed societal implications. All steps rely on definitional distinctions and logical inference about psychological availability rather than any equation, fitted parameter, or self-citation that reduces the target claim to its own inputs. Boundary conditions and forms of resistance are acknowledged, keeping the argument non-reductive. No load-bearing derivation collapses by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 1 invented entity

The argument rests on the premise that output similarity does not entail architectural similarity and on the psychological claim that LLM vocabulary will become culturally available for self-description.

axioms (1)
  • domain assumption · Similarity at the level of linguistic output does not imply similarity in cognitive architecture
    This premise is invoked to label the reverse inference as biased.
invented entities (1)
  • LLMorphism · no independent evidence
    purpose: To name and organize the described bias and its mechanisms
    A coined term introduced to frame the phenomenon; no independent empirical handle is supplied.

pith-pipeline@v0.9.0 · 5491 in / 1384 out tokens · 77338 ms · 2026-05-08T15:34:07.680387+00:00 · methodology

