pith. machine review for the scientific record.

arxiv: 2604.16415 · v1 · submitted 2026-04-02 · 💻 cs.CY

Recognition: no theorem link

Using Large Language Models for Emotional Support of Bulgarian Users: A Survey

Authors on Pith no claims yet

Pith reviewed 2026-05-13 21:29 UTC · model grok-4.3

classification 💻 cs.CY
keywords large language models · emotional support · chatbots · survey · Bulgarian users · ChatGPT · mental health · AI applications

The pith

Half of surveyed Bulgarian students use chatbots like ChatGPT for emotional support.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper reports results from an anonymous survey of 100 Bulgarian students on their attitudes toward large language models for emotional support. Roughly half the respondents already use such chatbots, with ChatGPT as the clear leader, mainly to manage stress from relationships and school or work. Most users say the technology helps, yet non-users stay doubtful and widespread worries about data security, reliability, and overly positive responses persist.

Core claim

The survey of 100 Bulgarian high school, university, and doctoral students finds that about half use LLMs for emotional support, with ChatGPT dominant, seeking help primarily for stress in interpersonal relationships and work or study settings. Among users, 71 percent view the approach as effective, while non-users remain skeptical, and concerns about data security, technology reliability, and excessive affirmation are common across the sample.

What carries the argument

Anonymous online survey of 100 Bulgarian students measuring self-reported usage rates, preferred platforms, reasons for seeking support, perceived effectiveness, and stated concerns around LLM-based emotional support.

If this is right

  • ChatGPT serves as the primary platform for emotional support among users in this group.
  • Support is sought most often for stress arising from interpersonal relationships and academic or professional demands.
  • Seventy-one percent of users rate LLM-based emotional support as effective.
  • Non-users express ongoing skepticism toward the technology.
  • Data security, reliability, and the risk of excessive affirmation remain shared barriers to wider adoption.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Similar surveys in other countries could reveal whether cultural factors shape acceptance of AI for emotional support.
  • Targeted improvements in privacy protections might convert some skeptics into users.
  • Longer-term studies could track whether repeated use changes how people handle real-life emotional challenges.
  • Comparing student responses with those of working adults would test how life stage affects reliance on chatbots.

Load-bearing premise

The self-selected sample of 100 Bulgarian students accurately reflects attitudes and behaviors in the wider Bulgarian population.

What would settle it

A larger random-sample survey of Bulgarian adults or a broader age range that finds substantially lower usage rates or different levels of skepticism would undermine the reported patterns.
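The statistical stakes of that test can be made concrete. As a rough editorial sketch (the Wilson interval calculation is our illustration, not anything computed in the paper; the only inputs taken from it are n = 100, the ~50% usage rate, and the 71% effectiveness figure among the roughly 50 users), the sampling uncertainty around the headline proportions is wide:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# ~50% usage among all 100 respondents
lo, hi = wilson_ci(50, 100)
print(f"usage: {lo:.2f}-{hi:.2f}")          # ≈ 0.40-0.60

# 71% perceived effectiveness among ~50 users (36 of 50)
lo, hi = wilson_ci(36, 50)
print(f"effectiveness: {lo:.2f}-{hi:.2f}")  # wider still, given the smaller n
```

On this sketch, the 50% usage estimate carries a roughly ±10-point interval at n = 100, and the 71% effectiveness figure, resting on only about 50 users, is less precise again; a replication aiming to confirm or undermine these patterns would need several hundred random-sampled respondents to narrow them meaningfully.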

read the original abstract

The use of large language models (LLMs) for psychological and emotional support (ES) has rapidly evolved, becoming the most widely used application of generative artificial intelligence among consumers by 2025. This paper presents the results of an anonymous survey of 100 Bulgarian users, primarily high school, university, and doctoral students, to explore their attitudes toward and usage of chatbots for emotional support. Findings indicate that approximately one-half of the surveyed population utilizes chatbots for ES, with ChatGPT being the most dominant platform. Users primarily seek support for coping with stress in interpersonal relationships and work or study-related environments. While 71% of users perceive the technology as effective, non-users remain sceptical. Despite the growing adoption, significant concerns persist regarding data security, technology reliability, and the tendency of chatbots to provide excessive affirmation.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 2 minor

Summary. The manuscript reports results from an anonymous survey of 100 Bulgarian users, primarily high-school, university, and doctoral students, exploring attitudes toward and usage of chatbots and large language models for emotional support (ES). It finds that approximately half of respondents use such tools, with ChatGPT dominant; primary motivations are coping with stress in interpersonal relationships and work/study environments; 71% of users perceive the technology as effective; and non-users remain skeptical, with widespread concerns about data security, reliability, and excessive affirmation.

Significance. If the survey methodology proves sound, the work supplies useful descriptive data on LLM adoption for emotional support in a Bulgarian student population, a region underrepresented in existing AI-mental-health literature. The direct reporting of raw survey responses (no fitted models or derivations) is a methodological strength that avoids circularity risks. The findings on usage rates, platform preferences, and specific concerns could inform targeted follow-up studies, though the narrow demographic limits immediate generalizability.

major comments (1)
  1. [Methods / Survey Design section] The survey methodology is described only at a high level (anonymous survey of 100 students) with no information on sampling procedure, recruitment channels, question wording, response rate, or any checks for non-response or selection bias. This directly undermines the central claims in the abstract and results (≈50% usage, ChatGPT dominance, relationship/work stress as top reasons, 71% perceived effectiveness), as all statistics derive from a self-selected convenience sample without evidence of representativeness to broader Bulgarian users.
minor comments (2)
  1. [Abstract] The abstract uses the vague phrase 'approximately one-half'; the results section should report the exact count or percentage for transparency.
  2. [Introduction] Define the abbreviation 'ES' on first use in the main text even if expanded in the abstract.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for the constructive review and for highlighting the need for greater methodological transparency. We address the single major comment below and will revise the manuscript accordingly.

read point-by-point responses
  1. Referee: The survey methodology is described only at a high level (anonymous survey of 100 students) with no information on sampling procedure, recruitment channels, question wording, response rate, or any checks for non-response or selection bias. This directly undermines the central claims in the abstract and results (≈50% usage, ChatGPT dominance, relationship/work stress as top reasons, 71% perceived effectiveness), as all statistics derive from a self-selected convenience sample without evidence of representativeness to broader Bulgarian users.

    Authors: We agree that the current Methods section provides insufficient detail on survey design and that this limits interpretation of the descriptive statistics. In the revision we will expand the section to describe the questionnaire instrument (including the exact items on usage frequency, preferred platforms, primary stressors, and perceived effectiveness), the recruitment approach (distribution via Bulgarian university mailing lists, student forums, and targeted social-media groups), and the fact that participation was voluntary and anonymous. We will also add an explicit limitations paragraph stating that the sample is a convenience sample of primarily students, that no response rate could be calculated, and that no formal checks for selection bias were performed. These changes will frame the reported percentages as descriptive of the sampled respondents rather than claims of representativeness for all Bulgarian users. revision: yes

Circularity Check

0 steps flagged

No circularity: direct survey reporting with no derivations or self-referential steps

full rationale

The paper is a straightforward survey report presenting descriptive statistics from 100 anonymous respondents. All findings (usage rates, platform preferences, reasons for seeking support, perceived effectiveness) are stated as direct aggregates of survey answers with no equations, fitted parameters, predictive models, or derivations. No self-citations are invoked to justify core claims, and the methodology does not reduce any result to its own inputs by construction. The self-selected sample is a standard limitation of convenience sampling but does not create circularity in the reported observations.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the assumption that anonymous self-reports from a convenience sample of students accurately reflect real usage and perceptions without substantial social-desirability or recall bias.

axioms (1)
  • domain assumption Survey respondents provide honest and accurate self-reports of their chatbot usage and perceptions.
    This assumption is required to interpret the reported 50% usage rate and 71% effectiveness rating as factual rather than biased estimates.

pith-pipeline@v0.9.0 · 5431 in / 1287 out tokens · 48900 ms · 2026-05-13T21:29:59.389477+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

13 extracted references · 13 canonical work pages · 1 internal anchor

  1. [1]

    Zao-Sanders

    M. Zao-Sanders. How People Are Really Using Gen AI in 2025. Apr. 9, 2025. url: https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025 (visited on 05/03/2026)

  2. [2]

    B. R. Burleson. Emotional support skills. Lawrence Erlbaum Associates Publishers, 2003

  3. [3]

    Deep Learning Mental Health Dialogue System

    L. Brocki, G. C. Dyer, A. Gladka, and N. C. Chung. “Deep Learning Mental Health Dialogue System.” In: 2023 IEEE International Conference on Big Data and Smart Computing (BigComp). IEEE. 2023, pp. 395–398

  4. [4]

    Mental Health Atlas 2020

    World Health Organization. Mental Health Atlas 2020. Geneva, October 8, 2021. url: https://www.who.int/publications/i/item/9789240036703 (visited on 05/03/2026)

  5. [5]

    Global prevalence and burden of depressive and anxiety disorders in 204 countries and territories in 2020 due to the COVID-19 pandemic

    D. F. Santomauro, A. M. M. Herrera, J. Shadid, P. Zheng, C. Ashbaugh, D. M. Pigott, C. Abbafati, C. Adolph, J. O. Amlag, A. Y. Aravkin, et al. “Global prevalence and burden of depressive and anxiety disorders in 204 countries and territories in 2020 due to the COVID-19 pandemic.” In: The Lancet 398.10312 (2021), pp. 1700–1712

  6. [6]

    It’s only a computer: Virtual humans increase willingness to disclose

    G. M. Lucas, J. Gratch, A. King, and L.-P. Morency. “It’s only a computer: Virtual humans increase willingness to disclose.” In: Computers in Human Behavior 37 (2014), pp. 94–100

  7. [7]

    Recipients’ perceptions of support attempts and attributions for support attempts that fail

    D. R. Lehman and K. J. Hemphill. “Recipients’ perceptions of support attempts and attributions for support attempts that fail.” In: Journal of Social and Personal Relationships 7.4 (1990), pp. 563–574

  8. [8]

    Guardians of Trust: Risks and Opportunities for LLMs in Mental Health

    M. Baidal, E. Derner, and N. Oliver. “Guardians of Trust: Risks and Opportunities for LLMs in Mental Health.” In: Proceedings of the Fourth Workshop on NLP for Positive Impact (NLP4PI). Vienna, Austria: Association for Computational Linguistics, July 2025, pp. 11–22. isbn: 978-1-959429-19-7. url: https://aclanthology.org/2025.nlp4pi-1.2/

  9. [9]

    Opportunities and risks of large language models in psychiatry

    N. Obradovich, S. S. Khalsa, W. U. Khan, J. Suh, R. H. Perlis, O. Ajilore, and M. P. Paulus. “Opportunities and risks of large language models in psychiatry.” In: NPP—Digital Psychiatry and Neuroscience 2.1 (2024), p. 8

  10. [10]

    Artificial intelligence in mental healthcare: transformative potential vs. the necessity of human interaction

    A. Babu and A. P. Joseph. “Artificial intelligence in mental healthcare: transformative potential vs. the necessity of human interaction.” In: Frontiers in Psychology 15 (2024), p. 1378904

  11. [11]

    ZILLENIAL MICROGENERATION: HYBRID TRAITS, DIGITAL BEHAVIOR, AND GENERATIONAL BOUNDARIES

    M. Carganilla and J. R. Pelila. “Zillenial Microgeneration: Hybrid Traits, Digital Behavior, and Generational Boundaries.” In: Lingue: Jurnal Bahasa, Budaya, dan Sastra 7.2 (2025), pp. 218–227

  12. [12]

    ELEPHANT: Measuring and understanding social sycophancy in LLMs

    M. Cheng, et al. “Social Sycophancy: A Broader Understanding of LLM Sycophancy.” arXiv preprint arXiv:2505.13995 (2025)

  13. [13]

    On transference and counter-transference

    A. Balint and M. Balint. “On transference and counter-transference.” In: The International Journal of Psycho-Analysis 20 (1939), p. 223