pith. machine review for the scientific record. sign in

arxiv: 2604.25525 · v1 · submitted 2026-04-28 · 💻 cs.CL · cs.HC

Recognition: unknown

From Chatbots to Confidants: A Cross-Cultural Study of LLM Adoption for Emotional Support

Authors on Pith no claims yet

Pith reviewed 2026-05-07 16:11 UTC · model grok-4.3

classification 💻 cs.CL cs.HC
keywords large language modelsemotional supportcross-cultural studyadoption ratesdemographic predictorsuser perceptionsmental healthsociotechnical systems
0
0 comments X

The pith

People use large language models for emotional support at rates that differ sharply by country and personal background, with higher socioeconomic status as the main driver of trust and use.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Large language models are becoming everyday sources of emotional support for many users seeking help with personal issues. A study surveying thousands of people in seven countries found adoption rates for this purpose ranging from one-fifth to nearly three-fifths of respondents. Positive views and usage are linked to being between 25 and 44 years old, religious, married, and especially having higher income and education levels. English-speaking countries show notably higher acceptance compared to other European nations. The authors point to the need for thoughtful approaches to designing and overseeing these systems to support safe access across different groups.

Core claim

The study establishes that LLM adoption for emotional support is not uniform but varies by nation and demographics. Using statistical models on data from 4641 participants, it identifies socioeconomic status as the strongest predictor of trust, actual use, and perceived benefits, with other factors including age, religiosity, and marital status also playing roles. Perceptions are more favorable in English-speaking countries than in continental Europe. Examination of user prompts shows primary needs center on loneliness, stress, relationship conflicts, and mental health. Overall, the findings indicate that emotional support from LLMs operates within a sociotechnical context shaped by culture,

What carries the argument

Mixed models that separate cultural effects from demographic composition, applied to a cross-country survey on perceptions of trust, usage, and benefits plus analysis of real user prompts.

If this is right

  • Higher socioeconomic status is the strongest predictor of positive perceptions including trust, usage, and perceived benefits of LLMs for emotional support.
  • English-speaking countries show consistently more positive perceptions than Continental European countries even after separating cultural from demographic effects.
  • Users mainly seek LLM help for loneliness, stress, relationship conflicts, and mental health struggles.
  • LLM emotional support systems should be developed, deployed, and governed with attention to cultural and demographic differences to ensure safe and informed access.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If socioeconomic status drives adoption, emotional support via AI may reach some groups more readily than others, potentially affecting equity in mental health resources.
  • Cultural differences in acceptance may require tailored design or regulation of emotional AI tools rather than one-size-fits-all approaches.
  • The gap between self-reported use and actual behavior could be examined by combining surveys with platform data on prompt types.

Load-bearing premise

The 4641 survey participants represent the general populations of their countries and that their self-reported perceptions and usage of LLMs for emotional support accurately reflect real behaviors without significant bias.

What would settle it

A study that tracks actual LLM query logs from users in the same seven countries to measure the real frequency of emotional support requests, or a new survey using stricter probability sampling methods, would test whether the reported adoption rates and demographic predictors hold up.

Figures

Figures reproduced from arXiv: 2604.25525 by Amanda Cercas Curry, Flor Miriam Plaza-del-Arco, Mert Yazan, Natalia Amat-Lefort.

Figure 1
Figure 1. Figure 1: Global and Demographic Landscape in LLM Adoption and Perceptions for Emotional Support (N=4,641). SES: Socio-economic status. tems as potential companions or sources of inter￾personal support (e.g. Andersson, 2025; Wu et al., 2025; Savoldi et al., 2025). In particular, conver￾sational interfaces make LLMs accessible for emo￾tionally oriented interactions such as seeking reas￾surance, discussing personal co… view at source ↗
Figure 2
Figure 2. Figure 2: Cross-country comparison of AI chatbot use view at source ↗
Figure 3
Figure 3. Figure 3: Percentage-point differences in demographic view at source ↗
Figure 4
Figure 4. Figure 4: Mean scores per construct: Perceived Benefits ( view at source ↗
Figure 5
Figure 5. Figure 5: Mixed-model estimates of demographic factors (fixed effects) that are significant in at least 2 of the 3 view at source ↗
Figure 6
Figure 6. Figure 6: Country intercepts (random effects) between the three models (BENF, USE, TRU). Being from the UK and view at source ↗
Figure 7
Figure 7. Figure 7: Overview of user interaction patterns with AI agents for mental wellbeing and emotional support: view at source ↗
Figure 8
Figure 8. Figure 8: Top-rated Perceived Benefits of AI conversational agents for mental wellbeing and emotional support. view at source ↗
Figure 9
Figure 9. Figure 9: Perceived Benefits across countries. Average scores (Scale: 1-5). Countries ordered by overall enthusiasm view at source ↗
Figure 10
Figure 10. Figure 10: Trust and privacy perceptions by country. Items sorted by global average (ascending). Scores represent view at source ↗
Figure 11
Figure 11. Figure 11: Pairwise country comparisons showing the view at source ↗
Figure 12
Figure 12. Figure 12: Older generations show lower Trust, Per view at source ↗
Figure 13
Figure 13. Figure 13: Higher SES drives Trust, Perceived Benefits, view at source ↗
Figure 14
Figure 14. Figure 14: Topic distribution of shared prompts related to mental wellbeing across countries. view at source ↗
read the original abstract

Large Language Models (LLMs) are increasingly used not only for instrumental tasks, but as always-available and non-judgmental confidants for emotional support. Yet what drives adoption and how users perceive emotional support interactions across countries remains unknown. To address this gap, we present the first large-scale cross-cultural study of LLM use for emotional support, surveying 4,641 participants across seven countries (USA, UK, Germany, France, Spain, Italy, and The Netherlands). Our results show that adoption rates vary dramatically across countries (from 20% to 59%). Using mixed models that separate cultural effects from demographic composition, we find that: Being aged 25-44, religious, married, and of higher socioeconomic status are predictors of positive perceptions (trust, usage, perceived benefits), with socioeconomic status being the strongest. English-speaking countries consistently show more positive perceptions than Continental European countries. We further collect a corpus of 731 real multilingual prompts from user interactions, showing that users mainly seek help for loneliness, stress, relationship conflicts, and mental health struggles. Our findings reveal that LLM emotional support use is shaped by a complex sociotechnical landscape and call for a broader research agenda examining how these systems can be developed, deployed, and governed to ensure safe and informed access.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript presents the first large-scale cross-cultural survey of LLM adoption for emotional support, collecting responses from 4,641 participants across seven countries (USA, UK, Germany, France, Spain, Italy, Netherlands). It reports adoption rates ranging from 20% to 59%, uses mixed models to identify demographic predictors of positive perceptions (age 25-44, religious, married, higher socioeconomic status, with SES strongest), finds more positive views in English-speaking vs. Continental European countries, and analyzes a separate corpus of 731 real multilingual user prompts showing common themes of loneliness, stress, relationship conflicts, and mental health issues.

Significance. If the sampling is representative and self-reports track actual behavior, the study provides important empirical data on sociodemographic and cultural factors shaping LLM use for emotional support. The scale, cross-country design, mixed-model approach to separate culture from demographics, and the real prompt corpus are strengths that could inform research on AI in mental health contexts and policy on safe deployment.

major comments (2)
  1. [Methods] Methods section: The description of the survey provides no details on sampling method (e.g., online panel vs. probability sampling), response rates, post-stratification weights, or comparison to national census margins. This is load-bearing for the headline adoption rates (20-59%) and the demographic/country predictors, as unrepresentative samples or unweighted data could inflate or distort the reported effects.
  2. [Results] Results and Discussion: The claims about predictors and cross-country differences rest entirely on self-reported perceptions and usage without any validation against behavioral logs, objective usage data, or population benchmarks. This raises concerns about social desirability bias, recall error, and whether the mixed models truly isolate cultural effects.
minor comments (2)
  1. [Abstract] Abstract: The claim that the study is 'the first large-scale cross-cultural study' should be supported with a brief literature review reference or caveat if prior smaller studies exist.
  2. [Methods] The prompt corpus analysis is presented separately from the survey; clarify whether any linkage or joint analysis was performed to strengthen the overall narrative.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their detailed and constructive comments, which highlight important aspects of methodological transparency and the interpretation of self-reported data. We address each point below and commit to revisions that will strengthen the manuscript without altering its core contributions.

read point-by-point responses
  1. Referee: [Methods] Methods section: The description of the survey provides no details on sampling method (e.g., online panel vs. probability sampling), response rates, post-stratification weights, or comparison to national census margins. This is load-bearing for the headline adoption rates (20-59%) and the demographic/country predictors, as unrepresentative samples or unweighted data could inflate or distort the reported effects.

    Authors: We agree that these details are essential for evaluating sample representativeness and the robustness of our headline findings. The survey was fielded via a professional online panel provider employing quota sampling on age, gender, and education to approximate national population margins in each of the seven countries. We will substantially expand the Methods section to report the panel provider, exact quota targets, achieved sample sizes per country, response rates, any post-stratification weighting applied, and explicit comparisons to the most recent national census or Eurostat data. These additions will allow readers to assess potential biases in the reported adoption rates and mixed-model coefficients. revision: yes

  2. Referee: [Results] Results and Discussion: The claims about predictors and cross-country differences rest entirely on self-reported perceptions and usage without any validation against behavioral logs, objective usage data, or population benchmarks. This raises concerns about social desirability bias, recall error, and whether the mixed models truly isolate cultural effects.

    Authors: We acknowledge that the study relies on self-reported measures, which are standard for large-scale perceptual and adoption research but cannot be directly validated against platform logs in this design. The mixed-effects models do include random intercepts for country and fixed effects for individual demographics, which helps separate compositional from contextual effects; however, we agree that explicit discussion of remaining limitations is needed. In revision we will add a dedicated Limitations subsection that (a) discusses social desirability and recall biases, (b) notes the absence of objective usage or benchmark data, and (c) outlines how future studies could combine surveys with behavioral traces. This contextualization will not change the reported coefficients or country differences but will better frame their interpretation. revision: partial

Circularity Check

0 steps flagged

No circularity: purely empirical survey with no derivations or self-referential reductions

full rationale

The paper reports results from a cross-national survey of 4,641 participants and a separate corpus of 731 prompts. All claims rest on statistical associations (mixed models separating cultural from demographic effects) computed directly from the collected responses. There are no equations, first-principles derivations, fitted parameters renamed as predictions, uniqueness theorems, or ansatzes. No load-bearing self-citations or renamings of known results occur. The analysis is therefore self-contained; any limitations concern sampling validity or measurement error rather than circular reduction of outputs to inputs.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

Based solely on the abstract; the study rests on standard social-science survey assumptions rather than new mathematical constructs or invented entities.

axioms (2)
  • domain assumption Survey participants provide honest and accurate self-reports of LLM usage and perceptions.
    Invoked implicitly to treat reported trust, usage, and benefits as valid measures.
  • domain assumption Mixed models successfully isolate cultural effects from demographic composition.
    Stated in the abstract as the basis for attributing country differences to culture rather than sample makeup.

pith-pipeline@v0.9.0 · 5543 in / 1381 out tokens · 69793 ms · 2026-05-07T16:11:07.232305+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

44 extracted references · 23 canonical work pages · 2 internal anchors

  1. [1]

    Nancy E Adler, Elissa S Epel, Grace Castellazzo, and Jeannette R Ickovics. 2000. Relationship of subjective and objective social status with psychological and physiological functioning: Preliminary data in healthy, white women. Health psychology, 19(6):586

  2. [2]

    Marta Andersson. 2025. Companionship in code: Ai’s role in the future of human connection. Humanities and Social Sciences Communications, 12(1):1--7

  3. [3]

    Autoriteit Persoonsgegevens . 2025. https://www.autoriteitpersoonsgegevens.nl/en/documents/ai-algorithmic-risks-report-netherlands-arr-february-2025 AI & Algorithmic Risks Report Netherlands (ARR) . Technical report, Dutch Data Protection Authority

  4. [4]

    Elisa Bassignana, Amanda Cercas Curry, and Dirk Hovy. 2025. https://doi.org/10.18653/v1/2025.acl-long.914 The AI gap: How socioeconomic status affects language technology interactions . In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 18647--18664, Vienna, Austria. Association for Co...

  5. [5]

    Gillian Cameron, David Cameron, Gavin Megaw, Raymond Bond, Maurice Mulvenna, Siobhan O’Neill, Cherie Armour, and Michael McTear. 2018. Assessing the usability of a chatbot for mental health care. In International conference on internet science, pages 121--132. Springer

  6. [6]

    Deming, Zoe Hitzig, Christopher Ong, Carl Yan Shan, and Kevin Wadman

    Aaron Chatterji, Thomas Cunningham, David J. Deming, Zoe Hitzig, Christopher Ong, Carl Yan Shan, and Kevin Wadman. 2025. https://doi.org/10.3386/w34255 How people use chatgpt . Working Paper 34255, National Bureau of Economic Research

  7. [7]

    Beenish Moalla Chaudhry and Hamid Reza Debi. 2024. https://doi.org/10.21037/mhealth-23-55 User perceptions and experiences of an AI -driven conversational agent for mental health support . mHealth, 10:22

  8. [8]

    Jianlyu Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu Lian, and Zheng Liu. 2024. https://doi.org/10.18653/v1/2024.findings-acl.137 M 3-embedding: Multi-linguality, multi-functionality, multi-granularity text embeddings through self-knowledge distillation . In Findings of the Association for Computational Linguistics: ACL 2024, pages 2318--2335, Bangkok,...

  9. [9]

    Myra Cheng, Sunny Yu, Cinoo Lee, Pranav Khadpe, Lujain Ibrahim, and Dan Jurafsky. 2025. Elephant: Measuring and understanding social sycophancy in llms. arXiv preprint arXiv:2505.13995

  10. [10]

    Minh Duc Chu, Patrick Gerard, Kshitij Pawar, Charles Bickham, and Kristina Lerman. 2025. Illusions of intimacy: Emotional attachment and emerging psychological risks in human-ai relationships. arXiv preprint arXiv:2505.11649

  11. [11]

    Wang, Chinmay Kulkarni, Lauren Wilcox, Michael Terry, and Michael Madaio

    Andrea Cuadra, Maria Wang, Lynn Andrea Stein, Malte F. Jung, Nicola Dell, Deborah Estrin, and James A. Landay. 2024. https://doi.org/10.1145/3613904.3642336 The illusion of empathy? notes on displays of emotion in human-computer interaction . In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, CHI '24, New York, NY, USA. Assoc...

  12. [12]

    Alba Curry and Amanda Cercas Curry. 2023. https://doi.org/10.18653/v1/2023.findings-acl.515 Computer says ``no'': The case against empathetic conversational AI . In Findings of the Association for Computational Linguistics: ACL 2023, pages 8123--8130, Toronto, Canada. Association for Computational Linguistics

  13. [13]

    Fred D. Davis. 1989. https://doi.org/10.2307/249008 Perceived usefulness, perceived ease of use, and user acceptance of information technology . MIS quarterly, 13(3):319--340

  14. [14]

    Julian De Freitas, Stuti Agarwal, Bernd Schmitt, and Nick Haslam. 2023. Psychological factors underlying attitudes toward ai tools. Nature Human Behaviour, 7(11):1845--1854

  15. [15]

    Malena Digiuni, Fergal W Jones, and Paul M Camic. 2013. Perceived social stigma and attitudes towards seeking therapy in training: A cross-national study. Psychotherapy, 50(2):213

  16. [16]

    Andrea Fiorillo. 2025. A roadmap for better and personalized mental health care in europe: the priorities of the european psychiatric association. European Psychiatry, 68(1):e60

  17. [17]

    FirstPageSage . 2026. https://firstpagesage.com/seo-blog/chatgpt-usage-statistics/ Chatgpt usage statistics: February 2026 . Accessed: 2026-02-26

  18. [18]

    Iason Gabriel, Arianna Manzini, Geoff Keeling, Lisa Anne Hendricks, Verena Rieser, Hasan Iqbal, Nenad Toma s ev, Ira Ktena, Zachary Kenton, Mikel Rodriguez, and 1 others. 2024. The ethics of advanced ai assistants. arXiv preprint arXiv:2404.16244

  19. [19]

    Yanzhu Guo, Simone Conia, Zelin Zhou, Min Li, Saloni Potdar, and Henry Xiao. 2025. Do large language models have an english accent? evaluating and improving the naturalness of multilingual llms. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3823--3838

  20. [20]

    it listens better than my therapist

    Anna-Carolina Haensch. 2025. https://doi.org/10.48550/arXiv.2504.12337 "it listens better than my therapist": Exploring social media discourse on llms as mental health tool . arXiv preprint arXiv:2504.12337

  21. [21]

    Yu Hou, Hal Daum \'e Iii, and Rachel Rudinger. 2025. https://doi.org/10.18653/v1/2025.naacl-long.611 Language models predict empathy gaps between social in-groups and out-groups . In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers...

  22. [22]

    Adam N Joinson. 2001. Self-disclosure in computer-mediated communication: The role of self-awareness and visual anonymity. European journal of social psychology, 31(2):177--192

  23. [23]

    Lucie-Aim \'e e Kaffee, Giada Pistilli, and Yacine Jernite. 2025. Intima: a benchmark for human-ai companionship behavior. arXiv preprint arXiv:2508.09998

  24. [24]

    Frederic Marimon, Natalia Amat-Lefort, Marta Mas-Machuca, and Anna Akhmedova. 2024. https://doi.org/10.17632/zd56zx44h7.1 GPT-QUAL [dataset and questionnaire]

  25. [25]

    Leland McInnes, John Healy, and Steve Astels. 2017. https://doi.org/10.21105/joss.00205 hdbscan: Hierarchical density based clustering . Journal of Open Source Software, 2(11):205

  26. [26]

    Leland McInnes, John Healy, and James Melville. 2020. https://arxiv.org/abs/1802.03426 Umap: Uniform manifold approximation and projection for dimension reduction . Preprint, arXiv:1802.03426

  27. [27]

    Clifford Nass and Youngme Moon. 2000. Machines and mindlessness: Social responses to computers. Journal of social issues, 56(1):81--103

  28. [28]

    Lara Oblak. 2025. Public mental health stigma and suicide rates across europe. Frontiers in public health, 13:1554072

  29. [29]

    OpenAI . 2025. Introducing GPT-5.2 . https://openai.com/index/introducing-gpt-5-2/. Accessed: 2026-03-04

  30. [30]

    Aditya Pandya, Param Lodha, and Amit Ganatra. 2024. https://doi.org/10.3389/fhumd.2023.1289255 Is ChatGPT ready to change mental healthcare? Challenges and considerations: a reality-check . Frontiers in Human Dynamics, 5:1289255

  31. [31]

    Hashai Papneja and Nikhil Yadav. 2025. Self-disclosure to conversational ai: a literature review, emergent framework, and directions for future research. Personal and ubiquitous computing, 29(2):119--151

  32. [32]

    Sarah Perez. 2025. https://techcrunch.com/2025/07/25/sam-altman-warns-theres-no-legal-confidentiality-when-using-chatgpt-as-a-therapist/ Sam altman warns there’s no legal confidentiality when using chatgpt as a therapist

  33. [33]

    Cercas Curry, Amanda Cercas Curry, and Dirk Hovy

    Flor Miriam Plaza-del Arco, Alba A. Cercas Curry, Amanda Cercas Curry, and Dirk Hovy. 2024 a . https://aclanthology.org/2024.lrec-main.506/ Emotion analysis in NLP : Trends, gaps and roadmap for future directions . In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), ...

  34. [34]

    Flor Miriam Plaza-del Arco, Amanda Cercas Curry, Alba Curry, Gavin Abercrombie, and Dirk Hovy. 2024 b . https://doi.org/10.18653/v1/2024.acl-long.415 Angry men, sad women: Large language models reflect gendered stereotypes in emotion attribution . In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa...

  35. [35]

    Flor Miriam Plaza-del Arco, Amanda Cercas Curry, Susanna Paoli, Alba Cercas Curry, and Dirk Hovy. 2024 c . https://doi.org/10.18653/v1/2024.findings-emnlp.251 Divine LL a MA s: Bias, stereotypes, stigmatization, and emotion representation of religion in large language models . In Findings of the Association for Computational Linguistics: EMNLP 2024, pages...

  36. [36]

    Nils Reimers and Iryna Gurevych. 2019. https://doi.org/10.18653/v1/D19-1410 Sentence- BERT : Sentence embeddings using S iamese BERT -networks . In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982--3992, Hong Kong, Chi...

  37. [37]

    Beatrice Savoldi, Giuseppe Attanasio, Olga Gorodetskaya, Marta Marchiori Manerba, Elisa Bassignana, Silvia Casola, Matteo Negri, Tommaso Caselli, Luisa Bentivogli, Alan Ramponi, and 1 others. 2025. Generative ai practices, literacy, and divides: An empirical analysis in the italian context. arXiv preprint arXiv:2512.03671

  38. [38]

    Matthias Schmidmaier, Jan Rupp, Darina Cvetanova, and Sven Mayer. 2024. https://doi.org/10.1145/3613904.3642035 Perceived empathy of technology scale ( PETS ): Measuring empathy of systems toward the user . In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pages 1--18

  39. [39]

    Judy L Todd and Ariella Shapira. 1974. Us and british self-disclosure, anxiety, empathy, and attitudes to psychotherapy. Journal of Cross-Cultural Psychology, 5(3):364--369

  40. [40]

    Alec Tyson, Giancarlo Pasquini, Alison Spencer, and Cary Funk. 2023. https://www.pewresearch.org/science/2023/02/22/60-of-americans-would-be-uncomfortable-with-provider-relying-on-ai-in-their-own-health-care/ 60\ Technical report, Pew Research Center

  41. [41]

    Andrea Varaona, Rosa M Molina-Ruiz, Luis Guti \'e rrez-Rojas, Maria Perez-P \'a ramo, Guillermo Lahera, Carolina Donat-Vargas, and Miguel Angel Alvarez-Mon. 2024. Snapshot of knowledge and stigma toward mental health disorders and treatment in spain. Frontiers in Psychology, 15:1372955

  42. [42]

    Shenghan Wu, Yimo Zhu, Wynne Hsu, Mong-Li Lee, and Yang Deng. 2025. https://doi.org/10.18653/v1/2025.emnlp-main.277 From personas to talks: Revisiting the impact of personas on LLM -synthesized emotional support conversations . In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 5439--5453, Suzhou, China. Assoc...

  43. [43]

    online" 'onlinestring :=

    ENTRY address archivePrefix author booktitle chapter edition editor eid eprint eprinttype howpublished institution journal key month note number organization pages publisher school series title type volume year doi pubmed url lastchecked label extra.label sort.label short.list INTEGERS output.state before.all mid.sentence after.sentence after.block STRING...

  44. [44]

    write newline

    " write newline "" before.all 'output.state := FUNCTION n.dashify 't := "" t empty not t #1 #1 substring "-" = t #1 #2 substring "--" = not "--" * t #2 global.max substring 't := t #1 #1 substring "-" = "-" * t #2 global.max substring 't := while if t #1 #1 substring * t #2 global.max substring 't := if while FUNCTION word.in bbl.in capitalize " " * FUNCT...