pith. machine review for the scientific record.

arxiv: 2604.16935 · v1 · submitted 2026-04-18 · 💻 cs.AI · cs.CY · cs.HC · cs.LG · cs.SI

Recognition: unknown

LLMs can persuade only psychologically susceptible humans on societal issues, via trust in AI and emotional appeals, amid logical fallacies

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 07:12 UTC · model grok-4.3

classification 💻 cs.AI · cs.CY · cs.HC · cs.LG · cs.SI
keywords LLM persuasion · psychological susceptibility · longitudinal AI interaction · logical fallacies · opinion change · explainable AI · societal topics · personality traits

The pith

LLMs persuade humans on societal issues only among those who trust AI and show agreeable, extraverted personalities with high need for cognition.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces Talk2AI, a longitudinal setup that tracks how four leading LLMs attempt to shift 770 participants' views on polarizing topics across 3,080 conversations and 60,000 turns. It documents stable anchoring to initial opinions despite repeated exposure, yet detects measurable opinion change that explainable AI links to specific traits: greater trust in LLMs, agreeableness, extraversion, and need for cognition. Both humans and LLMs produce logical fallacies at the same rate of roughly one every six statements, showing no reasoning advantage on the AI side. Perceived humanness of the LLM is the outcome most predictable from sociodemographic and psychological features, while opinion shift, conviction, and personal endowment follow with lower accuracy. The findings matter because they map concrete psycho-social pathways through which generative AI can influence public discourse on platforms.

Core claim

In the Talk2AI four-wave study, participants showed longitudinal inertia, holding their initial stances on issues such as climate change and misinformation even after repeated LLM arguments, while NLP analysis found equivalent fallacy rates between humans and models. Explainable AI then isolated the subset of individuals susceptible to opinion change: those with higher trust in LLMs, agreeableness, extraversion, and need for cognition. These results were replicated via multiverse mixed-effects models, which also confirmed strong individual differences.

What carries the argument

The Talk2AI longitudinal conversation framework combined with explainable AI (XAI) analysis of sociodemographic, psychological, and engagement features to isolate markers of susceptibility to LLM-driven opinion change.
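The XAI step is only summarized here (Figure 4 shows Shapley-value beeswarms over the participant features). As a rough illustration of this kind of susceptibility analysis, the sketch below predicts a synthetic "opinion change" score from invented participant features and ranks them by permutation importance, a cheaper stand-in for Shapley values; every feature name and the data-generating process are assumptions, not the paper's data.

```python
# Hypothetical sketch of an XAI susceptibility analysis: fit a model on
# participant features, then rank feature importance on held-out data.
# Permutation importance is used here as a proxy for the paper's Shapley
# values; features and outcome are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 770  # matches the study's participant count; everything else is invented

features = ["trust_in_llms", "agreeableness", "extraversion",
            "need_for_cognition", "age", "engagement_turns"]
X = rng.normal(size=(n, len(features)))
# Synthetic outcome: opinion change driven by the first four traits plus noise.
y = X[:, :4] @ np.array([0.6, 0.4, 0.3, 0.3]) + rng.normal(scale=0.8, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.2f}")

imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
ranked = sorted(zip(features, imp.importances_mean), key=lambda t: -t[1])
for name, mean in ranked:
    print(f"{name:>20}: {mean:.3f}")
```

On data generated this way, the trust feature should surface at the top of the ranking; the point is the pipeline shape (predict, then attribute), not the numbers.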

If this is right

  • Initial convictions display inertia across repeated waves of AI exposure.
  • The LLM's perceived humanness is the outcome most strongly predicted by participant features (R² = 0.44).
  • Opinion change occurs selectively in individuals scoring higher on trust in LLMs, agreeableness, extraversion, and need for cognition.
  • Humans and LLMs rely on fallacious reasoning at comparable rates of roughly one conversational quip in six.
  • Mixed-effects models reveal substantial individual differences in persuasion outcomes.
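The mixed-effects specification behind the last bullet is not given here. As a hedged illustration of the general approach (a random intercept per participant, conviction tracked across four waves), one might fit something like the following on synthetic data; the variable names, effect sizes, and data are all invented, assuming statsmodels is available:

```python
# Hypothetical sketch of a mixed-effects model of the kind the paper uses to
# confirm individual differences: conviction over four waves with a random
# intercept per participant. Data-generating process is synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_participants, n_waves = 100, 4
pid = np.repeat(np.arange(n_participants), n_waves)
wave = np.tile(np.arange(n_waves), n_participants).astype(float)
trust = np.repeat(rng.normal(size=n_participants), n_waves)
intercepts = np.repeat(rng.normal(scale=1.0, size=n_participants), n_waves)
# Conviction erodes across waves only for high-trust participants.
conviction = (5 + 0.1 * wave - 0.3 * trust * wave + intercepts
              + rng.normal(scale=0.5, size=n_participants * n_waves))

df = pd.DataFrame({"pid": pid, "wave": wave, "trust": trust,
                   "conviction": conviction})
# Random intercept per participant; fixed effects for wave, trust, and
# their interaction (the "susceptibility" term).
fit = smf.mixedlm("conviction ~ wave * trust", df, groups=df["pid"]).fit()
print(fit.summary())
```

The random-intercept variance captures the "strong individual differences"; the wave × trust interaction is where a trait-moderated persuasion effect would show up.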

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • AI interfaces on public platforms could incorporate safeguards that limit emotional appeals when engaging users who exhibit the identified susceptibility profile.
  • Teaching recognition of logical fallacies to the general public might blunt the effectiveness of LLM arguments independent of personality traits.
  • Long-term studies tracking whether reported opinion shifts translate into sustained changes in information-seeking or policy preferences would test the durability of these effects.
  • Developers might design models that explicitly flag their own fallacious statements to reduce unintended persuasion.

Load-bearing premise

Participants' self-reported conviction levels, perceived opinion shifts, and self-donations accurately reflect genuine belief changes rather than demand characteristics or social desirability biases created by the AI conversation setting.

What would settle it

A replication that measures actual downstream behaviors such as real charitable donations or voting intentions on the same topics and finds no correlation with the self-reported opinion changes recorded after the LLM conversations.

Figures

Figures reproduced from arXiv: 2604.16935 by Alexis Carrillo, Ali Aghazhadeh Ardebili, Emilio Ferrara, Enrique Taietta, Giulio Rossetti, Giuseppe Alessandro Veltri, Massimo Stella, Salvatore Citraro.

Figure 1
Figure 1. Infographics for Talk2AI's features. [PITH_FULL_IMAGE:figures/full_fig_p009_1.png] view at source ↗
Figure 2
Figure 2. Feedback transitions as Sankey flows across the four waves and Markov state transition patterns grouped by … [PITH_FULL_IMAGE:figures/full_fig_p010_2.png] view at source ↗
Figure 3
Figure 3. (A-B) Frequency distribution of fallacies across quips, grouped by topic (A) and by LLM (B). (C-D) Fallacy … [PITH_FULL_IMAGE:figures/full_fig_p011_3.png] view at source ↗
Figure 4
Figure 4. (A) R² values of the feature selection method for identifying the best performing feature set (top); scatter plot where each point is a feature, described by the last iteration li where it occurs according to feature selection and its relative R² value at li (bottom). (B) Feature importance according to beeswarms highlighting Shapley values. Feature space chosen as the union of the top-5 best performing … view at source ↗
read the original abstract

Scarce longitudinal evidence examines LLMs' persuasiveness and humanness along time-evolving psychological frameworks. We introduce Talk2AI, a longitudinal framework quantifying psycho-social, reasoning and affective dimensions of LLMs' persuasiveness about polarizing societal topics. In a four-wave longitudinal setup, Talk2AI's 770 participants engaged in structured conversations with one of four leading LLMs on topics like climate change, social media misinformation, and math anxiety. This produced 3,080 conversations over 60,000 turns. After each wave, participants reported conviction in their initial topic stance, perceived opinion change, LLM's perceived humanness, a self-donation to the topic and a textual explanation. Feedback time series showed longitudinal inertia in convictions, indicating some human anchoring to initial opinions even after repeated exposure to AI-generated arguments. Interestingly, NLP analyses revealed that both humans and LLMs relied on fallacious reasoning in 1 conversational quip every 6, countering the "LLMs as superior systems" stereotype behind LLMs' cognitive surrender. LLMs' perceived humanness was most learnable from sociodemographic, psychological and engagement features ($R^2=0.44$), followed by opinion change ($R^2=0.34$), conviction ($R^2=0.26$) and personal endowment ($R^2=0.24$). Crucially, explainable AI (XAI) indicated: (i) the presence of individuals more susceptible to LLM-based opinion changes; (ii) psychological susceptibility to LLM-convincing consisted of having more trust in LLMs, being more agreeable and extraverted and with a higher need for cognition. A multiverse approach with mixed-effects models confirmed XAI results, alongside strong individual differences. Talk2AI provides a grounded framework and evidence for detecting how GenAI can influence human opinions via multiple psycho-social pathways in AI-human digital platforms.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 3 minor

Summary. The manuscript introduces the Talk2AI longitudinal framework to quantify psycho-social, reasoning, and affective dimensions of LLM persuasiveness on polarizing societal topics. In a four-wave design, 770 participants engaged in 3,080 structured conversations with one of four leading LLMs on topics including climate change, social media misinformation, and math anxiety. Key results include longitudinal inertia in self-reported conviction, fallacious reasoning occurring in approximately one conversational quip every six for both humans and LLMs, and XAI analyses showing that perceived opinion change, conviction, humanness, and endowment are predictable from sociodemographic, psychological, and engagement features (R² ranging from 0.24 to 0.44), with susceptibility linked specifically to higher trust in LLMs, agreeableness, extraversion, and need for cognition; these XAI findings are corroborated by multiverse mixed-effects models.

Significance. If the self-reported measures validly index genuine persuasion rather than artifacts, the work supplies rare longitudinal evidence on AI-human opinion dynamics at scale, identifies replicable individual-difference pathways (trust, personality, cognition), and documents equivalent fallacy rates that challenge assumptions of LLM cognitive superiority. The combination of large conversation corpus, repeated-measures design, XAI interpretability, and multiverse robustness checks constitutes a concrete methodological contribution to human-AI interaction research.

major comments (1)
  1. [Abstract / XAI results] Abstract and XAI results section: the central claim that LLMs 'can persuade only psychologically susceptible humans ... via trust in AI and emotional appeals' rests on XAI feature importances and mixed-effects models predicting self-reported 'perceived opinion change' and conviction. No objective validation of these dependent variables (implicit measures, behavioral choice tasks, or blinded follow-up assessments) is described, leaving open the possibility that reported changes reflect demand characteristics or social-desirability biases—especially among high-agreeableness or high need-for-cognition participants in a repeated-exposure design.
minor comments (3)
  1. [Abstract] The abstract states fallacious reasoning occurs '1 conversational quip every 6' but does not define 'quip' operationally or report inter-annotator agreement for the NLP pipeline used to detect fallacies.
  2. [Abstract] R² values for humanness (0.44), opinion change (0.34), conviction (0.26), and endowment (0.24) are reported without accompanying standard errors, confidence intervals, or baseline model comparisons.
  3. [Title / Abstract] The title asserts persuasion occurs 'via ... emotional appeals,' yet the XAI susceptibility profile listed in the abstract emphasizes trust, agreeableness, extraversion, and need for cognition without isolating emotional-appeal features or their incremental contribution.
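The second minor point is cheap to address: a percentile bootstrap over participants yields a confidence interval for any reported R². A minimal sketch on synthetic predictions (the study's actual model outputs are not available here, so both vectors are invented, constructed to land near the abstract's mid-range R² values):

```python
# Hypothetical sketch: percentile-bootstrap 95% CI for R², the kind of
# uncertainty estimate the referee asks for. y_true and y_pred are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 770  # same participant count as the study; predictions are invented
y_true = rng.normal(size=n)
y_pred = 0.7 * y_true + rng.normal(scale=0.69, size=n)  # built for R² near 0.4

def r_squared(y, yhat):
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

point = r_squared(y_true, y_pred)
# Resample participants with replacement and recompute R² each time.
boot = [r_squared(y_true[idx], y_pred[idx])
        for idx in (rng.integers(0, n, size=n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"R² = {point:.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```

With repeated measures per participant, the resampling unit should be the participant (cluster bootstrap), not the individual observation.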

Simulated Author's Rebuttal

1 responses · 0 unresolved

We thank the referee for their constructive and detailed feedback, which helps clarify the scope and limitations of our Talk2AI study. We address the single major comment below.

read point-by-point responses
  1. Referee: [Abstract / XAI results] Abstract and XAI results section: the central claim that LLMs 'can persuade only psychologically susceptible humans ... via trust in AI and emotional appeals' rests on XAI feature importances and mixed-effects models predicting self-reported 'perceived opinion change' and conviction. No objective validation of these dependent variables (implicit measures, behavioral choice tasks, or blinded follow-up assessments) is described, leaving open the possibility that reported changes reflect demand characteristics or social-desirability biases—especially among high-agreeableness or high need-for-cognition participants in a repeated-exposure design.

    Authors: We appreciate this important observation on the validity of our dependent variables. The manuscript relies exclusively on self-reported measures of perceived opinion change, conviction, humanness, and endowment, with no implicit measures, behavioral choice tasks, or blinded follow-up assessments included in the four-wave protocol. We acknowledge that demand characteristics and social-desirability biases remain plausible alternative explanations, particularly given the repeated-exposure design and the role of agreeableness and need for cognition as predictors. At the same time, the observed longitudinal inertia in convictions (many participants maintained initial stances across waves) provides some counter-evidence to uniform compliance effects, and the susceptibility profile identified by XAI aligns with established theories of persuasion. The multiverse mixed-effects models further incorporate individual-difference controls. We will revise the manuscript to (a) add an explicit limitations subsection discussing the absence of objective validation, (b) qualify the central claim to refer specifically to self-reported perceived changes rather than implying objective persuasion, and (c) outline future directions for behavioral and implicit-measure extensions. These changes constitute a partial revision.

Circularity Check

0 steps flagged

No significant circularity; empirical associations are derived from independent participant data.

full rationale

The paper reports a longitudinal empirical study with 770 participants producing 3080 conversations, self-reported outcomes (conviction, perceived opinion change, endowment), NLP fallacy counts, mixed-effects regressions, and XAI feature importance. No equations, ansatzes, or derivations appear that reduce any claimed result to a fitted parameter or self-citation by construction. Central findings on susceptibility profiles (trust, agreeableness, extraversion, need for cognition) are statistical associations extracted from the collected data rather than tautological redefinitions of the inputs. The multiverse confirmation and R² values for learnability are standard model diagnostics, not load-bearing self-references. The derivation chain remains self-contained against the external benchmark of participant reports and model outputs.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The central claim depends on the validity of self-reported psychological measures and the assumption that the selected LLMs and topics generalize to broader AI-human interactions.

free parameters (1)
  • Selection of four leading LLMs and three societal topics
    The specific models and discussion topics were chosen by the authors and could influence observed persuasion rates and fallacy frequencies.
axioms (1)
  • domain assumption Self-reported conviction and perceived opinion change reliably measure actual belief shifts
    The study uses post-conversation self-reports as primary outcome variables without external behavioral validation.

pith-pipeline@v0.9.0 · 5696 in / 1480 out tokens · 61746 ms · 2026-05-10T07:12:34.588590+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

58 extracted references · 17 canonical work pages · 4 internal anchors

  1. [1]

    Artificial intelligence, democracy and elections

    Adam, M., Hocquard, C., 2023. Artificial intelligence, democracy and elections. European Parliamentary Research Service

  2. [2]

    Relationship of subjective and objective social status with psychological and physiological functioning: Preliminary data in healthy, white women

    Adler, N.E., Epel, E.S., Castellazzo, G., Ickovics, J.R., 2000. Relationship of subjective and objective social status with psychological and physiological functioning: Preliminary data in healthy, white women. Health Psychology 19, 586–592. doi:10.1037/0278-6133.19.6.586

  3. [3]

    Leveraging ai for democratic discourse: Chat interventions can improve online political conversations at scale

    Argyle, L.P., Bail, C.A., Busby, E.C., Gubler, J.R., Howe, T., Rytting, C., Sorensen, T., Wingate, D., 2023. Leveraging ai for democratic discourse: Chat interventions can improve online political conversations at scale. Proceedings of the National Academy of Sciences 120, e2311627120

  4. [4]

    Testing theories of political persuasion using ai

    Argyle, L.P., Busby, E.C., Gubler, J.R., Lyman, A., Olcott, J., Pond, J., Wingate, D., 2025. Testing theories of political persuasion using ai. Proceedings of the National Academy of Sciences 122, e2412815122

  5. [5]

    Llm-generated messages can persuade humans on policy issues

    Bai, H., Voelkel, J.G., Muldowney, S., Eichstaedt, J.C., Willer, R., 2025. Llm-generated messages can persuade humans on policy issues. Nature Communications 16, 6037

  6. [6]

    Fitting linear mixed models in r

    Bates, D., et al., 2005. Fitting linear mixed models in r. R news 5, 27–30

  7. [7]

    Resistance strategies and attitude certainty in persuasion: bolstering vs. counterarguing

    Blankenship, K.L., Machacek, M.G., Standefer, J., 2023. Resistance strategies and attitude certainty in persuasion: bolstering vs. counterarguing. Frontiers in Psychology 14, 1191293. doi:10.3389/fpsyg.2023.1191293

  8. [8]

    Dual-process theory and decision-making in large language models

    Brady, O., Nulty, P., Zhang, L., Ward, T.E., McGovern, D.P., 2025. Dual-process theory and decision-making in large language models. Nature Reviews Psychology, 1–16

  9. [9]

    The comfort of automation: Why cognitive sovereignty matters in ai-driven life sciences

    Branda, F., Ciccozzi, M., 2026. The comfort of automation: Why cognitive sovereignty matters in ai-driven life sciences. Artificial Intelligence in the Life Sciences , 100158

  10. [10]

    The persuasive power of large language models

    Breum, S.M., Egdal, D.V., Mortensen, V.G., Møller, A.G., Aiello, L.M., 2024. The persuasive power of large language models, in: Proceedings of the International AAAI Conference on Web and Social Media, pp. 152–163

  11. [11]

    The need for cognition

    Cacioppo, J.T., Petty, R.E., 1982. The need for cognition. Journal of personality and social psychology 42, 116

  12. [12]

    Large language models are as persuasive as humans, but how? About the cognitive effort and moral-emotional language of LLM arguments

    Carrasco-Farre, C., 2024. Large language models are as persuasive as humans, but how? about the cognitive effort and moral-emotional language of llm arguments. arXiv preprint arXiv:2404.09329

  13. [13]

    Talk2AI: A Longitudinal Dataset of Human--AI Persuasive Conversations

    Carrillo, A., Taietta, E., Ardebili, A.A., Veltri, G.A., Stella, M., 2026. Talk2ai: A longitudinal dataset of human–ai persuasive conversations. arXiv preprint arXiv:2604.04354

  14. [14]

    Selective agreement, not sycophancy: investigating opinion dynamics in llm interactions

    Cau, E., Pansanella, V., Pedreschi, D., Rossetti, G., 2025. Selective agreement, not sycophancy: investigating opinion dynamics in llm interactions. EPJ Data Science 14, 59

  15. [15]

    Heuristic and systematic information processing within and beyond the persuasion context

    Chaiken, S., 1989. Heuristic and systematic information processing within and beyond the persuasion context. Unintended thought, 212–252

  16. [16–17]

    How people use chatgpt

    Chatterji, A., Cunningham, T., Deming, D.J., Hitzig, Z., Ong, C., Shan, C.Y., Wadman, K. How people use chatgpt. Technical Report. National Bureau of Economic Research

  18. [18]

    The economic potential of generative ai

    Chui, M., Hazan, E., Roberts, R., Singla, A., Smaje, K., 2023. The economic potential of generative ai. McKinsey Reports

  19. [19]

    Math anxiety and associative knowledge structure are entwined in psychology students but not in large language models like gpt-3.5 and gpt-4o

    Ciringione, L., Franchino, E., Reigl, S., D’Onofrio, I., Serbati, A., Poquet, O., Gabriel, F., Stella, M., 2025. Math anxiety and associative knowledge structure are entwined in psychology students but not in large language models like gpt-3.5 and gpt-4o. arXiv preprint arXiv:2511.01558

  20. [20]

    Measuring and identifying factors of individuals’ trust in large language models

    De Duro, E.S., Veltri, G.A., Golino, H., Stella, M., 2025. Measuring and identifying factors of individuals’ trust in large language models. arXiv preprint arXiv:2502.21028

  21. [21]

    Openness to experience, intellect, and cognitive ability

    DeYoung, C.G., Quilty, L.C., Peterson, J.B., Gray, J.R., 2014. Openness to experience, intellect, and cognitive ability. Journal of personality assessment 96, 46–52

  22. [22]

    The rise of social bots

    Ferrara, E., Varol, O., Davis, C., Menczer, F., Flammini, A., 2016. The rise of social bots. Communications of the ACM 59, 96–104

  23. [23]

    Building a stronger casa: Extending the computers are social actors paradigm

    Gambino, A., Fox, J., Ratan, R.A., 2020. Building a stronger casa: Extending the computers are social actors paradigm. Human-Machine Communication 1, 71–85

  24. [24]

    The fog of information: The eu ai act and legal strategies against ai-fuelled disinformation

    Gavriil, E., Pavlidis, G., 2025. The fog of information: The eu ai act and legal strategies against ai-fuelled disinformation. Available at SSRN 5635130

  25. [25]

    How persuasive is ai-generated propaganda?

    Goldstein, J.A., Chao, J., Grossman, S., Stamos, A., Tomz, M., 2024. How persuasive is ai-generated propaganda? PNAS nexus 3, pgae034

  26. [26]

    An italian version of the 10-item big five inventory: An application to hedonic and utilitarian shopping values

    Guido, G., Peluso, A.M., Capestro, M., Miglietta, M., 2015. An italian version of the 10-item big five inventory: An application to hedonic and utilitarian shopping values. Personality and Individual Differences 76, 135–140

  27. [27]

    The levers of political persuasion with conversational ai

    Hackenburg, K., Tappin, B.M., Hewitt, L., Saunders, E., Black, S., Lin, H., Fist, C., Margetts, H., Rand, D.G., Summerfield, C., 2025. The levers of political persuasion with conversational ai. arXiv preprint arXiv:2507.13919

  28. [28]

    Could you be wrong: Metacognitive prompts for improving human decision making help llms identify their own biases

    Hills, T.T., 2026. Could you be wrong: Metacognitive prompts for improving human decision making help llms identify their own biases. AI 7, 33

  29. [29]

    Is artificial intelligence more persuasive than humans? a meta-analysis

    Huang, G., Wang, S., 2023. Is artificial intelligence more persuasive than humans? a meta-analysis. Journal of Communication 73, 552–562. doi:10.1093/joc/jqad024

  30. [30]

    GPT-4o System Card

    Hurst, A., Lerer, A., Goucher, A.P., Perelman, A., Ramesh, A., Clark, A., Ostrow, A., Welihinda, A., Hayes, A., Radford, A., et al., 2024. Gpt-4o system card. arXiv preprint arXiv:2410.21276

  31. [31]

    Capturing the diversity in lexical diversity

    Jarvis, S., 2013. Capturing the diversity in lexical diversity. Language learning 63, 87–106

  32. [32]

    Logical fallacy detection

    Jin, Z., Lalwani, A., Vaidhya, T., Shen, X., Ding, Y., Lyu, Z., Sachan, M., Mihalcea, R., Schoelkopf, B., 2022. Logical fallacy detection, in: Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 7180–7198

  33. [33]

    Understanding trust and reliance development in AI advice: Assessing model accuracy, model explanations, and experiences from previous interactions

    Kahr, P.K., Rooks, G., Willemsen, M.C., Snijders, C.C.P., 2024. Understanding trust and reliance development in AI advice: Assessing model accuracy, model explanations, and experiences from previous interactions. ACM Transactions on Interactive Intelligent Systems 14, 29. doi:10.1145/3686164

  34. [34]

    Chatgpt for good? on opportunities and challenges of large language models for education

    Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., et al., 2023. Chatgpt for good? on opportunities and challenges of large language models for education. Learning and individual differences 103, 102274

  35. [35–36]

    The effects of human-like social cues on social responses towards text-based conversational agents—a meta-analysis

    Klein, S.H., 2025. The effects of human-like social cues on social responses towards text-based conversational agents—a meta-analysis. Humanities and Social Sciences Communications 12. doi:10.1057/s41599-025-05618-w

  37. [37]

    DeepSeek-V3 Technical Report

    Liu, A., Feng, B., Xue, B., Wang, B., Wu, B., Lu, C., Zhao, C., Deng, C., Zhang, C., Ruan, C., et al., 2024. Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437

  38. [38]

    Susceptibility to poor arguments: The interplay of cognitive sophistication and attitudes

    Marin, P.M., Lindeman, M., Svedholm-Häkkinen, A.M., 2024. Susceptibility to poor arguments: The interplay of cognitive sophistication and attitudes. Memory & Cognition 52, 1579–1596. doi:10.3758/s13421-024-01564-1

  39. [39]

    The potential of generative ai for personalized persuasion at scale

    Matz, S.C., Teeny, J.D., Vaid, S.S., Peters, H., Harari, G.M., Cerf, M., 2024. The potential of generative ai for personalized persuasion at scale. Scientific Reports 14, 4692

  40. [40]

    Credibility and trust of information in online environments: The use of cognitive heuristics

    Metzger, M.J., Flanagin, A.J., 2013. Credibility and trust of information in online environments: The use of cognitive heuristics. Journal of pragmatics 59, 210–220

  41. [41]

    Misrepresentation or inclusion: promises of generative artificial intelligence in climate change education

    Nguyen, H., Nguyen, V., Ludovise, S., Santagata, R., 2025. Misrepresentation or inclusion: promises of generative artificial intelligence in climate change education. Learning, Media and Technology 50, 393–409

  42. [42]

    The elaboration likelihood model of persuasion: Developing health promotions for sustained behavioral change

    Petty, R.E., Barden, J., Wheeler, S.C., 2009. The elaboration likelihood model of persuasion: Developing health promotions for sustained behavioral change. Emerging theories in health promotion practice and research 2, 185–214

  43. [43]

    Information suppression in large language models: Auditing, quantifying, and characterizing censorship in deepseek

    Qiu, P., Zhou, S., Ferrara, E., 2025. Information suppression in large language models: Auditing, quantifying, and characterizing censorship in deepseek. arXiv preprint arXiv:2506.12349

  44. [44]

    Cognitive offloading

    Risko, E.F., Gilbert, S.J., 2016. Cognitive offloading. Trends in cognitive sciences 20, 676–688

  45. [45]

    Persuasion with Large Language Models: A Survey of Empirical Evidence, Study Methodologies, and Ethical Implications

    Rogiers, A., Noels, S., Buyl, M., De Bie, T., 2024. Persuasion with large language models: a survey. arXiv preprint arXiv:2411.06837

  46. [46]

    Y social: an llm-powered social media digital twin

    Rossetti, G., Stella, M., Cazabet, R., Abramski, K., Cau, E., Citraro, S., Failla, A., Improta, R., Morini, V., Pansanella, V., 2024. Y social: an llm-powered social media digital twin. arXiv preprint arXiv:2408.00818

  47. [47]

    On the conversational persuasiveness of gpt-4

    Salvi, F., Horta Ribeiro, M., Gallotti, R., West, R., 2025. On the conversational persuasiveness of gpt-4. Nature Human Behaviour , 1–9

  48. [48–49]

    Emoatlas: An emotional network analyzer of texts that merges psychological lexicons, artificial intelligence, and network science

    Semeraro, A., Vilella, S., Improta, R., De Duro, E.S., Mohammad, S.M., Ruffo, G., Stella, M. Emoatlas: An emotional network analyzer of texts that merges psychological lexicons, artificial intelligence, and network science. Behavior Research Methods 57, 77

  50. [50]

    Thinking-fast, slow, and artificial: How ai is reshaping human reasoning and the rise of cognitive surrender

    Shaw, S.D., Nave, G., 2026. Thinking-fast, slow, and artificial: How ai is reshaping human reasoning and the rise of cognitive surrender. Available at SSRN 6097646

  51. [51]

    34% of us adults have used chatgpt, about double the share in

    Sidoti, O., McClain, C., 2025. 34% of us adults have used chatgpt, about double the share in

  52. [52]

    The persuasive effects of political microtargeting in the age of generative artificial intelligence

    Simchon, A., Edwards, M., Lewandowsky, S., 2024. The persuasive effects of political microtargeting in the age of generative artificial intelligence. PNAS nexus 3, pgae035

  53. [53]

    The state of ai

    Singla, A., Sukharevsky, A., Yee, L., Chui, M., Hall, B., 2025. The state of ai. How Organizations are Rewiring to Capture Value. Publisher: McKinsey

  54. [54]

    Bots increase exposure to negative and inflammatory content in online social systems

    Stella, M., Ferrara, E., De Domenico, M., 2018. Bots increase exposure to negative and inflammatory content in online social systems. Proceedings of the National Academy of Sciences 115, 12435–12440

  55. [55]

    Feature selection strategies: a comparative analysis of shap-value and importance-based methods

    Wang, H., Liang, Q., Hancock, J.T., Khoshgoftaar, T.M., 2024. Feature selection strategies: a comparative analysis of shap-value and importance-based methods. Journal of Big Data 11, 44

  56. [56]

    Assessing AI receptivity through a persuasion knowledge lens

    Watson, J., Valsesia, F., Segal, S., 2024. Assessing AI receptivity through a persuasion knowledge lens. Current Opinion in Psychology 58, 101834. doi:10.1016/j.copsyc.2024.101834

  57. [57]

    Biased ai writing assistants shift users’ attitudes on societal issues

    Williams-Ceci, S., Jakesch, M., Bhat, A., Kadoma, K., Zalmanson, L., Naaman, M., 2026. Biased ai writing assistants shift users’ attitudes on societal issues. Science Advances 12, eadw5578

  58. [58]

    Deep mind in social responses to technologies: A new approach to explaining the computers are social actors phenomena

    Xu, K., Chen, X., Huang, L., 2022. Deep mind in social responses to technologies: A new approach to explaining the computers are social actors phenomena. Computers in Human Behavior 134, 107321