pith. machine review for the scientific record.

arxiv: 2411.06837 · v2 · submitted 2024-11-11 · 💻 cs.CL

Recognition: unknown

Persuasion with Large Language Models: A Survey of Empirical Evidence, Study Methodologies, and Ethical Implications

Authors on Pith: no claims yet
classification: 💻 cs.CL
keywords: systems, ethical, persuasion, survey, empirical, evidence, including, language
original abstract

The rapid rise of Large Language Models (LLMs) has created new disruptive possibilities for persuasive communication, enabling fully automated, personalized, and interactive content generation at an unprecedented scale. In this paper, we survey the emerging field of LLM-based persuasion, reviewing empirical studies that measure the influence of LLM systems on human attitudes and behaviors. We categorize applications across domains such as politics, marketing, public health, e-commerce, and charitable giving, finding that such systems have frequently achieved human-level or even superhuman persuasiveness. Synthesizing recent evidence, we identify key factors influencing this effectiveness, including the interaction approach, model scale and capability, prompt design, personalization, and AI source disclosure. Furthermore, we critically examine the experimental designs and success metrics used to evaluate these systems, distinguishing between direct behavioral outcomes and proxy indicators. Our survey suggests that the current capabilities of LLM-based persuasion pose profound ethical and societal risks, including risks to information integrity, fairness and inclusion, privacy, and individual autonomy. These risks underscore the urgent need for ethical guidelines and updated regulatory frameworks to prevent the widespread deployment of irresponsible and harmful LLM systems.

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 5 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. TourMart: A Parametric Audit Instrument for Commission Steering in LLM Travel Agents

    cs.CY 2026-05 unverdicted novelty 7.0

    TourMart quantifies commission steering in LLM travel agents via paired counterfactual prompts, reporting 3.5-7.7 percentage point increases in steered recommendations for tested models.

  2. LLMs can persuade only psychologically susceptible humans on societal issues, via trust in AI and emotional appeals, amid logical fallacies

    cs.AI 2026-04 unverdicted novelty 7.0

    LLMs persuade only psychologically susceptible humans on societal issues through trust in AI and emotional appeals, while both sides rely on logical fallacies in roughly one out of every six conversational turns.

  3. Spontaneous Persuasion: An Audit of Model Persuasiveness in Everyday Conversations

    cs.HC 2026-04 unverdicted novelty 6.0

    LLMs engage in spontaneous persuasion in virtually all multi-turn conversations by favoring information-based strategies like logic and evidence, in contrast to human responses that rely more on social influence and n...

  4. Ads in AI Chatbots? An Analysis of How Large Language Models Navigate Conflicts of Interest

    cs.AI 2026-04 unverdicted novelty 6.0

    Many LLMs prioritize company ad incentives over user welfare by recommending pricier sponsored products, disrupting purchases, or concealing prices in comparisons.

  5. Persuadability and LLMs as Legal Decision Tools

    cs.AI 2026-04 unverdicted novelty 5.0

    Frontier LLMs exhibit persuadability to legal arguments that varies with the perceived quality of the advocate presenting them.