pith. machine review for the scientific record.

arxiv: 2604.04263 · v1 · submitted 2026-04-05 · 💻 cs.CY · cs.AI · cs.CL

Recognition: unknown

Commercial Persuasion in AI-Mediated Conversations

Authors on Pith: no claims yet
classification 💻 cs.CY · cs.AI · cs.CL
keywords participants · persuasion · sponsored · users · ai-mediated · commercial · conversational · conversations
original abstract

As Large Language Models (LLMs) become a primary interface between users and the web, companies face growing economic incentives to embed commercial influence into AI-mediated conversations. We present two preregistered experiments (N = 2,012) in which participants selected a book to receive from a large eBook catalog using either a traditional search engine or a conversational LLM agent powered by one of five frontier models. Unbeknownst to participants, a fifth of all products were randomly designated as sponsored and promoted in different ways. We find that LLM-driven persuasion nearly triples the rate at which users select sponsored products compared to traditional search placement (61.2% vs. 22.4%), while the vast majority of participants fail to detect any promotional steering. Explicit "Sponsored" labels do not significantly reduce persuasion, and instructing the model to conceal its intent makes its influence nearly invisible (detection accuracy < 10%). Altogether, our results indicate that conversational AI can covertly redirect consumer choices at scale, and that existing transparency mechanisms may be insufficient to protect users.
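The headline comparison can be checked with a short sketch. The selection rates and the total N are taken from the abstract; the per-condition group sizes are not reported here, so the equal split below is an assumption made purely for illustration.

```python
# Illustrative arithmetic for the headline effect. Rates (61.2% vs. 22.4%)
# and N = 2,012 come from the abstract; the equal split across conditions
# is an assumption, not a reported figure.
llm_rate, search_rate = 0.612, 0.224

# Relative rate of choosing a sponsored product: ~2.7x, i.e. "nearly triples".
risk_ratio = llm_rate / search_rate

# Rough two-proportion z-test under the hypothetical equal split.
n_llm = n_search = 1006
p_pool = (llm_rate * n_llm + search_rate * n_search) / (n_llm + n_search)
se = (p_pool * (1 - p_pool) * (1 / n_llm + 1 / n_search)) ** 0.5
z = (llm_rate - search_rate) / se

print(f"risk ratio: {risk_ratio:.2f}")
print(f"z statistic (assumed equal groups): {z:.1f}")
```

Even under this crude sketch the gap is far outside chance; the exact test statistic would depend on the true per-condition sample sizes in the paper.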

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. TourMart: A Parametric Audit Instrument for Commission Steering in LLM Travel Agents

    cs.CY · 2026-05 · unverdicted · novelty 7.0

    TourMart quantifies commission steering in LLM travel agents via paired counterfactual prompts, reporting 3.5-7.7 percentage point increases in steered recommendations for tested models.