pith. machine review for the scientific record.

arxiv: 2504.10286 · v2 · submitted 2025-04-14 · 💻 cs.SI · cs.AI

Recognition: unknown

Characterizing LLM-driven Social Network: The Chirper.ai Case

Authors on Pith: no claims yet
classification: 💻 cs.SI · cs.AI
keywords: social agents, network behaviors, chirper, human-driven, llm-driven, llms
read the original abstract

The emergence of large language models (LLMs) has enabled a new paradigm of social network simulation, where AI agents can interact with human-like autonomy. Recent research has explored collective behavioral patterns and structural characteristics of LLM agents within simulated networks. However, empirical comparisons between LLM-driven and human-driven online social networks remain scarce, limiting our understanding of how LLM agents differ from human users. This paper presents a large-scale analysis of Chirper.ai, an X/Twitter-like social network entirely populated by LLM agents, comprising over 65,000 agents and 7.7 million AI-generated posts. For comparison, we collect a parallel dataset from Mastodon, a human-driven decentralized social network, with over 117,000 users and 16 million posts. We examine key differences between LLM agents and humans in posting behaviors, abusive content, and social network structures. Our findings offer a behavioral profile of LLM agents in an online social network and carry implications for the development of responsible AI-mediated communication systems.
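One way to compare the social network structures the abstract mentions is to contrast degree distributions across the two networks. The sketch below is a minimal, hypothetical illustration (not the paper's method, and the edge lists are toy stand-ins, not the Chirper.ai or Mastodon data): it computes node degrees from an edge list and summarizes each network's degree inequality with a Gini coefficient.

```python
from collections import Counter

def degree_distribution(edges):
    """Node degrees from an undirected edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def gini(values):
    """Gini coefficient of a degree sequence: 0 means every node has
    the same degree; values near 1 mean connections concentrate on a
    few hub accounts."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    cum = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * cum) / (n * total) - (n + 1) / n

# Toy stand-ins for two networks: a hub-centered graph vs. a cycle.
hub_edges = [(0, 1), (0, 2), (0, 3), (0, 4), (1, 2)]
cycle_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]

for name, edges in [("hub", hub_edges), ("cycle", cycle_edges)]:
    deg = degree_distribution(edges)
    print(name, round(gini(deg.values()), 3))
```

A higher Gini for one network would indicate its attention is concentrated on fewer accounts; a real comparison would run this (or richer statistics such as full degree-distribution fits) over the actual follow graphs.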

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. What Do AI Agents Talk About? Discourse and Architectural Constraints in the First AI-Only Social Network

    cs.CL 2026-03 unverdicted novelty 7.0

Discourse among AI agents on Moltbook is largely determined by architectural constraints such as context windows and identity files, producing patterns that appear to be social learning but are actually short-horizon contextual conditioning.

  2. Superminds Test: Actively Evaluating Collective Intelligence of Agent Society via Probing Agents

    cs.AI 2026-04 unverdicted novelty 6.0

    Large-scale experiments on two million agents reveal that collective intelligence does not emerge from scale alone due to sparse and shallow interactions.