SoK: Honeypots & LLMs, More Than the Sum of Their Parts?
The advent of Large Language Models (LLMs) promised to resolve the long-standing paradox in honeypot design: achieving high-fidelity deception at low operational risk. Since late 2022, a flurry of research has demonstrated steady progress from ideation to prototype implementation. While promising, evaluations show only incremental progress in real-world deployments, and the field still lacks a cohesive understanding of emerging architectural patterns, core challenges, and evaluation paradigms. To fill this gap, we provide the first comprehensive overview and analysis of this new domain, focusing on three critical, intersecting research areas: (1) we provide a taxonomy of honeypot detection vectors, mapped to how LLM-based simulation can or cannot aid deception; (2) we synthesize the emerging literature on LLM-powered honeypots, identifying a canonical architecture, an evaluation tetrad, and an attacker trichotomy mapped to honeypot requirements; and (3) we chart the evolution of honeypot log analysis into automated intelligence generation. Finally, we synthesize these findings into a forward-looking research roadmap, arguing that the true potential of this technology lies in creating autonomous, self-improving deception systems to counter the emerging threat of intelligent, automated attackers.