Authors show prompt injection attacks that jailbreak LLM paper reviewers for biased acceptance and propose embedding triggers to detect when reviews are LLM-generated rather than human.
Scientists hide messages in papers to game ai peer review,
1 Pith paper cite this work. Polarity classification is still indexing.
1
Pith paper citing it
fields
cs.CR 1years
2025 1verdicts
UNVERDICTED 1representative citing papers
citing papers explorer
-
ChatGPT: Excellent Paper! Accept It. Editor: Imposter Found! Review Rejected
Authors show prompt injection attacks that jailbreak LLM paper reviewers for biased acceptance and propose embedding triggers to detect when reviews are LLM-generated rather than human.