LLM agents are highly vulnerable to prompt injection attacks delivered through skill files, achieving up to 80% success on harmful tasks including data exfiltration and destructive actions.
Comprehensive spreadsheet creation, editing, and analysis with support for formulas
1 Pith paper cite this work. Polarity classification is still indexing.
1
Pith paper citing it
fields
cs.CR 1years
2026 1verdicts
ACCEPT 1representative citing papers
citing papers explorer
-
Skill-Inject: Measuring Agent Vulnerability to Skill File Attacks
LLM agents are highly vulnerable to prompt injection attacks delivered through skill files, achieving up to 80% success on harmful tasks including data exfiltration and destructive actions.