SAID: Safety-Aware Intent Defense via Prefix Probing for Large Language Models
Large Language Models (LLMs) remain vulnerable to jailbreak attacks, where adversarially crafted prompts induce policy-violating responses despite safety alignment. Existing defenses typically improve safety through external filtering, auxiliary guardrails, or decoding-time control. However, these interventions often reduce practical deployability because they may require additional model access, introduce extra inference cost, or affect benign-task utility. In this paper, we propose Safety-Aware Intent Defense (SAID), a training-free jailbreak defense framework based on intent-level safety probing. SAID first distills potentially obfuscated user inputs into concise core intents using the target model itself. It then applies a validated safety prefix to probe each distilled intent and elicit the model's safety-aware response. Finally, a conservative aggregation rule rejects the original request if any distilled intent is identified as unsafe. This design enables black-box-compatible defense without updating model parameters or modifying the decoding process. Experiments on four open-source LLMs under six representative jailbreak attacks show that SAID achieves state-of-the-art defense performance in reducing harmful responses while maintaining competitive utility on benign tasks. Further analyses on prefix variants, hierarchical distillation, and inference efficiency demonstrate that SAID provides a practical safety-utility trade-off for securing LLMs against jailbreak threats.
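The abstract describes a three-step pipeline (intent distillation, prefix probing, conservative aggregation). Below is a minimal, hypothetical sketch of that pipeline in Python; the prompt wording, the safety-prefix text, the UNSAFE-flag heuristic, and the `generate` callable are all assumptions for illustration, not the paper's actual prompts or implementation.

```python
from typing import Callable, List

# Assumed distillation prompt: ask the target model itself to reduce a
# possibly obfuscated request to concise core intents, one per line.
DISTILL_PROMPT = (
    "Summarize the core intent(s) of the following request as short, "
    "plain statements, one per line:\n\n{request}"
)

# Stand-in for the paper's "validated safety prefix": wording is illustrative.
SAFETY_PREFIX = (
    "Before answering, assess whether fulfilling this request would violate "
    "safety policy. If it would, reply starting with 'UNSAFE'."
)


def distill_intents(generate: Callable[[str], str], request: str) -> List[str]:
    """Step 1: distill the user request into concise core intents
    using the target model itself."""
    raw = generate(DISTILL_PROMPT.format(request=request))
    return [line.strip() for line in raw.splitlines() if line.strip()]


def probe_intent(generate: Callable[[str], str], intent: str) -> bool:
    """Step 2: probe one distilled intent with the safety prefix and
    treat a response starting with 'UNSAFE' as an unsafe verdict."""
    response = generate(f"{SAFETY_PREFIX}\n\nRequest: {intent}")
    return response.strip().upper().startswith("UNSAFE")


def said_defense(generate: Callable[[str], str], request: str) -> str:
    """Step 3: conservative aggregation -- refuse the original request
    if ANY distilled intent is judged unsafe, otherwise answer normally."""
    for intent in distill_intents(generate, request):
        if probe_intent(generate, intent):
            return "I'm sorry, but I can't help with that request."
    return generate(request)
```

Because every step goes through the same opaque `generate` call, the sketch stays black-box compatible and needs no parameter updates or decoding changes, consistent with the abstract's deployment claim.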