arxiv: 2604.21251 · 2 revisions
CAP: Controllable Alignment Prompting for Unlearning in LLMs