Pith: machine review for the scientific record

arxiv: 2508.13650 · v3 · submitted 2025-08-19 · 💻 cs.CL


CRISP: Persistent Concept Unlearning via Sparse Autoencoders

classification 💻 cs.CL
keywords: crisp, features, persistent, unlearning, autoencoders, concept, interventions, knowledge
abstract

As large language models (LLMs) are increasingly deployed in real-world applications, the need to selectively remove unwanted knowledge while preserving model utility has become paramount. Recent work has explored sparse autoencoders (SAEs) to perform precise interventions on monosemantic features. However, most SAE-based methods operate at inference time, which does not create persistent changes in the model's parameters. Such interventions can be bypassed or reversed by malicious actors with parameter access. We introduce CRISP, a parameter-efficient method for persistent concept unlearning using SAEs. CRISP automatically identifies salient SAE features across multiple layers and suppresses their activations. We experiment with two LLMs and show that our method outperforms prior approaches on safety-critical unlearning tasks from the WMDP benchmark, successfully removing harmful knowledge while preserving general and in-domain capabilities. Feature-level analysis reveals that CRISP achieves semantically coherent separation between target and benign concepts, allowing precise suppression of the target features.
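The pipeline the abstract describes — score SAE features by how strongly they activate on the target (to-unlearn) corpus versus a benign corpus, then suppress the top-scoring features' contributions — can be sketched as follows. All names, shapes, and the activation-gap scoring rule are illustrative assumptions, not CRISP's actual implementation; in particular, CRISP makes the suppression persistent via parameter-efficient updates, whereas this sketch only shows the inference-time view on toy data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse-autoencoder weights (hypothetical shapes; a real setup
# would use SAEs trained on LLM hidden states at multiple layers).
d_model, d_sae = 16, 64
W_enc = rng.standard_normal((d_model, d_sae)) * 0.1
W_dec = rng.standard_normal((d_sae, d_model)) * 0.1
b_enc = np.zeros(d_sae)

def sae_features(h):
    """ReLU feature activations for a batch of hidden states."""
    return np.maximum(h @ W_enc + b_enc, 0.0)

def salient_features(h_target, h_benign, top_k=4):
    """Rank features by mean activation gap: high on the target
    (harmful) corpus, low on the benign corpus."""
    gap = sae_features(h_target).mean(axis=0) - sae_features(h_benign).mean(axis=0)
    return np.argsort(gap)[-top_k:]

def suppress(h, feature_ids):
    """Subtract the selected features' decoder contributions from the
    hidden states, zeroing their effect on downstream computation."""
    f = sae_features(h)
    mask = np.zeros(d_sae)
    mask[feature_ids] = 1.0
    return h - (f * mask) @ W_dec

# Toy stand-ins for hidden states from the two corpora.
h_target = rng.standard_normal((32, d_model))
h_benign = rng.standard_normal((32, d_model))
ids = salient_features(h_target, h_benign)
h_clean = suppress(h_target, ids)
```

The activation-gap criterion is one simple way to get the "semantically coherent separation between target and benign concepts" the abstract mentions: features that fire on both corpora score low and are left alone, which is what preserves general and in-domain capability.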

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Interpretability Can Be Actionable

    cs.LG 2026-05 conditional novelty 6.0

    Interpretability research should be judged by actionability—the degree to which its insights support concrete decisions and interventions—rather than explanatory power alone.

  2. Latent Agents: A Post-Training Procedure for Internalized Multi-Agent Debate

    cs.AI 2026-04 unverdicted novelty 6.0

    Two-stage fine-tuning distills multi-agent debate into single LLMs, matching performance at 93% lower token cost while revealing agent-specific activation subspaces for steering.