pith. machine review for the scientific record.

arxiv: 2604.26866 · v1 · submitted 2026-04-29 · 💻 cs.CL · cs.LG

Recognition: unknown

MoRFI: Monotonic Sparse Autoencoder Feature Identification

Authors on Pith: no claims yet

Pith reviewed 2026-05-07 12:14 UTC · model grok-4.3

classification 💻 cs.CL cs.LG
keywords knowledge · fine-tuning · hallucinations · MoRFI · across · causally · controlled · directions

The pith

Fine-tuning LLMs on new facts increases hallucinations by disrupting specific monotonic latent directions in the residual stream; these directions are discoverable across models via MoRFI on SAEs and can be corrected with single-latent interventions.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Large language models store most facts during initial pre-training. When later fine-tuned on new question-answering examples, they start making more mistakes on previously known facts. The authors ran controlled tests on Llama, Gemma, and Mistral models, varying how much new knowledge was added and how many training rounds were used. They measured rising hallucination rates on held-out tests. To find the cause inside the model, they examined internal activation patterns using sparse autoencoders that had already been trained on the models. They created MoRFI, a filter that keeps only those autoencoder features whose activation strength changes steadily and in one direction as more new knowledge is introduced. These features appear to mark directions in the model's residual stream where old knowledge becomes harder to retrieve. When the authors changed the strength of just one such feature, the model recovered correct answers without retraining. The work suggests hallucinations after fine-tuning are not random noise but tied to specific, identifiable internal pathways.
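The filtering step described above lends itself to a short sketch. The abstract does not spell out MoRFI's selection rule, so everything here is illustrative: the mean_acts layout, the Spearman-rank criterion, and the 0.95 cutoff are assumptions, not the authors' implementation.

    # Illustrative sketch of a MoRFI-style monotonicity filter.
    # `mean_acts` and the 0.95 cutoff are assumptions, not the paper's values.
    import numpy as np
    from scipy.stats import spearmanr

    def monotonic_features(mean_acts: np.ndarray,
                           new_knowledge_frac: np.ndarray,
                           rho_cutoff: float = 0.95) -> np.ndarray:
        """Keep SAE features whose mean activation moves steadily in one
        direction as the fine-tuning mixture adds more new knowledge.

        mean_acts: (n_mixtures, n_features) mean activation per checkpoint,
                   e.g. rows for 0%, 25%, 50%, 75%, 100% unknown facts.
        new_knowledge_frac: (n_mixtures,) fraction of new facts per mixture.
        """
        keep = []
        for j in range(mean_acts.shape[1]):
            rho, _ = spearmanr(new_knowledge_frac, mean_acts[:, j])
            # |rho| near 1: the feature rises or falls monotonically
            # with the amount of new knowledge.
            if np.isfinite(rho) and abs(rho) >= rho_cutoff:
                keep.append(j)
        return np.asarray(keep, dtype=int)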

Core claim

Our findings show that exposure to unknown facts disrupts the model's ability to retrieve stored knowledge along a set of directions in the residual stream. Our pipeline reliably discovers them across distinct models, recovering knowledge through single-latent interventions.

Load-bearing premise

That SAE features showing monotonic response to the controlled fine-tuning mixtures are causally responsible for the observed rise in hallucinations rather than merely correlated with it.

read the original abstract

Large language models (LLMs) acquire most of their factual knowledge during the pre-training stage, through next token prediction. Subsequent stages of post-training often introduce new facts outwith the parametric knowledge, giving rise to hallucinations. While it has been demonstrated that supervised fine-tuning (SFT) on new knowledge may exacerbate the problem, the underlying mechanisms are still poorly understood. We conduct a controlled fine-tuning experiment, focusing on closed-book QA, and find latent directions that causally contribute to hallucinations. Specifically, we fine-tune Llama 3.1 8B, Gemma 2 9B and Mistral 7B v03 on seven distinct single QA datasets, controlling for the percentage of new knowledge and number of training epochs. By measuring performance on the test set, we validate that incrementally introducing new knowledge increases hallucinations, with the effect being more pronounced with prolonged training. We leverage pre-trained sparse autoencoders (SAEs) to analyze residual stream activations across various checkpoints for each model and propose Monotonic Relationship Feature Identification (MoRFI) for capturing causally relevant latents. MoRFI filters SAE features that respond monotonically to controlled fine-tuning data mixtures of a target property. Our findings show that exposure to unknown facts disrupts the model's ability to retrieve stored knowledge along a set of directions in the residual stream. Our pipeline reliably discovers them across distinct models, recovering knowledge through single-latent interventions.
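The abstract's closing claim, recovering knowledge through single-latent interventions, also admits a compact sketch. Assuming an SAE object with an encode() method and a decoder matrix W_dec of shape (n_latents, d_model), as in common SAE libraries, clamping one latent and patching its decoder direction back into the residual stream could look roughly like this; the target value and the wiring are assumptions, not the paper's procedure.

    # Hedged sketch of a single-latent intervention on the residual stream.
    # Assumes an SAE object with .encode() and a decoder matrix .W_dec of
    # shape (n_latents, d_model); `target` is an illustrative free choice.
    import torch

    def clamp_latent(resid: torch.Tensor, sae,
                     latent_idx: int, target: float) -> torch.Tensor:
        """Set one SAE latent to `target` and patch the difference back
        into the residual activation `resid` (shape: ..., d_model)."""
        z = sae.encode(resid)                # (..., n_latents) sparse codes
        delta = target - z[..., latent_idx]  # required shift for this latent
        direction = sae.W_dec[latent_idx]    # (d_model,) decoder row
        # Editing along the decoder direction changes only this latent's
        # contribution and leaves the SAE reconstruction error untouched.
        return resid + delta.unsqueeze(-1) * direction

In practice such a patch would run inside a forward hook at the layer the SAE was trained on; no retraining is involved, consistent with the claim that knowledge is recovered by intervention alone.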

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

Based solely on the abstract, the central claim rests on the assumption that pre-trained SAEs yield causally meaningful features and that monotonicity with respect to the fine-tuning mixture identifies the relevant directions.

free parameters (1)
  • Monotonicity selection threshold
    MoRFI must apply some criterion to decide which SAE features count as monotonic; the abstract does not specify how this threshold is chosen or validated. One way it could be calibrated is sketched after this list.
axioms (1)
  • domain assumption: Pre-trained sparse autoencoders capture causally relevant directions in LLM residual-stream activations.
    The entire analysis pipeline depends on the quality and interpretability of existing SAEs without additional validation for the hallucination task.
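One way the flagged threshold could be validated, not described in the abstract, is calibration against a permutation null: shuffle the mixture labels, recompute the monotonicity scores, and keep only features whose true score beats a high quantile of the shuffled ones. A minimal sketch, with n_perm and the 0.99 quantile as arbitrary illustrative choices:

    # Illustrative permutation-null calibration for the monotonicity cutoff.
    # Not a procedure from the paper; n_perm and q are arbitrary choices.
    import numpy as np
    from scipy.stats import spearmanr

    def null_calibrated_cutoff(mean_acts, new_knowledge_frac,
                               n_perm=1000, q=0.99, seed=0):
        rng = np.random.default_rng(seed)
        null_scores = []
        for _ in range(n_perm):
            shuffled = rng.permutation(new_knowledge_frac)
            for j in range(mean_acts.shape[1]):
                rho, _ = spearmanr(shuffled, mean_acts[:, j])
                if np.isfinite(rho):
                    null_scores.append(abs(rho))
        # Features whose true |rho| exceeds this quantile are unlikely
        # to look monotonic by chance alone. With few mixtures the null
        # is coarse, so q should be read as a rough guide, not a p-value.
        return float(np.quantile(null_scores, q))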

pith-pipeline@v0.9.0 · 5556 in / 1416 out tokens · 107378 ms · 2026-05-07T12:14:03.133068+00:00 · methodology

discussion (0)
