2 Pith papers cite this work (cs.CL, 2026). Representative citing papers:
- From Backward Spreading to Forward Replay: Revisiting Target Construction in LLM Parameter Editing
  Forward replay replaces backward spreading in LLM parameter editing by optimizing the target hidden state at the first editing layer and propagating it forward, yielding more accurate layer-wise targets at the same computational cost (see the first sketch after this list).
- Tracing Relational Knowledge Recall in Large Language Models
  Per-head attention contributions to the residual stream serve as strong linear features for classifying relational knowledge in LLMs, with probe accuracy correlating with relation specificity and signal distribution (see the second sketch after this list).
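
The first summary only states the idea, so the following is a minimal, hypothetical sketch of forward-replay target construction as described: rather than spreading a target backward across layers, optimize the hidden state entering the first editing layer against an edit objective, then replay it forward to read off a target for every edited layer. All names here (`forward_replay_targets`, `blocks`, `loss_fn`) are illustrative assumptions, not the paper's actual API, and the real objective and optimizer may differ.

```python
import torch

def forward_replay_targets(blocks, h_first, loss_fn, n_steps=100, lr=1e-2):
    # Optimize a residual delta on the hidden state entering the FIRST
    # editing layer so that, after replaying forward through the edited
    # span, the final state minimizes the edit objective `loss_fn`.
    delta = torch.zeros_like(h_first, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(n_steps):
        h = h_first + delta
        for block in blocks:           # forward replay through the span
            h = block(h)
        loss = loss_fn(h)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # A final replay reads off the layer-wise targets implied by the
    # optimized first-layer state (no gradients needed here).
    with torch.no_grad():
        h = h_first + delta
        targets = []
        for block in blocks:
            targets.append(h.clone())  # target entering each edited layer
            h = block(h)
        targets.append(h.clone())      # state after the last edited layer
    return targets

# Toy usage: each "layer" is a plain linear map, and the (assumed)
# objective pulls the final state toward an arbitrary desired vector.
blocks = [torch.nn.Linear(16, 16) for _ in range(3)]
h0, h_star = torch.randn(1, 16), torch.randn(1, 16)
targets = forward_replay_targets(blocks, h0, lambda h: ((h - h_star) ** 2).mean())
print([t.shape for t in targets])
```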
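
The second summary describes linear probing of per-head attention contributions. Below is a hedged sketch of that setup with synthetic stand-in data: in practice `head_contribs` would hold one attention head's additive contribution to the residual stream per example (extracted, for instance, with forward hooks), and `labels` would mark which relation each example instantiates; the paper's actual extraction and probe may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for extracted features and relation labels.
rng = np.random.default_rng(0)
head_contribs = rng.normal(size=(1000, 768))  # (n_examples, d_model)
labels = rng.integers(0, 5, size=1000)        # 5 hypothetical relations

X_tr, X_te, y_tr, y_te = train_test_split(
    head_contribs, labels, test_size=0.2, random_state=0)

probe = LogisticRegression(max_iter=1000)     # the linear probe
probe.fit(X_tr, y_tr)
print(f"held-out probe accuracy: {probe.score(X_te, y_te):.3f}")
```

On real head contributions, the held-out accuracy of such a probe is the quantity the summary says tracks relation specificity; on this random data it sits near chance by construction.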