CERSA: Cumulative Energy-Retaining Subspace Adaptation for Memory-Efficient Fine-Tuning
CERSA derives low-rank fine-tuning subspaces from the SVD principal components that retain 90-95% of the cumulative spectral energy, delivering higher performance than LoRA and other PEFT baselines at substantially lower memory cost across vision, generation, and language tasks.
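The core mechanism described in the abstract, picking the smallest rank whose leading singular values retain a target share (here 90-95%) of the cumulative spectral energy, can be sketched as follows. This is a minimal illustration based only on the abstract, not the authors' implementation: the names `energy_rank` and `cersa_init`, the 0.90 threshold, and the choice to return scaled principal factors are all assumptions.

```python
import torch

def energy_rank(S: torch.Tensor, threshold: float = 0.90) -> int:
    """Smallest rank r whose top-r singular values retain `threshold`
    of the total spectral energy (sum of squared singular values).
    Hypothetical helper; name and signature are assumptions."""
    energy = S.pow(2)
    cum = torch.cumsum(energy, dim=0) / energy.sum()
    # Count entries strictly below the threshold, then +1 for the
    # first index at which the cumulative energy reaches it.
    return int((cum < threshold).sum().item()) + 1

def cersa_init(W: torch.Tensor, threshold: float = 0.90):
    """Derive a low-rank subspace from a pretrained weight W via SVD.

    Returns factors (A, B) spanning the principal subspace that keeps
    `threshold` of W's spectral energy. In a CERSA-style setup, only
    such low-rank factors would be trained, so optimizer memory scales
    with r rather than with the full weight size.
    """
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    r = energy_rank(S, threshold)
    A = U[:, :r] * S[:r]   # (m, r): principal directions scaled by singular values
    B = Vh[:r, :]          # (r, n): corresponding right singular vectors
    return A, B

# Example: a 768x768 projection typically reaches ~90% energy at r << 768.
W = torch.randn(768, 768)
A, B = cersa_init(W, threshold=0.90)
print(A.shape[1], "ranks retained")
```

For a random dense matrix the retained rank stays close to full, but pretrained weight matrices tend to have fast-decaying spectra, which is what makes an energy-based cutoff far smaller than the full dimension and hence memory-efficient.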