pith. machine review for the scientific record.

arxiv: 2505.12942 · v4 · submitted 2025-05-19 · 💻 cs.CL · cs.AI · cs.LG

Recognition: unknown

A3 : an Analytical Low-Rank Approximation Framework for Attention

Authors on Pith: no claims yet
classification 💻 cs.CL · cs.AI · cs.LG
keywords low-rank approximation · compression · framework · analytical · cache
read the original abstract

Large language models have demonstrated remarkable performance; however, their massive parameter counts make deployment highly expensive. Low-rank approximation offers a promising compression solution, yet existing approaches have two main limitations: (1) they focus on minimizing the output error of individual linear layers, without considering the architectural characteristics of Transformers, and (2) they decompose a large weight matrix into two small low-rank matrices. Consequently, these methods often fall short compared to other compression techniques like pruning and quantization, and introduce runtime overhead such as extra GEMM kernel launches and memory operations for the decomposed small matrices. To address these limitations, we propose $A^3$, a post-training low-rank approximation framework. $A^3$ splits a Transformer layer into three functional components, namely $\texttt{QK}$, $\texttt{OV}$, and $\texttt{MLP}$, and provides analytical solutions that reduce the hidden dimension size inside each component while minimizing the component's functional loss. This approach directly reduces model sizes, KV cache sizes, and FLOPs without introducing any runtime overhead. Through extensive experiments, we show that $A^3$ maintains superior performance compared to SoTAs. For example, under the same reduction budget in computation and memory, our low-rank approximated LLaMA 3.1-70B achieves a perplexity of 4.69 on WikiText-2, outperforming the previous SoTA's 7.87 by 3.18. We also show versatile applications of $A^3$ in KV cache compression, integration with quantization, fine-tuning, and mixed-rank assignments. We open-sourced our framework and code at https://github.com/DeepWok/a3.

This paper has not been read by Pith yet.

discussion (0)
