Learned token pruning for transformers, 2022
1 Pith paper cites this work. Polarity classification is still indexing.
Fields: cs.CL (1)
Years: 2026 (1)
Verdicts: UNVERDICTED (1)
Representative citing papers: 1
Citing papers explorer
- Large Language Model as Token Compressor and Decompressor
A pretrained LLM is adapted via LoRA fine-tuning into a content-adaptive compressor that maps long texts to compact variable-length Z-token sequences while preserving reconstruction quality and downstream performance.
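The LoRA adaptation mentioned in the abstract replaces full fine-tuning with a trainable low-rank update on frozen weights. A minimal NumPy sketch of that update is below; the function name `lora_linear`, the rank `r`, and the scaling `alpha` are illustrative assumptions, not details from the paper.

```python
import numpy as np

def lora_linear(x, W, A, B, alpha=16):
    """Forward pass of a linear layer with a LoRA low-rank update.

    W: frozen pretrained weight, shape (d_out, d_in)
    A: trainable down-projection, shape (r, d_in)
    B: trainable up-projection, shape (d_out, r), zero-initialized
    Only A and B are trained, so the adapter adds r*(d_in + d_out)
    parameters instead of d_in*d_out.
    """
    r = A.shape[0]
    scale = alpha / r
    return x @ W.T + scale * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 8
W = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01       # small random init
B = np.zeros((d_out, r))                    # zero init: adapter starts inert

x = rng.normal(size=(4, d_in))
y = lora_linear(x, W, A, B)
# With B = 0 the LoRA path contributes nothing, so the output
# equals the frozen pretrained layer's output.
assert np.allclose(y, x @ W.T)
```

In practice the same update is applied inside the attention projections of the pretrained LLM, and only `A` and `B` receive gradients during compressor training.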