Pith · machine review for the scientific record

arxiv: 2512.12131 · v2 · submitted 2025-12-13 · 💻 cs.LG · cs.DC

Recognition: unknown

BOOST: BOttleneck-Optimized Scalable Training Framework for Low-Rank Large Language Models

Authors on Pith: no claims yet
Classification: cs.LG · cs.DC
Keywords: low-rank, architectures, BOOST, bottleneck, parallelism, training, communication, model
Abstract

The scale of transformer model pre-training is constrained by rising computation and communication costs. Low-rank bottleneck architectures offer a promising way to significantly reduce training time and memory footprint with minimal impact on accuracy. Despite their algorithmic efficiency, bottleneck architectures scale poorly under standard tensor parallelism: simply applying 3D parallelism designed for full-rank methods leads to excessive communication and poor GPU utilization. To address this limitation, we propose BOOST, an efficient training framework tailored for large-scale low-rank bottleneck architectures. BOOST introduces a novel Bottleneck-aware Tensor Parallelism and combines optimizations such as online-RMSNorm, linear layer grouping, and low-rank activation checkpointing to achieve end-to-end training speedup. Evaluations on different low-rank bottleneck architectures demonstrate that BOOST achieves a 1.46-1.91$\times$ speedup over full-rank baselines and a 1.87-2.27$\times$ speedup over low-rank models with naively integrated 3D parallelism, with improved GPU utilization and reduced communication overhead.
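The core idea the abstract leans on is easy to make concrete: replace a dense d×d weight with a rank-r factorization, shrinking the per-token matmul cost from d^2 to roughly 2·r·d. Below is a minimal sketch of such a bottleneck layer, assuming a PyTorch-style implementation; the class name `LowRankLinear`, the sizes, and the chosen rank are illustrative assumptions, not BOOST's actual API.

```python
# Illustrative sketch only: a rank-r bottleneck in place of a dense
# d_in x d_out linear layer. Names and sizes are assumptions, not from the paper.
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """y = up(down(x)): down maps d_in -> r, up maps r -> d_out, with r << d."""
    def __init__(self, d_in: int, d_out: int, rank: int):
        super().__init__()
        self.down = nn.Linear(d_in, rank, bias=False)  # project into the bottleneck
        self.up = nn.Linear(rank, d_out, bias=False)   # project back out

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))

# Rough per-token multiply-add count for d_in = d_out = 4096, rank = 256:
#   full-rank: 4096 * 4096         ~= 16.8M
#   low-rank:  256 * (4096 + 4096) ~=  2.1M  (about 8x fewer)
layer = LowRankLinear(4096, 4096, rank=256)
print(layer(torch.randn(2, 4096)).shape)  # torch.Size([2, 4096])
```

The sketch also hints at why naive tensor parallelism underperforms here, as the abstract claims: Megatron-style TP still all-reduces activations of the full output dimension, a volume the factorization does not shrink, so communication comes to dominate the reduced compute. One plausible bottleneck-aware alternative (not necessarily BOOST's actual scheme, which the abstract does not detail) is to exchange the rank-r intermediate instead, cutting the all-reduced volume by roughly d/r.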

This paper has not been read by Pith yet.

Discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. ReCoVer: Resilient LLM Pre-Training System via Fault-Tolerant Collective and Versatile Workload

    cs.DC · 2026-05 · unverdicted · novelty 6.0

    ReCoVer uses fault-tolerant collectives, in-step recovery, and dynamic microbatch redistribution to maintain training trajectory equivalence under GPU failures, delivering 2.23x higher effective throughput than checkp...