DistDGL: Distributed Graph Neural Network Training for Billion-Scale Graphs
1 Pith paper cites this work. Polarity classification is still indexing.
1 Pith paper citing it

Citation-role summary: background (1)
Citation-polarity summary: still indexing

Fields: cs.LG (1)
Years: 2026 (1)
Verdicts: UNVERDICTED (1)
Roles: background (1)
Polarities: background (1)

Representative citing papers
Communication-free Sampling and 4D Hybrid Parallelism for Scalable Mini-batch GNN Training
ScaleGNN uses communication-free sampling and 4D parallelism to scale mini-batch GNN training to 2048 GPUs, achieving 3.5x speedup over prior state-of-the-art on ogbn-products.