DistDGL: Distributed graph neural network training for billion-scale graphs
1 Pith paper cites this work (polarity classification is still indexing).
Fields: cs.DC (1)
Years: 2026 (1)
Verdicts: UNVERDICTED (1)
ATLAS: Efficient Out-of-Core Inference for Billion-Scale Graph Neural Networks
ATLAS achieves 12-30x faster out-of-core full-graph GNN inference on graphs with up to 4B edges by combining broadcast-based layer-wise execution with graph reordering, minimum-pending-message eviction, and a GPU-accelerated tiered memory-disk hierarchy.
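To make the "minimum-pending-message eviction" idea concrete, here is a minimal sketch of such a policy: when the in-memory budget is exceeded, evict the cached graph partition with the fewest broadcast messages still destined for it, since it is the cheapest to spill. All class and method names below are illustrative assumptions, not ATLAS's actual API.

```python
class PartitionCache:
    """Hypothetical sketch of minimum-pending-message eviction for
    out-of-core layer-wise GNN execution (names are illustrative)."""

    def __init__(self, capacity):
        self.capacity = capacity  # max partitions held in memory
        self.resident = {}        # partition id -> cached feature block
        self.pending = {}         # partition id -> pending message count

    def note_messages(self, pid, count):
        """Record `count` broadcast messages still destined for partition pid."""
        self.pending[pid] = self.pending.get(pid, 0) + count

    def deliver(self, pid, count):
        """Mark `count` messages to partition pid as consumed."""
        self.pending[pid] = max(0, self.pending.get(pid, 0) - count)

    def load(self, pid, block):
        """Bring a partition into memory, evicting if over capacity."""
        if pid in self.resident:
            return
        while len(self.resident) >= self.capacity:
            # Evict the resident partition with the fewest pending
            # messages: it will absorb the fewest future updates.
            victim = min(self.resident, key=lambda p: self.pending.get(p, 0))
            self.resident.pop(victim)  # a real system would spill to the disk tier
        self.resident[pid] = block
```

For example, with capacity 2 and partitions 0 and 1 resident, loading partition 2 evicts whichever of the two has fewer pending messages.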