Plexus: Taming billion-edge graphs with 3D parallel full-graph GNN training
3 Pith papers cite this work.
Citation summary: all 3 citing papers are from 2026; verdicts are still unclassified, and the cited role is background.
Citing papers
- Accelerating Quantum Tensor Network Simulations with Unified Path Variations and Non-Degenerate Batched Sampling
  New techniques for error-independent unified path variation, non-degenerate batched sampling, and flexible contraction accelerate tensor network quantum trajectory simulations by more than 10^8 times.
- Communication-free Sampling and 4D Hybrid Parallelism for Scalable Mini-batch GNN Training
  ScaleGNN uses communication-free sampling and 4D parallelism to scale mini-batch GNN training to 2048 GPUs, achieving a 3.5x speedup over the prior state of the art on ogbn-products.
- Architectural Trade-offs in the Energy-Efficient Era: A Comparative Study of Power-Capping NVIDIA H100 and H200
  The H100 shows slightly higher efficiency for compute-bound workloads, while the H200 excels for memory-bound ones across power-cap levels.