pith. machine review for the scientific record.

arxiv: 2603.28096 · v3 · submitted 2026-03-30 · 💻 cs.NI

Recognition: no theorem link

Beyond Traffic Matrix: DELTA -- A DAG-Aware OCS Logical Topology Optimization for AIDCs

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 02:26 UTC · model grok-4.3

classification 💻 cs.NI
keywords delta · reduces · topology · traffic · aidcs · communication · optical · workloads

The pith

DELTA formulates OCS logical topology design as a DAG-aware MILP with variable-length intervals and dual-track scaling, cutting communication time by up to 17.5% and optical port use by at least 20% on large LLM workloads.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Large language model training moves data between thousands of GPUs in bursty patterns set by parallelization strategies. Optical circuit switches provide high bandwidth but cannot reconfigure inside one training iteration, so designers must pick a fixed logical topology ahead of time. Earlier methods used traffic matrices that average demands and miss the exact timing of concurrent independent channels. DELTA instead builds a directed acyclic graph that records every computation step and its required communication edges. It solves a mixed-integer linear program that assigns optical ports to these edges while using slack time on non-critical paths to reuse ports. A variable-length time interval formulation shrinks the search space, and pruning plus hot-start heuristics let it scale to thousand-GPU clusters. On evaluated workloads the resulting topology finishes communication faster and frees ports that can be reassigned to remaining bottlenecks.
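The slack mechanism described above can be sketched concretely. The toy DAG below is hypothetical (task names, durations, and edges are illustrative, not drawn from the paper); a forward pass computes earliest start times, a backward pass computes latest start times against the makespan, and the gap between them is the temporal slack that DELTA exploits to let non-critical communication tasks reuse ports held by critical ones.

```python
# Hedged sketch: critical-path slack on a toy computation-communication DAG.
# Real DAGs in DELTA come from LLM parallelization traces; this is illustrative.
from collections import defaultdict

def slack(tasks, edges):
    """tasks: {name: duration}; edges: [(u, v)] meaning u precedes v.
    Returns (makespan, {name: slack})."""
    succ, pred = defaultdict(list), defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)
    # Topological order via Kahn's algorithm.
    order, indeg = [], {t: len(pred[t]) for t in tasks}
    ready = [t for t in tasks if indeg[t] == 0]
    while ready:
        t = ready.pop()
        order.append(t)
        for v in succ[t]:
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    # Forward pass: earliest start times.
    est = {t: 0 for t in tasks}
    for t in order:
        for v in succ[t]:
            est[v] = max(est[v], est[t] + tasks[t])
    makespan = max(est[t] + tasks[t] for t in tasks)
    # Backward pass: latest start times that keep the makespan intact.
    lst = {t: makespan - tasks[t] for t in tasks}
    for t in reversed(order):
        for v in succ[t]:
            lst[t] = min(lst[t], lst[v] - tasks[t])
    return makespan, {t: lst[t] - est[t] for t in tasks}

# Two parallel branches: comm task "c2" sits off the critical path, so its
# positive slack lets it be delayed to reuse a port held by "c1".
tasks = {"a": 2, "c1": 4, "b": 2, "c2": 1, "d": 1}
edges = [("a", "c1"), ("b", "c2"), ("c1", "d"), ("c2", "d")]
ms, sl = slack(tasks, edges)
print(ms, sl["c2"])  # → 7 3 (c2 has 3 units of slack; c1 has none)
```

A traffic-matrix view of the same workload would average "c1" and "c2" into one aggregate demand and lose exactly this slack structure.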

Core claim

Evaluations on large-scale LLM workloads show that DELTA reduces communication time by up to 17.5% compared to state-of-the-art traffic-matrix-based baselines. Furthermore, the framework reduces optical port consumption by at least 20%; dynamically reallocating these surplus ports to bandwidth-bottlenecked workloads reduces their performance gap relative to ideal non-blocking electrical networks by up to 26.1%.

Load-bearing premise

The computation-communication DAG extracted from the workload exactly encodes all concurrent bandwidth demands and independent channels, and the MILP solution with variable-length intervals remains feasible and near-optimal after pruning and heuristic hot-starting for thousand-GPU clusters.

The original abstract

The rapid scaling of large language models (LLMs) exacerbates communication bottlenecks in AI data centers (AIDCs). To overcome this, optical circuit switches (OCS) are increasingly adopted for their superior bandwidth capacity and energy efficiency. However, their reconfiguration overhead precludes intra-iteration topology update, necessitating a priori engineering of a static topology to absorb time-varying LLM traffic. Existing methods engineer these topologies based on traffic matrices. However, this representation obscures the bursty concurrent bandwidth demands dictated by parallelization strategies and fails to account for the independent channels required for concurrent communication. To address this, we propose DELTA, an efficient logical topology optimization framework for AIDCs that leverages the computation-communication directed acyclic graph (DAG) to encode time-varying traffic patterns into a Mixed-Integer Linear Programming (MILP) model, while exploiting the temporal slack of non-critical tasks to save optical ports without penalizing iteration makespan. By pioneering a variable-length time interval formulation, DELTA significantly reduces the solution space compared to the fixed-time-step formulation. To scale to thousand-GPU clusters, we design a dual-track acceleration strategy that combines search space pruning (reducing complexity from quadratic to linear) with heuristic hot-starting. Evaluations on large-scale LLM workloads show that DELTA reduces communication time by up to 17.5% compared to state-of-the-art traffic-matrix-based baselines. Furthermore, the framework reduces optical port consumption by at least 20%; dynamically reallocating these surplus ports to bandwidth-bottlenecked workloads reduces their performance gap relative to ideal non-blocking electrical networks by up to 26.1%, ultimately enabling most workloads to achieve near-ideal performance.
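The variable-length interval formulation the abstract credits for shrinking the solution space can be illustrated with a small sketch (task start/end times below are hypothetical): instead of one decision variable per fixed time step, time is cut only at task start and end events, so the number of intervals scales with the number of tasks rather than with the time resolution. The peak number of tasks spanning any one interval then lower-bounds the concurrent channels, and hence ports, required before any slack-shifting.

```python
# Hedged sketch of event-based (variable-length) intervals vs fixed time steps.
# Task times are illustrative; DELTA derives them from the workload DAG.

def event_intervals(tasks):
    """tasks: [(start, end)] per communication task.
    Returns the variable-length intervals covering the horizon."""
    cuts = sorted({t for s, e in tasks for t in (s, e)})
    return list(zip(cuts, cuts[1:]))

def peak_channels(tasks, intervals):
    # A task occupies interval (a, b) iff it spans the whole interval.
    return max(sum(s <= a and e >= b for s, e in tasks) for a, b in intervals)

tasks = [(0, 7), (2, 3), (2, 10), (6, 9)]
intervals = event_intervals(tasks)
print(len(intervals), peak_channels(tasks, intervals))  # → 6 3
# 6 intervals regardless of time resolution; a fixed-step model at
# resolution 0.1 would need 100 steps for the same horizon of 10.
```

The MILP then assigns ports per interval, which is where the claimed reduction in solution-space size comes from.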

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The central claim rests on the assumption that the workload DAG faithfully represents all concurrent communication constraints and that non-critical slack can be exploited without increasing makespan; no free parameters or invented entities are declared in the abstract.

axioms (2)
  • domain assumption The extracted computation-communication DAG captures all bursty concurrent bandwidth demands and independent channels required by the parallelization strategy.
    Invoked when the MILP encodes traffic patterns from the DAG.
  • domain assumption Temporal slack on non-critical tasks can be used to reduce optical port count without increasing iteration makespan.
    Core justification for the port-saving mechanism.

pith-pipeline@v0.9.0 · 5615 in / 1389 out tokens · 52666 ms · 2026-05-14T02:26:51.298238+00:00 · methodology

discussion (0)
