Cited by 1 Pith paper (polarity classification still indexing). Field: cs.CL. Year: 2026.
Ada-MK: Adaptive MegaKernel Optimization via Automated DAG-based Search for LLM Inference
Ada-MK fuses LLM operators into persistent MegaKernels via MLIR DAG search and 3D shared-memory modeling, delivering up to 23.6% higher single-batch throughput than TensorRT-LLM on NVIDIA L20.
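The abstract's core idea of fusing operator DAGs into MegaKernels under a shared-memory budget can be illustrated with a toy sketch. This is a hypothetical greedy chain-fusion pass, not Ada-MK's actual MLIR search or its 3D shared-memory model; the operator names, costs, and budget are all illustrative assumptions.

```python
SMEM_BUDGET_KB = 96  # illustrative per-SM shared-memory budget (assumed)

# Toy operator DAG for part of a transformer block:
# op -> (shared-memory cost in KB, successor ops). Costs are made up.
DAG = {
    "rmsnorm":   (8,  ["qkv_proj"]),
    "qkv_proj":  (32, ["attention"]),
    "attention": (48, ["out_proj"]),
    "out_proj":  (32, []),
}

def fuse_chain(dag, start, budget):
    """Walk a linear chain from `start`, fusing operators into groups
    until the summed shared-memory footprint would exceed `budget`."""
    groups, current, used = [], [], 0
    op = start
    while op is not None:
        cost, succs = dag[op]
        if current and used + cost > budget:
            groups.append(current)          # close the current fused group
            current, used = [], 0
        current.append(op)
        used += cost
        op = succs[0] if succs else None    # this toy handles linear chains only
    if current:
        groups.append(current)
    return groups

print(fuse_chain(DAG, "rmsnorm", SMEM_BUDGET_KB))
# -> [['rmsnorm', 'qkv_proj', 'attention'], ['out_proj']]
```

A real system would search over the full DAG (not just a chain) and model occupancy and memory in more dimensions, but the budget-bounded grouping above captures the basic fusion decision.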