pith. machine review for the scientific record.

arxiv: 2604.04238 · v1 · submitted 2026-04-05 · 💻 cs.PL


Agentic Code Optimization via Compiler-LLM Cooperation

classification 💻 cs.PL
keywords: code, LLM-based optimization, abstraction, compiler levels, compiler-LLM cooperation

Generating performant executables from high-level languages is critical to software performance across a wide range of domains. Modern compilers perform this task by passing code through a series of well-studied optimizations at progressively lower levels of abstraction, but they may miss optimization opportunities that require high-level reasoning about a program's purpose. Recent work has proposed using LLMs to fill this gap. While LLMs can achieve large speedups on some programs, they frequently generate code that is incorrect. In this work, we propose a method to balance the correctness of conventional compiler optimizations with the "creativity" of LLM-based code generation: compiler-LLM cooperation. Our approach integrates existing compiler optimization passes with LLM-based code generation at multiple levels of abstraction, retaining the best features of both types of code optimization. We realize our approach with a multi-agent system that includes (1) LLM-based optimization agents for each level of abstraction, (2) individual compiler constituents as tools, (3) an LLM-based test generation agent that probes the correctness and performance of generated code, and (4) a guiding LLM that orchestrates the other components. The strategy enables LLM-based optimization of input programs at multiple levels of abstraction and introduces a method for distributing computational budget between levels. Our extensive evaluation shows that compiler-LLM cooperation outperforms both existing compiler optimizations and level-specific LLM-based baselines, producing speedups of up to 1.25x.

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 2 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. HARBOR: Automated Harness Optimization

    cs.LG 2026-04 unverdicted novelty 6.0

    HARBOR formalizes harness optimization as constrained noisy Bayesian optimization over mixed-variable spaces and reports a case study where it outperforms manual tuning on a production coding agent.

  2. LLM-Guided Strategy Synthesis for Scalable Equality Saturation

    cs.AI 2026-04 unverdicted novelty 6.0

    EggMind automates EqSat strategy synthesis via LLMs and EqSatL, cutting final cost 45.1% and peak RAM 69.1% versus full equality saturation on vectorization benchmarks while transferring to tensor compilers.