pith. machine review for the scientific record.

arxiv: 2601.18681 · v2 · submitted 2026-01-26 · 💻 cs.LG · cs.AI · cs.SY · eess.SY · math.OC

Recognition: unknown

ART for Diffusion Sampling: A Reinforcement Learning Approach to Timestep Schedule

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.AI · cs.SY · eess.SY · math.OC
keywords time · deterministic · gaussian · learning · art-rl · bridge · continuous-time · diffusion
Original abstract

We consider time discretization for score-based diffusion models, which generate samples from learned reverse-time dynamics on a finite grid. Uniform and hand-crafted grids can be suboptimal under a budget on the number of time steps. We introduce Adaptive Reparameterized Time (ART), which controls the clock speed of a reparameterized time variable to redistribute computation along the sampling trajectory while preserving the terminal time, with the objective of minimizing the aggregate Euler discretization error. We derive a randomized companion, ART-RL, which recasts ART as a continuous-time reinforcement learning problem with Gaussian policies, and prove a two-directional bridge between the two: the deterministic ART optimum lifts to an optimal Gaussian policy, and conversely any optimal Gaussian policy must recover the ART control through its mean. This bridge turns continuous-time actor-critic learning into a principled, rather than heuristic, route to the deterministic timestep optimum. Within the official EDM pipeline, ART-RL improves FID on CIFAR-10 across a wide range of budgets; after one-time offline training, the distilled deterministic schedule transfers without retraining to AFHQv2, FFHQ, and ImageNet at no extra inference cost.
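To make the abstract's premise concrete, here is a minimal toy sketch (not the paper's method): explicit Euler integration on a simple decaying ODE, comparing a uniform time grid against a warped grid with the same endpoints and step budget. The warp exponent `rho` and the toy dynamics are assumptions chosen purely for illustration; they stand in for the "clock speed" reparameterization that ART learns, showing how redistributing steps along the trajectory can lower aggregate discretization error.

```python
import math

def euler(x0, grid, f):
    """Explicit Euler for dx/dt = f(t, x) on an arbitrary time grid."""
    x = x0
    for t0, t1 in zip(grid, grid[1:]):
        x = x + (t1 - t0) * f(t0, x)
    return x

def uniform_grid(T, n):
    """n equal Euler steps from 0 to T."""
    return [T * i / n for i in range(n + 1)]

def warped_grid(T, n, rho=2.0):
    """Same endpoints 0 and T, but the 'clock' runs slowly at first,
    packing more Euler steps where the toy trajectory changes fastest.
    rho is an illustrative hand-picked warp, not a learned schedule."""
    return [T * (i / n) ** rho for i in range(n + 1)]

# Toy dynamics: x' = -5 exp(-5t), exact solution x(t) = x0 + exp(-5t) - 1.
# The trajectory moves fastest near t = 0, so early steps matter most.
f = lambda t, x: -5.0 * math.exp(-5.0 * t)
x0, T, n = 1.0, 1.0, 8
exact = x0 + math.exp(-5.0 * T) - 1.0

err_uniform = abs(euler(x0, uniform_grid(T, n), f) - exact)
err_warped = abs(euler(x0, warped_grid(T, n), f) - exact)
# With the same step budget, the warped grid achieves lower global error.
```

The toy mirrors the setting described in the abstract: the terminal time is preserved, only the distribution of steps changes, and the gain comes entirely from matching step density to where the trajectory varies fastest. ART replaces the hand-picked warp here with an optimized (and, in ART-RL, learned) schedule.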

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Amortized Guidance for Image Inpainting with Pretrained Diffusion Models

    cs.CV · 2026-05 · unverdicted · novelty 7.0

    AID amortizes guidance for diffusion inpainting by training a reusable module via an auxiliary Gaussian formulation and a continuous-time actor-critic algorithm, improving the quality-speed trade-off with under 1% overhead.