A new algorithm for non-stationary contextual bandits: Efficient, optimal and parameter-free
2 Pith papers cite this work. Polarity classification is still indexing.
Fields: cs.LG. Years: 2026. Verdicts: UNVERDICTED. Representative citing papers: 2.
Citing papers explorer
- Offline Two-Player Zero-Sum Markov Games with KL Regularization: shows that KL regularization enables Õ(1/n) convergence to Nash equilibria in offline two-player zero-sum Markov games under unilateral concentrability, via the ROSE framework and the SOS-MD algorithm.
- Almost Sure Convergence Rates of Stochastic Approximation and Reinforcement Learning via a Poisson-Moreau Drift: establishes almost sure convergence rates arbitrarily close to o(n^{1-2η}) for power-law rates η in (1/2, 1), and o(n^{-1}) for harmonic rates, in contractive stochastic approximation with Markovian noise (see the sketch below).
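Read literally, the rates quoted in the last item can be written out as in the sketch below. This is a hedged reading, not taken from the cited paper: the generic contractive-iteration form, the error metric \|x_n - x^*\|, and the step-size parametrization \alpha_n are assumptions introduced only for illustration, and "arbitrarily close to" is interpreted as holding for every \epsilon > 0.

% Assumed generic contractive stochastic approximation recursion with
% fixed point x^* and Markovian noise \xi_{n+1}:
%   x_{n+1} = x_n + \alpha_n \big( H(x_n, \xi_{n+1}) - x_n \big).
% The quoted almost sure rates, read with a slack \epsilon > 0:
\[
\|x_n - x^*\| = o\!\left(n^{\,1-2\eta+\epsilon}\right)
  \quad \text{if } \alpha_n \propto n^{-\eta},\ \eta \in \left(\tfrac12, 1\right)
  \quad \text{(power-law step sizes)},
\]
\[
\|x_n - x^*\| = o\!\left(n^{-1+\epsilon}\right)
  \quad \text{if } \alpha_n \propto 1/n
  \quad \text{(harmonic step sizes)}.
\]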