Near-Optimal Last-Iterate Convergence for Zero-Sum Games with Bandit Feedback and Opponent Actions

In zero-sum games with bandit feedback where the opponent's actions are observed, an efficient algorithm achieves near-optimal t^{-1/2} last-iterate convergence of the duality gap with high probability.
Published at the International Conference on Machine Learning.
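As a hedged illustration of the quantity in the summary above (not the paper's algorithm), the duality gap of a strategy pair in a finite zero-sum matrix game can be computed directly from best pure responses; the `duality_gap` helper and the matching-pennies payoff matrix below are assumptions of this sketch:

```python
import numpy as np

def duality_gap(A, x, y):
    """Duality gap of strategy pair (x, y) in the zero-sum game with
    payoff matrix A, where the row player maximizes x^T A y.
    For finite games, the best responses are pure strategies, so
    gap(x, y) = max_i (A y)_i - min_j (x^T A)_j. The gap is 0 exactly
    at a Nash equilibrium, and last-iterate convergence means the gap
    of the current (not averaged) iterates shrinks over time."""
    return float(np.max(A @ y) - np.min(x @ A))

# Matching pennies: the unique equilibrium is uniform play.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
uniform = np.array([0.5, 0.5])
print(duality_gap(A, uniform, uniform))  # 0.0 at equilibrium

biased = np.array([0.9, 0.1])
print(duality_gap(A, biased, biased))    # positive away from equilibrium
```

A t^{-1/2} last-iterate rate then says this gap, evaluated at the pair of strategies actually played at round t, decays like 1/sqrt(t).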
2 Pith papers cite this work.
Fields: cs.LG
Citing papers:
- Regret-Oracle Complexity Tradeoffs in Agnostic Online Learning
  A dynamic pruning reduction from agnostic to realizable online learning via weak-consistency oracles achieves O(T^{d_VC+1}) query complexity with near-optimal regret and supplies matching upper and lower bounds on the regret-oracle tradeoff.
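To make the "realizable online learning" side of that reduction concrete, here is a hedged sketch (not the cited paper's construction) of the kind of learner such reductions query: the classic halving algorithm over a finite hypothesis class, which stays consistent with all feedback seen so far. The threshold class and data stream are invented for illustration:

```python
def halving_learner(hypotheses, stream):
    """Realizable online learning via the halving algorithm: predict by
    majority vote over the version space (hypotheses consistent with all
    labels seen so far), then discard every hypothesis the revealed label
    contradicts. Makes at most log2(|H|) mistakes when some h in H is
    perfectly consistent with the stream (the realizable assumption)."""
    version_space = list(hypotheses)
    mistakes = 0
    for x, true_label in stream:
        votes = [h(x) for h in version_space]
        pred = int(sum(votes) * 2 >= len(votes))  # majority vote
        if pred != true_label:
            mistakes += 1
        # prune to hypotheses consistent with the revealed label
        version_space = [h for h in version_space if h(x) == true_label]
    return mistakes

# Toy class: integer thresholds t in 0..9; the hidden target is t = 5.
hyps = [lambda x, t=t: int(x >= t) for t in range(10)]
stream = [(x, int(x >= 5)) for x in [0, 9, 4, 5, 7, 2]]
print(halving_learner(hyps, stream))  # at most floor(log2(10)) = 3 mistakes
```

The agnostic setting drops the assumption that a perfect hypothesis exists, which is what makes the oracle-efficiency question in the bullet above nontrivial.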