Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems
Multi-armed bandit problems are the most basic examples of sequential decision problems with an exploration-exploitation trade-off. This is the balance between staying with the option that gave highest payoffs in the past and exploring new options that might give higher payoffs in the future. Although the study of bandit problems dates back to the Thirties, exploration-exploitation trade-offs arise in several modern applications, such as ad placement, website optimization, and packet routing. Mathematically, a multi-armed bandit is defined by the payoff process associated with each option. In this survey, we focus on two extreme cases in which the analysis of regret is particularly simple and elegant: i.i.d. payoffs and adversarial payoffs. Besides the basic setting of finitely many actions, we also analyze some of the most important variants and extensions, such as the contextual bandit model.
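To make the i.i.d. setting concrete, here is a minimal sketch of the UCB1 index policy, whose regret bound is one of the survey's starting points. This is an illustration, not code from the paper; the `pull` callback and the Bernoulli arms in the usage example are assumptions made for the demo.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Minimal UCB1: after pulling each arm once, play the arm
    maximizing (empirical mean) + sqrt(2 ln t / pulls of that arm)."""
    counts = [0] * n_arms    # pulls per arm
    means = [0.0] * n_arms   # empirical mean reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1      # initialization: try every arm once
        else:
            arm = max(range(n_arms),
                      key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = pull(arm)        # observe a reward in [0, 1]
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # running mean update
    return means, counts

# Usage (assumed setup): three Bernoulli arms with unknown success probabilities.
probs = [0.3, 0.5, 0.7]
means, counts = ucb1(lambda i: float(random.random() < probs[i]),
                     n_arms=len(probs), horizon=10_000)
print(counts)  # the best arm (index 2) should dominate the pull counts
```

Under the survey's i.i.d. model, this policy's regret (the gap between its cumulative reward and that of always playing the best arm) grows only logarithmically in the horizon; the adversarial setting instead calls for randomized strategies such as Exp3.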
Forward citations
Cited by 4 Pith papers
- Information-Theoretic Generalization Bounds for Sequential Decision Making
  A sequential supersample construction controls the generalization gap in adaptive decision-making via sequential conditional mutual information under row-wise exchangeability.
- Optimal Regret for Single Index Bandits
  Single-index bandits have optimal regret of order T^{2/3} via a two-phase algorithm that estimates the index direction with a Stein estimator, then applies UCB on a grid.
- Budgeted Online Influence Maximization
  A new algorithm for online influence maximization under a total budget constraint using the independent cascade model and edge-level semi-bandit feedback, with improved regret bounds for both budgeted and cardinality ...
- Best of both worlds: Stochastic & adversarial best-arm identification
  No algorithm can be optimal in both stochastic and adversarial best-arm identification; a new parameter-free algorithm matches the derived lower bound up to log factors in stochastic cases while handling adversarial rewards.