pith. machine review for the scientific record.

arxiv: 1210.4862 · v1 · submitted 2012-10-16 · 💻 cs.LG · stat.ML


Sample-efficient Nonstationary Policy Evaluation for Contextual Bandits

classification 💻 cs.LG stat.ML
keywords policy · evaluation · approaches · evaluator · exploration · information · learning · nonstationary
Original abstract

We present and prove properties of a new offline policy evaluator for an exploration learning setting which is superior to previous evaluators. In particular, it simultaneously and correctly incorporates techniques from importance weighting, doubly robust evaluation, and nonstationary policy evaluation approaches. In addition, our approach allows generating longer histories by careful control of a bias-variance tradeoff, and further decreases variance by incorporating information about randomness of the target policy. Empirical evidence from synthetic and real-world exploration learning problems shows the new evaluator successfully unifies previous approaches and uses information an order of magnitude more efficiently.
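The abstract combines importance weighting with doubly robust evaluation. A minimal sketch of the standard doubly robust off-policy value estimator for contextual bandits may help fix ideas; this is the generic building block, not the paper's exact evaluator, and all function and parameter names here are illustrative:

```python
import numpy as np

def doubly_robust_value(rewards, logging_probs, target_probs,
                        dm_values, rhat_logged):
    """Doubly robust off-policy value estimate from logged bandit data.

    For each logged round i (context x_i, action a_i, reward r_i):
      rewards[i]       : observed reward r_i for the logged action a_i
      logging_probs[i] : logging policy's probability of a_i given x_i
      target_probs[i]  : target policy's probability of a_i given x_i
      dm_values[i]     : reward model's estimate of the target policy's
                         expected reward at x_i, i.e. sum_a pi(a|x_i) * rhat(x_i, a)
      rhat_logged[i]   : reward model's prediction for the logged pair (x_i, a_i)
    """
    w = np.asarray(target_probs) / np.asarray(logging_probs)  # importance weights
    # Direct-method baseline plus an importance-weighted correction for
    # the model's error on the action that was actually taken.
    correction = w * (np.asarray(rewards) - np.asarray(rhat_logged))
    return float(np.mean(np.asarray(dm_values) + correction))
```

With a reward model that is identically zero this reduces to plain inverse propensity scoring; with a perfect reward model the correction term vanishes and the estimate is the low-variance direct-method value. The paper's contribution layers nonstationary evaluation and target-policy randomness on top of this kind of estimator.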

