pith. machine review for the scientific record.

arxiv: 1902.11199 · v1 · submitted 2019-02-28 · 📊 stat.ML · cs.LG

Recognition: unknown

Active Exploration in Markov Decision Processes

Authors on Pith: no claims yet
classification: 📊 stat.ML · cs.LG
keywords: exploration, active, noise, decision, estimate, introduce, markov, mdps
Abstract

We introduce the active exploration problem in Markov decision processes (MDPs). Each state of the MDP is characterized by a random value, and the learner should gather samples to estimate the mean value of each state as accurately as possible. As in active exploration in multi-armed bandits (MABs), states may have different levels of noise, so that the higher the noise, the more samples are needed. Since the noise levels are initially unknown, we need to trade off exploring the environment to estimate the noise against exploiting these estimates to compute a policy maximizing the accuracy of the mean predictions. We introduce a novel learning algorithm to solve this problem and show that active exploration in MDPs may be significantly more difficult than in MABs. We also derive a heuristic procedure to mitigate the negative effect of slowly mixing policies. Finally, we validate our findings on simple numerical simulations.
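To make the bandit analogy in the abstract concrete, here is a minimal sketch of active exploration in the MAB setting: repeatedly sample whichever arm's mean estimate is currently least accurate (largest empirical variance per sample), so noisier arms end up with more pulls. This is an illustrative heuristic for the MAB case only, not the paper's MDP algorithm; the function name and interface are hypothetical.

```python
import random
import statistics

def active_exploration_mab(arms, budget):
    """Allocate a sampling budget across noisy arms so that every mean
    is estimated accurately.  `arms` is a list of zero-argument callables
    returning noisy samples; returns (mean estimates, pull counts).
    Illustrative heuristic only, not the algorithm from the paper."""
    # Two initial pulls per arm so empirical variances are defined.
    samples = [[arm(), arm()] for arm in arms]
    for _ in range(budget - 2 * len(arms)):
        # Uncertainty of each mean estimate: sigma_i^2 / n_i.
        scores = [statistics.variance(s) / len(s) for s in samples]
        # Pull the arm whose estimate is currently the least accurate.
        i = max(range(len(arms)), key=lambda j: scores[j])
        samples[i].append(arms[i]())
    return ([statistics.mean(s) for s in samples],
            [len(s) for s in samples])
```

With a low-noise and a high-noise arm, the high-noise arm receives most of the budget, mirroring the abstract's point that noisier states need more samples; the MDP version is harder because the learner cannot pull an arbitrary state directly and must navigate to it under a policy.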

This paper has not been read by Pith yet.

discussion (0)
