pith. machine review for the scientific record.

arxiv: 1902.03633 · v1 · submitted 2019-02-10 · 💻 cs.LG · stat.ML


Diverse Exploration via Conjugate Policies for Policy Gradient Methods

keywords: exploration, conjugate, policy, gradient, policies, diverse, methods, performance
Abstract

We address the challenge of effective exploration while maintaining good performance in policy gradient methods. As a solution, we propose diverse exploration (DE) via conjugate policies. DE learns and deploys a set of conjugate policies, which can be conveniently generated as a byproduct of conjugate gradient descent. We provide both theoretical and empirical results demonstrating the effectiveness of DE at achieving exploration and improving policy performance, as well as its advantage over exploration by random policy perturbations.
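The key mechanism in the abstract is that the search directions produced by conjugate gradient descent are mutually conjugate, so they can serve as a ready-made set of diverse perturbation directions. Below is a minimal sketch of that idea, assuming a policy parameterized by a flat vector `theta` and plain conjugate gradient on a symmetric positive-definite matrix `A` (in natural-gradient methods this would be the Fisher information matrix). The function names `conjugate_directions` and `perturbed_policies` are illustrative, not from the paper.

```python
import numpy as np

def conjugate_directions(A, b, n_dirs):
    """Run conjugate gradient on A x = b and collect the search
    directions, which are mutually A-conjugate (p_i^T A p_j = 0 for i != j)."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x          # initial residual
    p = r.copy()           # first search direction
    dirs = []
    for _ in range(n_dirs):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)      # exact line search along p
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        dirs.append(p.copy())
        p = r_new + beta * p            # next direction, conjugate to previous ones
        r = r_new
    return dirs

def perturbed_policies(theta, dirs, scale=0.1):
    """Generate a diverse set of policy parameters by perturbing the
    base parameters theta along each (normalized) conjugate direction."""
    return [theta + scale * d / np.linalg.norm(d) for d in dirs]
```

Because the directions are conjugate rather than independently random, the resulting policies explore non-overlapping directions in parameter space, which is the contrast with random perturbations that the abstract draws.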

This paper has not been read by Pith yet.
