pith. machine review for the scientific record.

arxiv: 1703.03078 · v3 · submitted 2017-03-08 · 💻 cs.RO


Combining Model-Based and Model-Free Updates for Trajectory-Centric Reinforcement Learning

classification: 💻 cs.RO
keywords: model-based · model-free · learning · methods · combine · method · policies · policy

Reinforcement learning (RL) algorithms for real-world robotic applications need a data-efficient learning process and the ability to handle complex, unknown dynamical systems. These requirements are handled well by model-based and model-free RL approaches, respectively. In this work, we aim to combine the advantages of these two types of methods in a principled manner. By focusing on time-varying linear-Gaussian policies, we enable a model-based algorithm based on the linear quadratic regulator (LQR) that can be integrated into the model-free framework of path integral policy improvement (PI2). We can further combine our method with guided policy search (GPS) to train arbitrary parameterized policies such as deep neural networks. Our simulation and real-world experiments demonstrate that this method can solve challenging manipulation tasks with comparable or better performance than model-free methods while maintaining the sample efficiency of model-based methods. A video presenting our results is available at https://sites.google.com/site/icml17pilqr
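The time-varying linear-Gaussian policies the abstract centers on sample actions as u_t ~ N(K_t x_t + k_t, Σ_t), with a separate gain K_t and bias k_t per time step. The sketch below is an illustrative assumption of that policy class only; the function names, dimensions, zero-initialized gains, and isotropic noise are not taken from the paper's implementation.

```python
import random

def make_policy(horizon, state_dim, action_dim, noise_std=0.1):
    """One (K_t, k_t, sigma) triple per time step -- 'time-varying'.

    Gains are zero-initialized here purely for illustration; in the
    paper's setting they would come from the LQR / PI2 updates.
    """
    return [
        (
            [[0.0] * state_dim for _ in range(action_dim)],  # K_t (gain matrix)
            [0.0] * action_dim,                              # k_t (bias vector)
            noise_std,                                       # assumed isotropic noise
        )
        for _ in range(horizon)
    ]

def act(policy, t, state, rng):
    """Sample u_t ~ N(K_t x_t + k_t, sigma^2 I) for time step t."""
    K, k, sigma = policy[t]
    mean = [
        sum(K_ij * x_j for K_ij, x_j in zip(row, state)) + k_i
        for row, k_i in zip(K, k)
    ]
    return [m + rng.gauss(0.0, sigma) for m in mean]

rng = random.Random(0)
policy = make_policy(horizon=5, state_dim=3, action_dim=2)
u0 = act(policy, 0, [1.0, 0.0, -1.0], rng)
print(len(u0))  # action dimension
```

Restricting the policy to this class is what lets a local LQR-based model fit and a PI2-style model-free update operate on the same object, since both produce per-time-step Gaussian controllers.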

This paper has not been read by Pith yet.
