Pith · machine review for the scientific record

arXiv: 1802.05313 · v2 · submitted 2018-02-14 · 💻 cs.AI · cs.LG · stat.ML

Recognition: unknown

Reinforcement Learning from Imperfect Demonstrations

Authors on Pith: no claims yet
classification: 💻 cs.AI · cs.LG · stat.ML
keywords: learning · demonstration · reinforcement · demonstrations · data · environment · algorithm · approaches
read the original abstract

Robust real-world learning should benefit from both demonstrations and interactions with the environment. Current approaches to learning from demonstration and reward perform supervised learning on expert demonstration data and use reinforcement learning to further improve performance based on the reward received from the environment. These tasks have divergent losses which are difficult to jointly optimize and such methods can be very sensitive to noisy demonstrations. We propose a unified reinforcement learning algorithm, Normalized Actor-Critic (NAC), that effectively normalizes the Q-function, reducing the Q-values of actions unseen in the demonstration data. NAC learns an initial policy network from demonstrations and refines the policy in the environment, surpassing the demonstrator's performance. Crucially, both learning from demonstration and interactive refinement use the same objective, unlike prior approaches that combine distinct supervised and reinforcement losses. This makes NAC robust to suboptimal demonstration data since the method is not forced to mimic all of the examples in the dataset. We show that our unified reinforcement learning algorithm can learn robustly and outperform existing baselines when evaluated on several realistic driving games.
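The abstract's central idea is that normalizing the Q-function suppresses the values of actions unseen in the demonstrations. A minimal sketch of that normalization, assuming discrete actions and a temperature α (the function names and the toy values here are illustrative, not taken from the paper's implementation):

```python
import numpy as np

def soft_value(q, alpha=1.0):
    # Soft (log-sum-exp) state value: V(s) = alpha * log sum_a exp(Q(s,a)/alpha).
    # Shift by the max for numerical stability.
    q = np.asarray(q, dtype=float)
    m = q.max()
    return m + alpha * np.log(np.exp((q - m) / alpha).sum())

def soft_policy(q, alpha=1.0):
    # Boltzmann policy implied by the Q-function:
    # pi(a|s) = exp((Q(s,a) - V(s)) / alpha), which sums to 1 by construction.
    v = soft_value(q, alpha)
    return np.exp((np.asarray(q, dtype=float) - v) / alpha)

# Toy example: one state, three actions; suppose demonstrations only ever
# show action 0. A purely supervised loss would push up Q(s, 0) without
# constraining the unseen actions, whereas the soft value V(s) couples all
# actions: raising any Q raises V(s), which acts as a normalizer that
# lowers the relative (Q - V) value of actions absent from the data.
q = np.array([2.0, 0.5, 0.0])
pi = soft_policy(q, alpha=1.0)
print(pi, pi.sum())  # a valid probability distribution over the 3 actions
```

Because demonstration learning and environment interaction both optimize this one entropy-regularized objective, noisy or suboptimal demonstrations shift the policy softly rather than being imitated verbatim.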

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. AWAC: Accelerating Online Reinforcement Learning with Offline Datasets

    cs.LG · 2020-06 · unverdicted · novelty 6.0

    AWAC combines offline data with online RL via advantage-weighted actor-critic updates to enable faster acquisition of robotic skills such as dexterous manipulation.