pith. machine review for the scientific record.

arxiv: 1706.04208 · v2 · submitted 2017-06-13 · 💻 cs.LG


Hybrid Reward Architecture for Reinforcement Learning

keywords: function, learning, reward, value, domains, low-dimensional, representation, architecture
abstract

One of the main challenges in reinforcement learning (RL) is generalisation. In typical deep RL methods this is achieved by approximating the optimal value function with a low-dimensional representation using a deep network. While this approach works well in many domains, in domains where the optimal value function cannot easily be reduced to a low-dimensional representation, learning can be very slow and unstable. This paper contributes towards tackling such challenging domains by proposing a new method, called Hybrid Reward Architecture (HRA). HRA takes as input a decomposed reward function and learns a separate value function for each component reward function. Because each component typically depends on only a subset of all features, the corresponding value function can be approximated more easily by a low-dimensional representation, enabling more effective learning. We demonstrate HRA on a toy problem and the Atari game Ms. Pac-Man, where HRA achieves above-human performance.
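The core idea in the abstract — train one value function per reward component, then act on their sum — can be illustrated with a minimal tabular sketch. This is not the paper's implementation; the toy environment (a line with two "fruits", one reward component per fruit), the hyperparameters, and all names are illustrative assumptions. Each Q-table plays the role of one HRA head, trained only on its own component reward, and action selection aggregates the heads by summing:

```python
import random

N_POS = 5           # positions 0..4 on a line (toy environment, assumed)
FRUITS = [1, 3]     # each fruit defines one reward component
ACTIONS = [-1, +1]  # move left / right
GAMMA, ALPHA = 0.9, 0.1

def step(pos, present, a):
    """Move, then emit one reward per component (1.0 when fruit k is eaten)."""
    pos = max(0, min(N_POS - 1, pos + a))
    rewards = [0.0] * len(FRUITS)
    present = list(present)
    for k, f in enumerate(FRUITS):
        if present[k] and pos == f:
            rewards[k] = 1.0
            present[k] = False
    return pos, tuple(present), rewards

# One Q-table per reward component: the "heads" of HRA.
Q = [dict() for _ in FRUITS]

def q(k, s, a):
    return Q[k].get((s, a), 0.0)

def agg_q(s, a):
    # HRA-style aggregation: sum the head values before picking an action.
    return sum(q(k, s, a) for k in range(len(FRUITS)))

random.seed(0)
for _ in range(3000):                      # epsilon-greedy Q-learning episodes
    pos, present = random.randrange(N_POS), (True, True)
    for _ in range(20):
        s = (pos, present)
        a = random.choice(ACTIONS) if random.random() < 0.2 else \
            max(ACTIONS, key=lambda a_: agg_q(s, a_))
        pos, present, rewards = step(pos, present, a)
        s2 = (pos, present)
        for k, r in enumerate(rewards):
            # Each head is updated only with its own component reward,
            # so each head's value function stays low-dimensional.
            best = max(q(k, s2, a2) for a2 in ACTIONS)
            Q[k][(s, a)] = q(k, s, a) + ALPHA * (r + GAMMA * best - q(k, s, a))
        if not any(present):
            break

# Greedy rollout on the aggregated value: starting in the middle,
# the agent should collect both fruits.
pos, present = 2, (True, True)
for _ in range(10):
    s = (pos, present)
    a = max(ACTIONS, key=lambda a_: agg_q(s, a_))
    pos, present, _ = step(pos, present, a)
    if not any(present):
        break
```

Note the design point this sketch mirrors: no single table ever has to represent the combined objective; the "hybrid" value arises only at action-selection time, when the per-component heads are summed.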

This paper has not been read by Pith yet.

discussion (0)
