pith. machine review for the scientific record.

arxiv: 1705.08417 · v2 · submitted 2017-05-23 · 💻 cs.AI · cs.LG · stat.ML

Recognition: unknown

Reinforcement Learning with a Corrupted Reward Channel

Authors on Pith: no claims yet
classification 💻 cs.AI · cs.LG · stat.ML
keywords reward · learning · reinforcement · agent · problem · sensory · assumptions · corrupt
Original abstract

No real-world reward function is perfect. Sensory errors and software bugs may result in RL agents observing higher (or lower) rewards than they should. For example, a reinforcement learning agent may prefer states where a sensory error gives it the maximum reward, but where the true reward is actually small. We formalise this problem as a generalised Markov Decision Problem called Corrupt Reward MDP (CRMDP). Traditional RL methods fare poorly in CRMDPs, even under strong simplifying assumptions and when trying to compensate for the possibly corrupt rewards. Two ways around the problem are investigated. First, by giving the agent richer data, such as in inverse reinforcement learning and semi-supervised reinforcement learning, reward corruption stemming from systematic sensory errors may sometimes be completely managed. Second, by using randomisation to blunt the agent's optimisation, reward corruption can be partially managed under some assumptions.
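The corrupted-reward setting is easy to reproduce in miniature. Below is a minimal sketch, not taken from the paper (all names, such as TRUE_REWARD, corruption, and quantilise, are hypothetical): a one-step decision problem where the observed reward channel applies a corruption function to the true reward. A greedy agent locks onto the corrupt action, while a quantilising agent, which samples among the top fraction of actions rather than taking the argmax, illustrates the abstract's second, randomisation-based mitigation.

```python
import random

# Toy illustration of a Corrupt Reward MDP (hypothetical code, not the
# paper's notation): the agent only ever sees corruption(a, R_true(a)),
# never the true reward itself.

TRUE_REWARD = {"a": 0.9, "b": 0.8, "c": 0.1}  # true reward per action

def corruption(action, r_true):
    """Systematic sensory error: action 'c' is misread as maximal reward."""
    return 1.0 if action == "c" else r_true

def observed_reward(action):
    return corruption(action, TRUE_REWARD[action])

# A greedy agent maximises the observed (possibly corrupt) reward and is
# drawn straight to the corrupt action.
greedy = max(TRUE_REWARD, key=observed_reward)

# A quantilising agent blunts optimisation: it samples uniformly from the
# top-q fraction of actions ranked by observed reward, so a single corrupt
# observation cannot fully capture its behaviour.
def quantilise(actions, q=0.7):
    ranked = sorted(actions, key=observed_reward, reverse=True)
    top = ranked[: max(1, round(len(ranked) * q))]
    return random.choice(top)

print("greedy picks", greedy, "-> true reward", TRUE_REWARD[greedy])  # c -> 0.1
pick = quantilise(list(TRUE_REWARD))
print("quantiliser picks", pick, "-> true reward", TRUE_REWARD[pick])  # c or a
```

In this toy run the greedy agent's true reward is 0.1, while the quantiliser (q = 0.7, sampling between the top two actions) earns 0.5 in expectation: corruption is reduced but not eliminated, matching the abstract's claim that randomisation only partially manages the problem.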

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Let's Verify Step by Step

    cs.LG · 2023-05 · accept · novelty 7.0

    Process supervision significantly outperforms outcome supervision for training models on the MATH dataset, achieving 78% accuracy on a representative test subset; the work also demonstrates gains from active learning and releases a dataset of 800K step-level labels.