pith. machine review for the scientific record.

arxiv: 1601.06569 · v1 · submitted 2016-01-25 · 💻 cs.AI

Recognition: unknown

Towards Resolving Unidentifiability in Inverse Reinforcement Learning

Authors on Pith: no claims yet
classification: 💻 cs.AI
keywords: algorithm, environments, fixed, function, learner, agent, demonstrate, environment
abstract

We consider a setting for Inverse Reinforcement Learning (IRL) where the learner is extended with the ability to actively select multiple environments, observing an agent's behavior on each environment. We first demonstrate that if the learner can experiment with any transition dynamics on some fixed set of states and actions, then there exists an algorithm that reconstructs the agent's reward function to the fullest extent theoretically possible, and that requires only a small (logarithmic) number of experiments. We contrast this result to what is known about IRL in single fixed environments, namely that the true reward function is fundamentally unidentifiable. We then extend this setting to the more realistic case where the learner may not select any transition dynamic, but rather is restricted to some fixed set of environments that it may try. We connect the problem of maximizing the information derived from experiments to submodular function maximization and demonstrate that a greedy algorithm is near optimal (up to logarithmic factors). Finally, we empirically validate our algorithm on an environment inspired by behavioral psychology.
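The abstract's second contribution reduces experiment selection to monotone submodular maximization, where the textbook greedy rule (repeatedly add the environment with the largest marginal gain) carries the classic (1 − 1/e) guarantee of Nemhauser, Wolsey, and Fisher; the paper reports near-optimality up to logarithmic factors. Below is a minimal sketch of that generic greedy step, not the authors' implementation: `greedy_select`, `info_gain`, and the toy coverage objective are illustrative assumptions.

```python
# A generic greedy step for monotone submodular maximization, sketched in the
# role the abstract describes: selecting informative environments. `info_gain`
# is a hypothetical stand-in for the paper's information objective; nothing
# here is taken from the authors' code.
from typing import Callable, FrozenSet, Iterable, List, TypeVar

Env = TypeVar("Env")

def greedy_select(
    candidates: Iterable[Env],
    info_gain: Callable[[FrozenSet[Env]], float],
    budget: int,
) -> List[Env]:
    """Pick up to `budget` environments, each step adding the candidate
    with the largest marginal gain under the submodular `info_gain`."""
    chosen: List[Env] = []
    remaining = set(candidates)
    for _ in range(budget):
        if not remaining:
            break
        current = info_gain(frozenset(chosen))
        best = max(remaining, key=lambda e: info_gain(frozenset(chosen) | {e}))
        if info_gain(frozenset(chosen) | {best}) <= current:
            break  # no candidate adds information; stop early
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy usage with a coverage objective (coverage is a classic monotone
# submodular function): each hypothetical environment "covers" some states.
envs = {"maze_a": {1, 2}, "maze_b": {2, 3}, "maze_c": {3}}
cover = lambda S: len(set().union(*(envs[e] for e in S))) if S else 0
print(greedy_select(envs, cover, budget=2))  # two environments covering all states
```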

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Risks from Learned Optimization in Advanced Machine Learning Systems

    cs.AI · 2019-06 · accept · novelty 9.0

    Mesa-optimization arises when learned models act as optimizers with objectives that can differ from their training loss, creating alignment risks in advanced machine learning.