pith. machine review for the scientific record.

arxiv: 1712.05812 · v6 · submitted 2017-12-15 · 💻 cs.AI


Occam's razor is insufficient to infer the preferences of irrational agents

classification 💻 cs.AI
keywords cannot · human · agent · function · infer · occam · planning · policy
original abstract

Inverse reinforcement learning (IRL) attempts to infer human rewards or preferences from observed behavior. Since human planning systematically deviates from rationality, several approaches have been tried to account for specific human shortcomings. However, the general problem of inferring the reward function of an agent of unknown rationality has received little attention. Unlike the well-known ambiguity problems in IRL, this one is practically relevant but cannot be resolved by observing the agent's policy in enough environments. This paper shows (1) that a No Free Lunch result implies it is impossible to uniquely decompose a policy into a planning algorithm and reward function, and (2) that even with a reasonable simplicity prior/Occam's razor on the set of decompositions, we cannot distinguish between the true decomposition and others that lead to high regret. To address this, we need simple `normative' assumptions, which cannot be deduced exclusively from observations.
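The impossibility result in the abstract can be made concrete with a toy example. The sketch below is hypothetical and not from the paper: it shows two different (planner, reward) decompositions — a rational planner paired with the true reward, and an anti-rational planner paired with the negated reward — that induce exactly the same policy, so no amount of behavioral observation can tell them apart.

```python
# Hypothetical illustration (not the paper's code): two distinct
# (planner, reward) decompositions induce the *same* policy,
# so the policy alone cannot identify the true reward function.

def rational_planner(reward, actions):
    # Picks the action that maximizes the reward function.
    return max(actions, key=reward)

def anti_rational_planner(reward, actions):
    # Picks the action that minimizes the reward function.
    return min(actions, key=reward)

actions = ["a", "b", "c"]
reward = {"a": 0.1, "b": 0.9, "c": 0.5}.get
negated_reward = lambda a: -reward(a)

# Decomposition 1: rational planner + true reward R.
# Decomposition 2: anti-rational planner + negated reward -R.
# Both produce identical behavior in every state.
policy_1 = rational_planner(reward, actions)
policy_2 = anti_rational_planner(negated_reward, actions)
assert policy_1 == policy_2 == "b"
```

The second decomposition assigns the agent the *opposite* preferences yet predicts the same observations, which is why the paper argues that simplicity priors alone cannot break the tie: the degenerate decomposition is barely more complex than the true one.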

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Risks from Learned Optimization in Advanced Machine Learning Systems

    cs.AI · 2019-06 · accept · novelty 9.0

    Mesa-optimization arises when learned models act as optimizers with objectives that can differ from their training loss, creating alignment risks in advanced machine learning.