Truly Batch Apprenticeship Learning with Deep Successor Features
Abstract
We introduce a novel apprenticeship learning algorithm to learn an expert's underlying reward structure in off-policy, model-free batch settings. Unlike existing methods that require a dynamics model or additional data acquisition for on-policy evaluation, our algorithm requires only the batch data of observed expert behavior. Such settings are common in real-world tasks---health care, finance, or industrial processes---where accurate simulators do not exist or data acquisition is costly. To address challenges in batch settings, we introduce Deep Successor Feature Networks (DSFN), which estimate feature expectations in an off-policy setting, and a transition-regularized imitation network that produces a near-expert initial policy and an efficient feature representation. Our algorithm achieves superior results in batch settings on both control benchmarks and a vital clinical task of sepsis management in the Intensive Care Unit.
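The core idea behind successor features, which the abstract says DSFN estimates off-policy, can be sketched in a minimal form. The snippet below is an illustrative linear, tabular version (not the authors' deep network): it runs TD(0) updates over a fixed batch of transitions so that `psi[s]` approximates the discounted feature expectation from state `s`. The toy chain MDP, the feature map, and all function names here are assumptions for illustration only.

```python
import numpy as np

def batch_successor_features(transitions, n_states, n_feats, phi,
                             gamma=0.9, lr=0.1, epochs=200):
    """TD(0) estimation of successor features from a fixed batch.

    transitions: list of (s, s_next) pairs observed in the batch.
    phi: state feature map; psi[s] ~ E[sum_t gamma^t phi(s_t) | s_0 = s].
    """
    psi = np.zeros((n_states, n_feats))
    for _ in range(epochs):
        for s, s_next in transitions:
            target = phi(s) + gamma * psi[s_next]  # bootstrapped TD target
            psi[s] += lr * (target - psi[s])       # TD(0) update
    return psi

# Hypothetical toy chain MDP: 0 -> 1 -> 2, with 2 absorbing; one-hot features.
phi = lambda s: np.eye(3)[s]
data = [(0, 1), (1, 2), (2, 2)] * 50
psi = batch_successor_features(data, n_states=3, n_feats=3, phi=phi)

# Given reward weights w, value estimates factor as psi @ w,
# which is why feature expectations suffice for apprenticeship learning.
w = np.array([0.0, 0.0, 1.0])
values = psi @ w
```

Because the value function is linear in the successor features for any reward weights `w`, a learned reward can be evaluated on batch data without re-running the policy, which is the property the batch setting relies on.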
Forward citations
Cited by 1 Pith paper
- Quantifying Potential Observation Missingness in Inverse Reinforcement Learning
  A practical algorithm quantifies potential missing observations in IRL by computing minimal perturbations to recorded data that render expert actions optimal.