pith. machine review for the scientific record.

arxiv: 1903.05697 · v1 · submitted 2019-03-13 · 💻 cs.RO · cs.LG


Uncertainty Aware Learning from Demonstrations in Multiple Contexts using Bayesian Neural Networks

Authors on Pith: no claims yet
Classification: cs.RO · cs.LG
Keywords: uncertainty, conditions, learned, networks, neural, training, allows, bayesian
Original abstract

Diversity of environments is a key challenge that causes learned robotic controllers to fail due to discrepancies between training and evaluation conditions. Training from demonstrations gathered in varied conditions can mitigate, but not completely prevent, such failures. Learned controllers such as neural networks typically lack a notion of uncertainty that would allow them to diagnose an offset between training and testing conditions and, potentially, to intervene. In this work, we propose to use Bayesian Neural Networks, which have such a notion of uncertainty. We show that uncertainty can be leveraged to consistently detect situations, in high-dimensional simulated and real robotic domains, in which the performance of the learned controller would be sub-par. We also show that such an uncertainty-based solution allows making an informed decision about when to invoke a fallback strategy; one fallback strategy is to request more data. We empirically show that providing data only when requested increases data efficiency.
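The abstract's core mechanism (predictive uncertainty gating a fallback such as requesting more demonstrations) can be sketched as follows. This is an illustrative NumPy toy, not the paper's implementation: the "posterior" is a set of random linear-controller weight samples, and the threshold value is made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sketch: approximate a Bayesian NN posterior with an
# ensemble of sampled controller weights (a linear controller here;
# all names and the threshold are illustrative, not from the paper).
def posterior_samples(n_samples=50, dim=4):
    # Each row is one draw of controller weights from the "posterior".
    return rng.normal(0.0, 0.1, size=(n_samples, dim))

def predict_with_uncertainty(weights, state):
    # One action prediction per posterior sample; their spread is the
    # predictive uncertainty for this state.
    actions = weights @ state
    return actions.mean(), actions.std()

def act_or_fallback(weights, state, std_threshold=0.5):
    mean, std = predict_with_uncertainty(weights, state)
    if std > std_threshold:
        return "request_demonstration"  # uncertain: invoke the fallback
    return mean                         # confident: execute the action

weights = posterior_samples()
in_dist = np.ones(4) * 0.1    # small-magnitude state: low predictive spread
out_dist = np.ones(4) * 50.0  # far from training data: spread scales up

print(act_or_fallback(weights, in_dist))   # a numeric action
print(act_or_fallback(weights, out_dist))  # "request_demonstration"
```

Requesting data only for high-uncertainty states is what yields the data-efficiency gain the abstract reports: demonstrations are collected where the controller is unreliable rather than uniformly.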

This paper has not been read by Pith yet.

discussion (0)
