pith. machine review for the scientific record.

arxiv: 1612.01251 · v2 · submitted 2016-12-05 · 📊 stat.ML · cs.LG · cs.NE

Recognition: unknown

Known Unknowns: Uncertainty Quality in Bayesian Neural Networks

Authors on Pith: no claims yet
classification 📊 stat.ML · cs.LG · cs.NE
keywords: bayesian, uncertainty, approximation, neural, candidate, models, osba, quality
original abstract

We evaluate the uncertainty quality in neural networks using anomaly detection. We extract uncertainty measures (e.g. entropy) from the predictions of candidate models, use those measures as features for an anomaly detector, and gauge how well the detector differentiates known from unknown classes. We assign higher uncertainty quality to candidate models that lead to better detectors. We also propose a novel method for sampling a variational approximation of a Bayesian neural network, called One-Sample Bayesian Approximation (OSBA). We experiment on two datasets, MNIST and CIFAR10. We compare the following candidate neural network models: Maximum Likelihood, Bayesian Dropout, OSBA, and --- for MNIST --- the standard variational approximation. We show that Bayesian Dropout and OSBA provide better uncertainty information than Maximum Likelihood, and are essentially equivalent to the standard variational approximation, but much faster.
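The pipeline the abstract describes can be sketched in a few lines: draw multiple stochastic predictions per input, score each input by the entropy of the mean predictive distribution, and check how well that score separates known from unknown classes. The snippet below is an illustrative sketch, not the paper's code: the function names, the rank-based AUROC detector, and the synthetic Dirichlet "predictions" standing in for real model outputs are all assumptions.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Entropy of the mean predictive distribution over T stochastic
    forward passes. mc_probs has shape (T, N, C)."""
    mean_probs = mc_probs.mean(axis=0)  # (N, C)
    return -np.sum(mean_probs * np.log(mean_probs + 1e-12), axis=1)

def auroc(scores_unknown, scores_known):
    """Rank-based AUROC: probability that an unknown-class input receives
    a higher uncertainty score than a known-class input (O(N*M) version,
    fine for an illustration)."""
    pairs = [(u, k) for u in scores_unknown for k in scores_known]
    wins = sum(1.0 if u > k else 0.5 if u == k else 0.0 for u, k in pairs)
    return wins / len(pairs)

# Toy illustration with synthetic "predictions" (not the paper's data):
rng = np.random.default_rng(0)
T, C = 20, 10
# Confident, stable predictions for known-class inputs...
known = rng.dirichlet(np.full(C, 0.1) + 20.0 * np.eye(C)[0], size=(T, 50))
# ...and diffuse, variable predictions for unknown-class inputs.
unknown = rng.dirichlet(np.full(C, 1.0), size=(T, 50))

h_known = predictive_entropy(known)
h_unknown = predictive_entropy(unknown)
print(auroc(h_unknown, h_known))  # high: entropy separates the two groups
```

Higher AUROC means the candidate model's uncertainty is more useful for telling known from unknown inputs, which is exactly the sense in which the paper assigns "uncertainty quality" to a model.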

This paper has not been read by Pith yet.

discussion (0)
