Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and
Risk-sensitive Learning
Abstract
Bayesian neural networks with latent variables (BNNs+LVs) are scalable and flexible probabilistic models: they account for uncertainty in the estimation of the network weights and, by making use of latent variables, they can capture complex noise patterns in the data. In this work, we show how to separate these two forms of uncertainty for decision-making purposes. This decomposition allows us to successfully identify informative points for active learning of functions with heteroskedastic and bimodal noise. We also demonstrate how this decomposition allows us to define a novel risk-sensitive reinforcement learning criterion to identify policies that balance expected cost, model bias, and noise aversion.
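The separation described above can be illustrated with the law of total variance: under posterior weight samples, predictive variance splits into an aleatoric term (expected noise within each model) and an epistemic term (disagreement between models). The sketch below is a minimal toy illustration of this decomposition, not the paper's exact criterion; the sample counts and noise scale are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: W weight samples from an approximate posterior,
# each producing S noisy predictions per input (latent-variable noise).
W, S = 50, 200
weight_means = rng.normal(0.0, 1.0, size=W)  # spread across weight samples (epistemic)
preds = weight_means[:, None] + rng.normal(0.0, 0.5, size=(W, S))  # per-model noise (aleatoric)

# Law of total variance: Var[y] = E_w[Var[y|w]] + Var_w[E[y|w]]
aleatoric = preds.var(axis=1).mean()  # expected within-model variance
epistemic = preds.mean(axis=1).var()  # variance of per-model means
total = preds.reshape(-1).var()       # pooled predictive variance

print(f"aleatoric={aleatoric:.3f} epistemic={epistemic:.3f} total={total:.3f}")
```

With equal sample counts per weight draw and population variances (NumPy's default `ddof=0`), the two terms sum exactly to the pooled variance, so the decomposition can be checked numerically.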
