Pathologies of Factorised Gaussian and MC Dropout Posteriors in Bayesian
Neural Networks
Abstract
Applying Bayesian inference to neural networks typically requires approximating the posterior over parameters with simple distributions. The quality of the resulting approximate predictive distribution in function space is poorly understood. We prove that for single-hidden-layer ReLU networks, there exist simple situations in which it is impossible for factorised Gaussian or MC dropout posteriors to give well-calibrated uncertainty estimates. Specifically, they cannot simultaneously fit the data confidently and have increased uncertainty in between well-separated clusters of data. This motivates more careful consideration of the consequences of approximate inference in Bayesian neural networks.
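The "in-between uncertainty" the abstract refers to can be probed numerically. The sketch below is a minimal, hypothetical illustration (not the paper's construction): a single-hidden-layer ReLU network with arbitrary fixed weights, evaluated with MC dropout at test time, so that the predictive mean and variance can be compared at points inside the data clusters versus in the gap between them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative single-hidden-layer ReLU network.
# The weights are arbitrary placeholders, not a trained model.
W1 = rng.normal(size=(1, 50))   # input -> hidden
b1 = rng.normal(size=50)
W2 = rng.normal(size=(50, 1))   # hidden -> output

def mc_dropout_predict(x, p=0.5, n_samples=200):
    """MC dropout predictive mean and variance at inputs x of shape [N, 1].

    Hidden units are dropped independently with probability p at test
    time; the predictive distribution is the empirical distribution over
    the sampled network outputs.
    """
    h = np.maximum(x @ W1 + b1, 0.0)               # ReLU features, [N, 50]
    masks = rng.random((n_samples, 1, 50)) > p     # Bernoulli keep-masks, [S, 1, 50]
    samples = (h * masks / (1.0 - p)) @ W2         # sampled outputs, [S, N, 1]
    return samples.mean(axis=0), samples.var(axis=0)

# Query two hypothetical "data cluster" locations and the gap between them.
x = np.array([[-2.0], [0.0], [2.0]])
mean, var = mc_dropout_predict(x)
```

On a network actually trained on two well-separated clusters, the paper's result says the variance at the middle point cannot exceed the variance at the cluster points in the way calibrated uncertainty would require; this sketch only shows how such predictive variances would be estimated.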