Frequentist uncertainty estimates for deep learning
We provide frequentist estimates of aleatoric and epistemic uncertainty for deep neural networks. To estimate aleatoric uncertainty, we propose simultaneous quantile regression, a loss function to learn all the conditional quantiles of a given target variable. These quantiles can be used to compute well-calibrated prediction intervals. To estimate epistemic uncertainty, we propose orthonormal certificates, a collection of diverse non-constant functions that map all training samples to zero. These certificates map out-of-distribution examples to non-zero values, signaling high epistemic uncertainty. Our uncertainty estimators are computationally attractive, since they do not require training an ensemble of deep models. Across a variety of real-world datasets and tasks, we demonstrate that our uncertainty estimators achieve state-of-the-art performance.
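The quantile regression behind the aleatoric estimate rests on the pinball (tilted absolute) loss: for a target quantile level τ, the loss penalizes under-prediction with weight τ and over-prediction with weight 1−τ, so its minimizer over a constant predictor is the empirical τ-quantile. A minimal NumPy sketch of this property (the variable names and the grid-search fit are illustrative, not the paper's implementation, which learns a network conditioned on τ):

```python
import numpy as np

def pinball_loss(y, y_hat, tau):
    # Tilted absolute loss: weight tau on under-prediction (y > y_hat),
    # weight (1 - tau) on over-prediction (y < y_hat).
    diff = y - y_hat
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

# Minimizing the pinball loss over a constant predictor recovers the
# empirical tau-quantile of the sample:
rng = np.random.default_rng(0)
y = rng.normal(size=10_000)
tau = 0.9
grid = np.linspace(-3.0, 3.0, 601)
losses = [pinball_loss(y, c, tau) for c in grid]
best = grid[np.argmin(losses)]
# best lies near the empirical 0.9-quantile of the sample
```

Learning all quantiles at once (sampling τ uniformly per training example) then yields prediction intervals such as [q̂₀.₀₅, q̂₀.₉₅] for a nominal 90% coverage.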
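For intuition about orthonormal certificates, consider a linear instantiation: if training features concentrate on a low-dimensional subspace, then orthonormal directions in its orthogonal complement map every training sample to (near) zero while responding to out-of-distribution inputs. A hedged NumPy sketch under that assumption, using an SVD in place of the paper's gradient-based training with an orthonormality penalty (all names here are illustrative):

```python
import numpy as np

def fit_certificates(features, k):
    # Linear certificates: the k right singular vectors with the smallest
    # singular values minimize ||features @ C||^2 subject to C^T C = I,
    # i.e. they are orthonormal and annihilate the training features.
    _, _, vt = np.linalg.svd(features, full_matrices=True)
    return vt[-k:].T  # shape (d, k), orthonormal columns

def certificate_score(C, x):
    # Epistemic-uncertainty proxy: near zero in-distribution, larger OOD.
    return float(np.linalg.norm(C.T @ x))

rng = np.random.default_rng(1)
d, k = 10, 4
# Synthetic training features confined to a (d - k)-dimensional subspace:
basis = rng.normal(size=(d, d - k))
train = rng.normal(size=(500, d - k)) @ basis.T

C = fit_certificates(train, k)
in_dist = train[0]               # lies in the training subspace
ood = rng.normal(size=d)         # generic point, off the subspace
# certificate_score(C, in_dist) is ~0; certificate_score(C, ood) is not
```

In the paper's setting the certificates act on the penultimate-layer features of a trained network rather than raw inputs, and diversity is enforced through the orthonormality constraint.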