Empirical uncertainty estimates for classification by deep neural
networks
While the accuracy of modern deep learning models has improved significantly in recent years, the ability of these models to produce uncertainty estimates has not progressed to the same degree. Uncertainty methods are designed to estimate the probability that a model's predicted class assignment is correct. There are a number of methods for estimating uncertainty, but it is difficult to determine which method is best in which context. Currently, methods are compared using scores that were developed for other purposes. In this article we: (i) propose a definition of empirical uncertainty that covers a wide class of methods, (ii) define a new score, the expected odds ratio (EOR), for uncertainty methods, and (iii) demonstrate that this score has desirable properties which do not hold for existing scores. We score a number of popular empirical uncertainty methods on in-distribution image classification tasks on benchmark datasets.
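To make concrete what an empirical uncertainty estimate looks like, here is a minimal sketch of one common baseline, the maximum softmax probability: the model's confidence in its predicted class. This is an illustrative example only, not the EOR score or any specific method from the article; the logits are hypothetical.

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) into class probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def max_softmax_confidence(logits):
    """Baseline uncertainty estimate: the probability assigned to the
    predicted (argmax) class. Higher values suggest the model is more
    likely to be correct on this input."""
    return max(softmax(logits))

# Illustrative logits: one confidently predicted input, one ambiguous one.
confident = max_softmax_confidence([6.0, 1.0, 0.5])  # one class dominates
ambiguous = max_softmax_confidence([1.1, 1.0, 0.9])  # classes nearly tied
assert confident > ambiguous
```

A score for uncertainty methods, such as the EOR proposed in the article, would then measure how well estimates like these track the model's actual correctness.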