
Prior Distribution and Model Confidence

8 pages (main) + 2-page bibliography + 2-page appendix; 4 figures, 9 tables
Abstract

We study how the training data distribution affects confidence and performance in image classification models. We introduce Embedding Density, a model-agnostic framework that estimates prediction confidence by measuring the distance of test samples from the training distribution in embedding space, without requiring retraining. By filtering low-density (low-confidence) predictions, our method significantly improves classification accuracy. We evaluate Embedding Density across multiple architectures and compare it with state-of-the-art out-of-distribution (OOD) detection methods. The proposed approach is potentially generalizable beyond computer vision.
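The abstract does not specify how density is estimated, so the following is a minimal sketch of the general idea under one common choice: score each test sample by its mean distance to its k nearest training embeddings, then discard the lowest-density (lowest-confidence) predictions. All names (`embedding_density_scores`, `filter_low_confidence`, `k`, `keep_frac`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def embedding_density_scores(train_emb, test_emb, k=10):
    """Score test embeddings by mean distance to their k nearest
    training embeddings (smaller distance -> higher density score).
    Hypothetical proxy for the paper's Embedding Density measure."""
    # Pairwise Euclidean distances, shape (n_test, n_train)
    d = np.linalg.norm(test_emb[:, None, :] - train_emb[None, :, :], axis=-1)
    knn = np.sort(d, axis=1)[:, :k]   # k smallest distances per test point
    return -knn.mean(axis=1)          # negate so higher score = denser region

def filter_low_confidence(scores, preds, keep_frac=0.8):
    """Keep predictions for the keep_frac densest test samples."""
    thresh = np.quantile(scores, 1.0 - keep_frac)
    mask = scores >= thresh
    return preds[mask], mask
```

Because the scoring operates purely on fixed embeddings, it is model-agnostic and requires no retraining, matching the framework's stated design; the choice of k and the kept fraction trade coverage against accuracy on the retained set.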
