Robustness quantification: a new method for assessing the reliability of the predictions of a classifier

Abstract

Based on existing ideas in the field of imprecise probabilities, we present a new approach for assessing the reliability of the individual predictions of a generative probabilistic classifier. We call this approach robustness quantification, compare it to uncertainty quantification, and demonstrate that it continues to work well even for classifiers that are learned from small training sets that are sampled from a shifted distribution.
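For context, the sketch below illustrates the setting the abstract describes, not the paper's robustness quantification method itself (which the abstract does not detail): a generative probabilistic classifier (here scikit-learn's GaussianNB, chosen purely for illustration) is learned from a small training set sampled from a shifted distribution and produces the per-instance predictions whose reliability one would like to assess.

```python
# Illustrative only: a generative probabilistic classifier (Gaussian naive Bayes)
# trained on a small sample drawn from a shifted distribution, producing
# per-instance predictive probabilities. This is NOT the paper's robustness
# quantification method; it merely sets up the kind of individual prediction
# whose reliability the paper proposes to assess.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Test distribution: two Gaussian classes centred at -1 and +1.
X_test = np.concatenate([rng.normal(-1.0, 1.0, (200, 1)),
                         rng.normal(+1.0, 1.0, (200, 1))])
y_test = np.array([0] * 200 + [1] * 200)

# Small training set from a *shifted* distribution (class means moved by +0.5).
X_train = np.concatenate([rng.normal(-0.5, 1.0, (10, 1)),
                          rng.normal(+1.5, 1.0, (10, 1))])
y_train = np.array([0] * 10 + [1] * 10)

clf = GaussianNB().fit(X_train, y_train)

# Per-instance class probabilities: the individual predictions whose
# reliability robustness quantification aims to quantify.
proba = clf.predict_proba(X_test)
print(proba[:5])
```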

@article{detavernier2025_2503.22418,
  title={Robustness quantification: a new method for assessing the reliability of the predictions of a classifier},
  author={Adrián Detavernier and Jasper De Bock},
  journal={arXiv preprint arXiv:2503.22418},
  year={2025}
}