
Protected Probabilistic Classification Library

16 pages main text, 2 pages bibliography, 3 figures, 11 tables
Abstract

This paper introduces a new Python package designed to address the calibration of probabilistic classifiers under dataset shift. The method is demonstrated in binary and multi-class settings, and its effectiveness is compared with that of a number of existing post-hoc calibration methods. The empirical results are promising and suggest that our technique can be helpful in a variety of batch and online classification problems where the underlying data distribution changes between the training and test sets.
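
To make the setting concrete, the sketch below shows the kind of baseline the abstract alludes to: a standard post-hoc calibration method (Platt scaling via scikit-learn) evaluated under a simulated covariate shift between training and test data. The package's own API is not shown; the data-generating function and shift mechanism are illustrative assumptions, not the paper's method.

    # Minimal sketch: Platt-scaled baseline evaluated under simulated
    # dataset shift. All names below are illustrative assumptions.
    import numpy as np
    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import brier_score_loss

    rng = np.random.default_rng(0)

    def sample(n, shift=0.0):
        """Binary classification data; `shift` translates the feature means."""
        y = rng.integers(0, 2, size=n)
        X = rng.normal(loc=y[:, None] + shift, scale=1.0, size=(n, 2))
        return X, y

    X_train, y_train = sample(2000, shift=0.0)   # training distribution
    X_test, y_test = sample(2000, shift=0.75)    # shifted test distribution

    # Base classifier with sigmoid (Platt) post-hoc calibration.
    clf = CalibratedClassifierCV(LogisticRegression(), method="sigmoid", cv=5)
    clf.fit(X_train, y_train)

    # Probabilities calibrated in-distribution can degrade under shift;
    # the Brier score on the shifted test set makes this visible.
    p_test = clf.predict_proba(X_test)[:, 1]
    print("Brier score under shift:", brier_score_loss(y_test, p_test))

A method of the kind the paper proposes would aim to keep such a probability forecast well calibrated even as the test distribution drifts away from the training one.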
