
Extending F1 metric, probabilistic approach

Advances in Artificial Intelligence and Machine Learning (AAIML), 2022
Abstract

This article explores an extension of the well-known F1 score used for assessing the performance of binary classifiers. We propose a new metric based on a probabilistic interpretation of precision, recall, specificity, and negative predictive value. We describe its properties and compare it to common metrics, then demonstrate its behavior in edge cases of the confusion matrix. Finally, the properties of the metric are tested on a binary classifier trained on a real dataset.
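The abstract does not give the definition of the extended metric itself, but the four quantities it builds on, and the standard F1 score they generalize, have standard confusion-matrix definitions. A minimal sketch (function names are illustrative, not from the paper):

```python
def confusion_rates(tp, fp, fn, tn):
    """Four conditional probabilities from confusion-matrix counts.

    precision (PPV) = TP / (TP + FP)
    recall (TPR)    = TP / (TP + FN)
    specificity     = TN / (TN + FP)
    NPV             = TN / (TN + FN)

    Undefined ratios (zero denominator) are returned as 0.0.
    """
    precision   = tp / (tp + fp) if tp + fp else 0.0
    recall      = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    npv         = tn / (tn + fn) if tn + fn else 0.0
    return precision, recall, specificity, npv


def f1(precision, recall):
    """Standard F1 score: harmonic mean of precision and recall."""
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

The standard F1 ignores true negatives entirely (neither specificity nor NPV enters it), which is the gap the proposed probabilistic extension addresses.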
