Irredundant k-Fold Cross-Validation

Main: 8 pages, 4 figures, 1 table; bibliography: 2 pages
Abstract

In traditional k-fold cross-validation, each instance is used k-1 times for training and once for testing, a redundancy that lets individual instances disproportionately influence the learning phase. We introduce Irredundant k-fold cross-validation, a novel method that guarantees each instance is used exactly once for training and once for testing across the entire validation procedure. This approach ensures a more balanced utilization of the dataset, mitigates overfitting due to instance repetition, and enables sharper distinctions in comparative model analysis. The method preserves stratification and remains model-agnostic, i.e., compatible with any classifier. Experimental results demonstrate that it delivers consistent performance estimates across diverse datasets, comparable to k-fold cross-validation, while providing less optimistic variance estimates because training partitions are non-overlapping, and significantly reducing the overall computational cost.
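The sketch below shows one way such a guarantee can be realized; it is a minimal illustration, not the paper's construction, which the abstract does not specify. It assumes a pairing scheme in which the data is split into k stratified folds and round i trains on fold i and tests on fold (i + 1) mod k, so every instance lands in exactly one training set and exactly one test set. The helper name `irredundant_kfold_scores` and the pairing rule are hypothetical.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold

def irredundant_kfold_scores(model_factory, X, y, k=5, seed=0):
    """Hypothetical irredundant k-fold scheme: split the data into k
    stratified folds; in round i, train on fold i and test on fold
    (i + 1) % k. Each instance is then used exactly once for training
    and exactly once for testing across the whole procedure."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    # Keep only the test indices: together they partition the data
    # into k disjoint, stratified folds.
    folds = [test_idx for _, test_idx in skf.split(X, y)]
    scores = []
    for i in range(k):
        train_idx, test_idx = folds[i], folds[(i + 1) % k]
        model = model_factory()  # fresh model per round
        model.fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))
    return scores

X, y = load_iris(return_X_y=True)
print(irredundant_kfold_scores(lambda: LogisticRegression(max_iter=1000), X, y))
```

Under this pairing, each training set holds roughly n/k instances rather than the n(k-1)/k of standard k-fold, so the k fits are correspondingly cheaper, consistent with the reduced computational cost the abstract reports.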
