This paper aims to understand whether machine learning models should be trained using cost-sensitive surrogates or cost-agnostic ones (e.g., cross-entropy). Analyzing this question through the lens of $\mathcal{H}$-calibration, we find that cost-sensitive surrogates can strictly outperform their cost-agnostic counterparts when learning small models under common distributional assumptions. Since these distributional assumptions are hard to verify in practice, we also show that cost-sensitive surrogates consistently outperform cost-agnostic surrogates on classification datasets from the UCI repository. Together, these results make a strong case for using cost-sensitive surrogates in practice.
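To make the distinction concrete, here is a minimal sketch (illustrative only, not the paper's construction) contrasting the cost-agnostic cross-entropy with one standard cost-sensitive surrogate, a cost-weighted binary cross-entropy; the cost parameters `c_fp` and `c_fn` are hypothetical names for the false-positive and false-negative costs.

```python
import numpy as np

def cross_entropy(p, y):
    """Cost-agnostic surrogate: standard binary cross-entropy.
    Penalizes both error types symmetrically."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def cost_weighted_cross_entropy(p, y, c_fp=1.0, c_fn=5.0):
    """Cost-sensitive surrogate: each term is scaled by the cost of
    the corresponding mistake (c_fn for missed positives, c_fp for
    false alarms)."""
    return -(c_fn * y * np.log(p) + c_fp * (1 - y) * np.log(1 - p))

p = np.array([0.9, 0.2, 0.6])  # predicted P(y = 1 | x)
y = np.array([1, 0, 1])        # true labels
print(cross_entropy(p, y).mean())                # treats all errors alike
print(cost_weighted_cross_entropy(p, y).mean())  # penalizes missed positives more
```

Under this weighting, the pointwise minimizer is $p^* = c_{fn}\eta / (c_{fn}\eta + c_{fp}(1-\eta))$ for $\eta = P(y=1 \mid x)$, so thresholding $p^*$ at $1/2$ recovers the cost-sensitive Bayes decision; the paper's $\mathcal{H}$-calibration analysis asks when guarantees of this kind survive restriction to a small hypothesis class $\mathcal{H}$.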
@article{shah2025_2502.19522,
  title   = {Analyzing Cost-Sensitive Surrogate Losses via $\mathcal{H}$-calibration},
  author  = {Sanket Shah and Milind Tambe and Jessie Finocchiaro},
  journal = {arXiv preprint arXiv:2502.19522},
  year    = {2025}
}