
Rejection via Learning Density Ratios

Abstract

Classification with rejection is a learning paradigm that allows models to abstain from making predictions. The predominant approach alters the supervised learning pipeline by augmenting typical loss functions, letting model rejection incur a lower loss than an incorrect prediction. Instead, we propose a different distributional perspective, where we seek an idealized data distribution that maximizes a pretrained model's performance. This can be formalized via the optimization of a loss's risk with a φ-divergence regularization term. Through this idealized distribution, a rejection decision can be made by utilizing the density ratio between this distribution and the data distribution. We focus on the setting where the φ-divergence is specified by the family of α-divergences. Our framework is tested empirically over clean and noisy datasets.
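To make the density-ratio rejection rule concrete, consider the KL regularizer, the α → 1 member of the α-divergence family. Minimizing the regularized risk min_q E_q[ℓ] + λ KL(q ‖ p) over distributions q has the well-known closed-form solution q(x) ∝ p(x) exp(-ℓ(x)/λ), so the density ratio q/p is proportional to exp(-ℓ(x)/λ) and rejection reduces to thresholding it. The Python sketch below is a minimal illustration of this one special case under that assumption, not the paper's exact estimator; the function name rejection_mask and the parameters lam and tau are hypothetical.

import numpy as np

def rejection_mask(losses, lam=1.0, tau=0.5):
    # losses: per-example losses of the pretrained model on held-out data.
    # Under a KL regularizer, the idealized distribution is
    # q(x) ∝ p(x) * exp(-loss(x)/lam), so the density ratio q/p is
    # exp(-loss/lam) up to a normalizing constant.
    ratio = np.exp(-np.asarray(losses) / lam)
    ratio = ratio / ratio.mean()  # estimate the normalizer by a sample mean
    # A low ratio means the example is down-weighted in the idealized
    # distribution, i.e. the model performs poorly there, so we abstain.
    return ratio < tau

# Usage: abstain on examples with atypically high loss.
losses = np.array([0.05, 0.10, 2.30, 0.07, 1.80])
print(rejection_mask(losses))  # [False False  True False  True]

Here lam plays the role of the regularization strength: a large lam keeps the idealized distribution close to the data distribution and rejects little, while a small lam tilts it sharply toward low-loss examples and rejects more.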

@article{soen2025_2405.18686,
  title={Rejection via Learning Density Ratios},
  author={Alexander Soen and Hisham Husain and Philip Schulz and Vu Nguyen},
  journal={arXiv preprint arXiv:2405.18686},
  year={2025}
}