
Generalization Error Bounds for Learning under Censored Feedback

Abstract

Generalization error bounds from learning theory provide statistical guarantees on how well an algorithm will perform on previously unseen data. In this paper, we characterize the impacts of data non-IIDness due to censored feedback (a.k.a. selective labeling bias) on such bounds. Censored feedback is ubiquitous in many real-world online selection and classification tasks (e.g., hiring, lending, recommendation systems) where the true label of a data point is only revealed if a favorable decision is made (e.g., accepting a candidate, approving a loan, displaying an ad), and remains unknown otherwise. We first derive an extension of the well-known Dvoretzky-Kiefer-Wolfowitz (DKW) inequality, which characterizes the gap between empirical and theoretical data distribution CDFs learned from IID data, to problems with non-IID data due to censored feedback. We then use this CDF error bound to provide a bound on the generalization error guarantees of a classifier trained on such non-IID data. We show that existing generalization error bounds (which do not account for censored feedback) fail to correctly capture the model's generalization guarantees, verifying the need for our bounds. We further analyze the effectiveness of (pure and bounded) exploration techniques, proposed by recent literature as a way to alleviate censored feedback, on improving our error bounds. Together, our findings illustrate how a decision maker should account for the trade-off between strengthening the generalization guarantees of an algorithm and the costs incurred in data collection when future data availability is limited by censored feedback.
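
For reference, the classical Dvoretzky-Kiefer-Wolfowitz inequality that the paper extends concerns the empirical CDF $\hat{F}_n$ built from $n$ IID samples of a distribution with CDF $F$: for every $\varepsilon > 0$,

$$\Pr\Big(\sup_{x \in \mathbb{R}} \big|\hat{F}_n(x) - F(x)\big| > \varepsilon\Big) \;\le\; 2 e^{-2 n \varepsilon^2}.$$

This is only the standard IID baseline (with the tight constant 2 due to Massart); the paper's extension to non-IID data collected under censored feedback is not reproduced here.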

@article{yang2025_2404.09247,
  title={Generalization Error Bounds for Learning under Censored Feedback},
  author={Yifan Yang and Ali Payani and Parinaz Naghizadeh},
  journal={arXiv preprint arXiv:2404.09247},
  year={2025}
}