
Learning Against Distributional Uncertainty: On the Trade-off Between Robustness and Specificity

Abstract

Trustworthy machine learning aims to combat distributional uncertainty: the mismatch between the training data distribution and the population distribution. Typical treatment frameworks include the Bayesian approach, (min-max) distributionally robust optimization (DRO), and regularization. However, three issues arise: 1) the prior distribution in the Bayesian method and the regularizer in the regularization method are difficult to specify; 2) the DRO method tends to be overly conservative; 3) all three methods are biased estimators of the true optimal cost. This paper studies a new framework that unifies the three approaches and addresses the three challenges above. We study the asymptotic properties (e.g., consistency and asymptotic normality), non-asymptotic properties (e.g., generalization bounds and unbiasedness), and solution methods of the proposed model. The new model reveals the trade-off between robustness to unseen data and specificity to the training data. Experiments on various real-world tasks validate the superiority of the proposed learning framework.
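For concreteness, the standard (min-max) DRO formulation the abstract alludes to can be sketched as follows, where $\hat{P}_n$ denotes the empirical training distribution and $\mathcal{B}_{\epsilon}(\hat{P}_n)$ an ambiguity set of radius $\epsilon$ around it (these symbols are illustrative and not necessarily the paper's notation):

```latex
\min_{\theta} \; \sup_{P \in \mathcal{B}_{\epsilon}(\hat{P}_n)} \; \mathbb{E}_{X \sim P}\big[\ell(\theta; X)\big]
```

The conservativeness noted in issue 2) stems from the inner supremum: the learner hedges against the worst-case distribution in the ambiguity set, which may be far from the population distribution.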

@article{wang2025_2301.13565,
  title={Learning Against Distributional Uncertainty: On the Trade-off Between Robustness and Specificity},
  author={Shixiong Wang and Haowei Wang and Xinke Li and Jean Honorio},
  journal={arXiv preprint arXiv:2301.13565},
  year={2025}
}