
Group Distributionally Robust Optimization with Flexible Sample Queries

Main: 13 pages, 7 figures. Bibliography: 5 pages. Appendix: 26 pages.
Abstract

Group distributionally robust optimization (GDRO) aims to develop models that perform well across $m$ distributions simultaneously. Existing GDRO algorithms can only process a fixed number of samples per iteration, either 1 or $m$, and therefore cannot support scenarios where the sample size varies dynamically. To address this limitation, we investigate GDRO with flexible sample queries and cast it as a two-player game: one player solves an online convex optimization problem, while the other tackles a prediction with limited advice (PLA) problem. Within such a game, we propose a novel PLA algorithm, constructing appropriate loss estimators for the cases where the sample size is 1 and where it is larger, and updating the decision using follow-the-regularized-leader. We then establish the first high-probability regret bound for non-oblivious PLA. Building upon this approach, we develop a GDRO algorithm that allows an arbitrary and varying sample size per round, achieving a high-probability optimization error bound of $O\left(\frac{1}{t}\sqrt{\sum_{j=1}^t \frac{m}{r_j}\log m}\right)$, where $r_t$ denotes the sample size at round $t$. This result demonstrates that the optimization error decreases as the number of samples increases, and it implies a consistent sample complexity of $O(m\log(m)/\epsilon^2)$ for any fixed sample size $r\in[m]$, aligning with existing bounds for the cases $r=1$ and $r=m$. We validate our approach on synthetic binary and real-world multi-class datasets.
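To make the follow-the-regularized-leader (FTRL) step mentioned in the abstract concrete, the sketch below shows the standard FTRL update with an entropic regularizer over the $m$-dimensional simplex, which reduces to exponential weights on cumulative loss estimates. This is a generic illustration, not the paper's exact algorithm: the learning rate `eta`, the function name, and the use of raw cumulative losses (rather than the paper's specific loss estimators for varying sample sizes) are assumptions for exposition.

```python
import numpy as np

def ftrl_entropic(loss_estimates, eta):
    """Generic FTRL with entropic regularizer over the m-simplex.

    loss_estimates: array of shape (t, m), estimated losses per round
                    for each of the m groups (illustrative placeholder
                    for the paper's PLA loss estimators).
    eta:            learning rate (assumed constant here for simplicity).

    Returns the next sampling distribution q over the m groups,
    proportional to exp(eta * cumulative estimated loss), so groups
    with larger estimated loss are sampled more often.
    """
    cum = np.asarray(loss_estimates).sum(axis=0)   # shape (m,)
    w = np.exp(eta * (cum - cum.max()))            # max-shift for stability
    return w / w.sum()
```

With equal cumulative losses the update returns the uniform distribution; as one group accumulates larger estimated loss, its sampling probability grows, which is how the adversarial player concentrates queries on the hardest distribution.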

@article{bai2025_2505.15212,
  title={Group Distributionally Robust Optimization with Flexible Sample Queries},
  author={Haomin Bai and Dingzhi Yu and Shuai Li and Haipeng Luo and Lijun Zhang},
  journal={arXiv preprint arXiv:2505.15212},
  year={2025}
}