
Efficient Algorithms for Empirical Group Distributional Robust Optimization and Beyond

Abstract

We investigate the empirical counterpart of group distributionally robust optimization (GDRO), which aims to minimize the maximal empirical risk across $m$ distinct groups. We formulate empirical GDRO as a \textit{two-level} finite-sum convex-concave minimax optimization problem and develop a stochastic variance-reduced mirror prox algorithm. Unlike existing methods, we construct the stochastic gradient via a per-group sampling technique and perform variance reduction for all groups, fully exploiting the \textit{two-level} finite-sum structure of empirical GDRO. Furthermore, we compute the snapshot and mirror snapshot points by a one-index-shifted weighted average, which distinguishes our approach from the naive ergodic average. Our algorithm also supports non-constant learning rates, in contrast to the existing literature. We establish convergence guarantees both in expectation and with high probability, demonstrating a complexity of $\mathcal{O}\left(\frac{m\sqrt{\bar{n}\ln m}}{\varepsilon}\right)$, where $\bar{n}$ is the average number of samples among the $m$ groups. Remarkably, our approach outperforms the state-of-the-art method by a factor of $\sqrt{m}$. Furthermore, we extend our methodology to the empirical minimax excess risk optimization (MERO) problem and establish the corresponding expectation and high-probability bounds. The complexity of our empirical MERO algorithm matches that of empirical GDRO at $\mathcal{O}\left(\frac{m\sqrt{\bar{n}\ln m}}{\varepsilon}\right)$, significantly surpassing the bounds of existing methods.
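To make the setup concrete, the following is a minimal illustrative sketch of the empirical GDRO objective $\min_w \max_i R_i(w)$ and of the per-group sampling idea described above, under an assumed squared loss; the loss choice, function names, and the simple estimator here are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def group_risks(w, groups):
    """Empirical risk R_i(w) of each group, using an assumed squared loss.

    `groups` is a list of (X, y) pairs, one per group.
    """
    return np.array([np.mean((X @ w - y) ** 2) for X, y in groups])

def gdro_objective(w, groups):
    # Empirical GDRO minimizes the maximal empirical risk over the m groups.
    return group_risks(w, groups).max()

def per_group_sampled_gradient(w, q, groups, rng):
    # Per-group sampling: draw one sample from EVERY group, then form a
    # stochastic gradient of the q-weighted risk sum_i q_i R_i(w),
    # where q lies in the simplex (the max player's variable).
    g = np.zeros_like(w)
    for q_i, (X, y) in zip(q, groups):
        j = rng.integers(len(y))                    # one index from group i
        g += q_i * 2.0 * (X[j] @ w - y[j]) * X[j]   # grad of squared loss
    return g
```

In expectation, the estimator equals the exact gradient of the weighted risk; the paper's method additionally applies variance reduction via snapshot points, which this sketch omits.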
