Stochastic Approximation Approaches to Group Distributionally Robust Optimization

Abstract

This paper investigates group distributionally robust optimization (GDRO), whose goal is to learn a model that performs well over $m$ different distributions. First, we formulate GDRO as a stochastic convex-concave saddle-point problem, and demonstrate that stochastic mirror descent (SMD), using $m$ samples in each iteration, achieves an $O(m(\log m)/\epsilon^2)$ sample complexity for finding an $\epsilon$-optimal solution, which matches the $\Omega(m/\epsilon^2)$ lower bound up to a logarithmic factor. Then, we make use of techniques from online learning to reduce the number of samples required in each round from $m$ to $1$, while keeping the same sample complexity. Specifically, we cast GDRO as a two-player game in which one player simply performs SMD and the other executes an online algorithm for non-oblivious multi-armed bandits. Next, we consider a more practical scenario in which the number of samples that can be drawn from each distribution differs, and propose a novel formulation of weighted GDRO that allows us to derive distribution-dependent convergence rates. Denote by $n_i$ the sample budget for the $i$-th distribution, and assume $n_1 \geq n_2 \geq \cdots \geq n_m$. In the first approach, we incorporate non-uniform sampling into SMD so that the sample budget is satisfied in expectation, and prove that the excess risk of the $i$-th distribution decreases at an $O(\sqrt{n_1 \log m}/n_i)$ rate. In the second approach, we use mini-batches to meet the budget exactly and also reduce the variance in the stochastic gradients, and then leverage the stochastic mirror-prox algorithm, which can exploit small variances, to optimize a carefully designed weighted GDRO problem. Under appropriate conditions, it attains an $O((\log m)/\sqrt{n_i})$ convergence rate, which almost matches the optimal $O(\sqrt{1/n_i})$ rate of learning from the $i$-th distribution alone with $n_i$ samples.
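To make the saddle-point formulation concrete, below is a minimal sketch of SMD applied to $\min_{w} \max_{q \in \Delta_m} \sum_{i=1}^m q_i R_i(w)$, where $R_i$ is the risk on the $i$-th distribution: in each round, the model $w$ takes a stochastic gradient descent step while the weights $q$ take an entropic (exponentiated-gradient) ascent step over the simplex. This is an illustrative sketch rather than the paper's implementation; the callables `sample`, `loss`, and `grad_loss`, and the step sizes `eta_w` and `eta_q`, are hypothetical placeholders.

```python
import numpy as np

def smd_gdro(sample, loss, grad_loss, m, dim, T, eta_w, eta_q):
    """Sketch of SMD for min_w max_{q in simplex} sum_i q_i R_i(w).

    sample(i)       -> one fresh data point z from the i-th distribution
    loss(w, z)      -> scalar loss of model w on point z
    grad_loss(w, z) -> gradient of that loss with respect to w
    (All three are hypothetical user-supplied callables.)
    """
    w = np.zeros(dim)          # model iterate (Euclidean mirror map)
    q = np.ones(m) / m         # weights over distributions (entropic mirror map)
    w_bar, q_bar = np.zeros(dim), np.zeros(m)
    for _ in range(T):
        zs = [sample(i) for i in range(m)]            # m samples per iteration
        risks = np.array([loss(w, z) for z in zs])    # unbiased estimates of R_i(w)
        g_w = sum(q[i] * grad_loss(w, zs[i]) for i in range(m))
        w = w - eta_w * g_w                           # descent step for the model
        q = q * np.exp(eta_q * risks)                 # exponentiated-gradient ascent
        q = q / q.sum()                               # renormalize onto the simplex
        w_bar += w / T                                # keep running averages, as is
        q_bar += q / T                                # standard for saddle-point SMD
    return w_bar, q_bar
```

The entropic update on $q$ is what introduces the $\log m$ factor in the rate; the one-sample variant described in the abstract would replace the full vector of losses with a bandit-style estimate, so that only a single distribution is queried per round.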
