Mirror Descent Algorithms with Nearly Dimension-Independent Rates for Differentially-Private Stochastic Saddle-Point Problems
We study differentially-private (DP) stochastic (convex-concave) saddle-point problems in the $\ell_1$ setting. We propose $(\epsilon,\delta)$-DP algorithms based on stochastic mirror descent that attain nearly dimension-independent convergence rates for the expected duality gap, a type of guarantee previously known only for bilinear objectives. For convex-concave and first-order-smooth stochastic objectives, our algorithms attain a rate that is polylogarithmic in $d$ and vanishes polynomially in $n$, where $d$ is the dimension of the problem and $n$ the dataset size. Under an additional second-order-smoothness assumption, we show that the duality gap satisfies a bound of the same nearly dimension-independent type with high probability, by using bias-reduced gradient estimators. This rate provides evidence of the near-optimality of our approach, since a nearly matching lower bound exists. Finally, we show that combining our methods with acceleration techniques from online learning leads to the first algorithm for DP Stochastic Convex Optimization in the $\ell_1$ setting that is not based on Frank-Wolfe methods. For convex and first-order-smooth stochastic objectives, our algorithms attain a nearly dimension-independent excess risk, and when additionally assuming second-order-smoothness, we further improve the rate. Instrumental to all of these results are various extensions of the classical Maurey Sparsification Lemma \cite{Pisier:1980}, which may be of independent interest.
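As a rough illustration of the algorithmic template (and not the paper's actual algorithm), the sketch below runs entropic stochastic mirror descent-ascent on a toy bilinear matrix game over probability simplices, with Gaussian noise added to the gradients as a stand-in for DP noise injection. The function name `noisy_entropic_md`, the step size `eta`, the noise scale `sigma`, and the bilinear objective are illustrative assumptions; no privacy calibration or accounting is performed.

```python
import numpy as np

def noisy_entropic_md(A, steps=2000, eta=0.05, sigma=0.1, seed=None):
    """Entropic mirror descent-ascent for the bilinear game min_x max_y x^T A y
    over probability simplices, with Gaussian noise on the gradients
    (a stand-in for DP noise; no privacy accounting is done here)."""
    rng = np.random.default_rng(seed)
    d1, d2 = A.shape
    x = np.full(d1, 1.0 / d1)  # uniform initialization on the simplex
    y = np.full(d2, 1.0 / d2)
    x_avg, y_avg = np.zeros(d1), np.zeros(d2)
    for _ in range(steps):
        gx = A @ y + sigma * rng.standard_normal(d1)    # noisy gradient in x
        gy = A.T @ x + sigma * rng.standard_normal(d2)  # noisy gradient in y
        # Multiplicative-weights form of the entropic mirror-descent update:
        # the min player descends, the max player ascends.
        x = x * np.exp(-eta * gx); x /= x.sum()
        y = y * np.exp(eta * gy);  y /= y.sum()
        x_avg += x; y_avg += y
    return x_avg / steps, y_avg / steps  # averaged iterates

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
x_bar, y_bar = noisy_entropic_md(A, seed=0)
# Duality gap of the averaged iterates: max_y x_bar^T A y - min_x x^T A y_bar.
gap = (A.T @ x_bar).max() - (A @ y_bar).min()
print(f"duality gap: {gap:.3f}")
```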
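The classical Maurey Sparsification Lemma says, roughly, that any point of the probability simplex in $\mathbb{R}^d$ can be approximated by an average of $k$ i.i.d. random vertices, i.e., by a $k$-sparse point, with coordinatewise error on the order of $\sqrt{\log(d)/k}$. Below is a minimal sketch of this classical simplex case only; the helper name `maurey_sparsify` and the numerical experiment are illustrative assumptions, and the paper's extensions of the lemma are not reproduced.

```python
import numpy as np

def maurey_sparsify(x, k, rng=None):
    """k-sparse approximation of a point x in the probability simplex:
    draw k i.i.d. coordinates with probabilities x and average the
    corresponding standard basis vectors (the classical Maurey argument)."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(x.size, size=k, p=x)
    return np.bincount(idx, minlength=x.size) / k

rng = np.random.default_rng(1)
d, k = 100_000, 500
x = rng.random(d); x /= x.sum()                     # a dense simplex point
x_hat = maurey_sparsify(x, k, rng)
print("nonzeros:", np.count_nonzero(x_hat))         # at most k
print("linf error:", np.abs(x_hat - x).max())
print("sqrt(log(d)/k):", np.sqrt(np.log(d) / k))    # Maurey-style error scale
```

By Hoeffding's inequality and a union bound over the $d$ coordinates, the $\ell_\infty$ error of `x_hat` concentrates below the $\sqrt{\log(d)/k}$ scale printed on the last line, which is the dimension dependence that makes sparsification useful for nearly dimension-independent rates.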