
Mirror Descent Algorithms with Nearly Dimension-Independent Rates for Differentially-Private Stochastic Saddle-Point Problems

Annual Conference Computational Learning Theory (COLT), 2024
Main: 27 pages · Bibliography: 2 pages
Abstract

We study the problem of differentially-private (DP) stochastic (convex-concave) saddle-point problems in the $\ell_1$ setting. We propose $(\varepsilon, \delta)$-DP algorithms based on stochastic mirror descent that attain nearly dimension-independent convergence rates for the expected duality gap, a type of guarantee that was previously known only for bilinear objectives. For convex-concave and first-order-smooth stochastic objectives, our algorithms attain a rate of $\sqrt{\log(d)/n} + (\log(d)^{3/2}/[n\varepsilon])^{1/3}$, where $d$ is the dimension of the problem and $n$ the dataset size. Under an additional second-order-smoothness assumption, we show that the duality gap is bounded by $\sqrt{\log(d)/n} + \log(d)/\sqrt{n\varepsilon}$ with high probability, by using bias-reduced gradient estimators. This rate provides evidence of the near-optimality of our approach, since a lower bound of $\sqrt{\log(d)/n} + \log(d)^{3/4}/\sqrt{n\varepsilon}$ exists. Finally, we show that combining our methods with acceleration techniques from online learning leads to the first algorithm for DP Stochastic Convex Optimization in the $\ell_1$ setting that is not based on Frank-Wolfe methods. For convex and first-order-smooth stochastic objectives, our algorithms attain an excess risk of $\sqrt{\log(d)/n} + \log(d)^{7/10}/[n\varepsilon]^{2/5}$, and when additionally assuming second-order smoothness, we improve the rate to $\sqrt{\log(d)/n} + \log(d)/\sqrt{n\varepsilon}$. Instrumental to all of these results are various extensions of the classical Maurey Sparsification Lemma \cite{Pisier:1980}, which may be of independent interest.
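As a rough illustration of the kind of update the abstract refers to, the sketch below implements noisy exponentiated-gradient descent-ascent (stochastic mirror descent with the entropy mirror map) over two probability simplices. This is not the paper's algorithm: the function names, the choice of Gaussian noise, and the noise scale sigma are illustrative assumptions, and an actual $(\varepsilon, \delta)$-DP guarantee would require calibrating sigma to the gradient sensitivity and the composition over iterations.

import numpy as np

def dp_smd_saddle(grad_x, grad_y, d, n_iters, eta, sigma, seed=0):
    """Noisy entropic mirror descent-ascent over two probability simplices.

    grad_x, grad_y: callables returning stochastic gradients at (x, y).
    sigma: Gaussian noise scale (placeholder; a real (eps, delta)-DP
    guarantee requires calibrating it to sensitivity and composition).
    """
    rng = np.random.default_rng(seed)
    x = np.full(d, 1.0 / d)  # uniform initialization on the simplex
    y = np.full(d, 1.0 / d)
    x_avg = np.zeros(d)
    y_avg = np.zeros(d)
    for _ in range(n_iters):
        gx = grad_x(x, y) + rng.normal(0.0, sigma, d)  # privatized gradient in x
        gy = grad_y(x, y) + rng.normal(0.0, sigma, d)  # privatized gradient in y
        # Entropic mirror step: multiplicative update, then renormalize.
        x = x * np.exp(-eta * gx)  # descent in x
        x /= x.sum()
        y = y * np.exp(eta * gy)   # ascent in y
        y /= y.sum()
        x_avg += x / n_iters
        y_avg += y / n_iters
    return x_avg, y_avg  # averaged iterates, as used in duality-gap bounds

For instance, for a bilinear objective $f(x, y) = x^\top A y$ one would pass grad_x = lambda x, y: A @ y and grad_y = lambda x, y: A.T @ x; the multiplicative form of the update is what makes the method's dependence on the dimension only logarithmic in the $\ell_1$ geometry.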
