Improved Rates of Differentially Private Nonconvex-Strongly-Concave Minimax Optimization

AAAI Conference on Artificial Intelligence (AAAI), 2025
Main: 8 pages · Appendix: 16 pages · Bibliography: 4 pages · 7 figures · 5 tables
Abstract

In this paper, we study the problem of (finite-sum) minimax optimization in the Differential Privacy (DP) model. Unlike most previous studies, which consider (strongly) convex-concave settings or loss functions satisfying the Polyak-Łojasiewicz condition, here we mainly focus on the nonconvex-strongly-concave setting, which encapsulates many models in deep learning such as deep AUC maximization. Specifically, we first analyze a DP version of Stochastic Gradient Descent Ascent (SGDA) and show that it is possible to obtain a DP estimator whose $\ell_2$-norm of the gradient of the empirical risk function is upper bounded by $\tilde{O}(\frac{d^{1/4}}{(n\epsilon)^{1/2}})$, where $d$ is the model dimension and $n$ is the sample size. We then propose a new method with lower gradient-noise variance and improve the upper bound to $\tilde{O}(\frac{d^{1/3}}{(n\epsilon)^{2/3}})$, which matches the best-known result for DP Empirical Risk Minimization with non-convex loss. We also discuss several lower bounds for private minimax optimization. Finally, experiments on AUC maximization, generative adversarial networks, and temporal difference learning with real-world data support our theoretical analysis.
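The abstract does not spell out the algorithmic details, but the DP-SGDA baseline it analyzes follows the usual recipe: per-sample gradient clipping plus Gaussian noise, with a descent step on the nonconvex variable and an ascent step on the strongly concave one. The sketch below is a minimal illustration of that recipe, not the paper's exact algorithm; the function name `dp_sgda`, the toy gradient oracles, and all hyperparameter values (clip norm, noise multiplier, step sizes, batch size) are assumptions for illustration, and in practice the noise scale would be calibrated to a target $(\epsilon, \delta)$ via a privacy accountant.

```python
import numpy as np

def dp_sgda(grad_x_i, grad_y_i, x0, y0, n, T=500, eta_x=0.05, eta_y=0.05,
            clip=1.0, sigma=1.0, batch=32, rng=None):
    """Illustrative DP-SGDA sketch (not the paper's exact method):
    per-sample clipping + Gaussian noise, descent on x, ascent on y."""
    rng = np.random.default_rng(0) if rng is None else rng
    x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
    for _ in range(T):
        idx = rng.choice(n, size=batch, replace=False)
        gx, gy = np.zeros_like(x), np.zeros_like(y)
        for i in idx:
            gxi, gyi = grad_x_i(x, y, i), grad_y_i(x, y, i)
            # Clip each per-sample gradient to bound its sensitivity.
            gx += gxi * min(1.0, clip / (np.linalg.norm(gxi) + 1e-12))
            gy += gyi * min(1.0, clip / (np.linalg.norm(gyi) + 1e-12))
        # Gaussian-mechanism noise; sigma is an assumed placeholder, not a
        # calibrated value for a specific (epsilon, delta).
        gx = gx / batch + rng.normal(0.0, sigma * clip / batch, size=x.shape)
        gy = gy / batch + rng.normal(0.0, sigma * clip / batch, size=y.shape)
        x -= eta_x * gx  # descent on the nonconvex primal variable
        y += eta_y * gy  # ascent on the strongly concave dual variable
    return x, y

# Hypothetical toy problem: min_x max_y (1/n) sum_i [ <a_i, x> * y - 0.5 * y^2 ]
A = np.random.default_rng(1).normal(size=(200, 5))
gx_oracle = lambda x, y, i: A[i] * y
gy_oracle = lambda x, y, i: np.array([A[i] @ x - y[0]])
x_hat, y_hat = dp_sgda(gx_oracle, gy_oracle, np.zeros(5), np.zeros(1), n=200)
```

The improved $\tilde{O}(\frac{d^{1/3}}{(n\epsilon)^{2/3}})$ rate in the abstract comes from replacing the plain stochastic gradients above with lower-variance gradient estimates; the clipping-plus-Gaussian-noise structure of each private step stays the same.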
