
Beyond Ordinary Lipschitz Constraints: Differentially Private Stochastic Optimization with Tsybakov Noise Condition

Main: 12 pages
7 figures
Bibliography: 4 pages
1 table
Appendix: 15 pages
Abstract

We study Stochastic Convex Optimization in the Differential Privacy model (DP-SCO). Unlike previous studies, here we assume the population risk function satisfies the Tsybakov Noise Condition (TNC) with some parameter $\theta>1$, where the Lipschitz constant of the loss could be extremely large or even unbounded, but the $\ell_2$-norm of the gradient of the loss has a bounded $k$-th moment with $k\geq 2$. For the Lipschitz case with $\theta\geq 2$, we first propose an $(\varepsilon,\delta)$-DP algorithm whose utility bound is $\tilde{O}\big(\big(\tilde{r}_{2k}(\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n\varepsilon})^{\frac{k-1}{k}}\big)^{\frac{\theta}{\theta-1}}\big)$ with high probability, where $n$ is the sample size, $d$ is the model dimension, and $\tilde{r}_{2k}$ is a term that depends only on the $2k$-th moment of the gradient. Notably, this upper bound is independent of the Lipschitz constant. We then extend the result to the case where $\theta\geq \bar{\theta}>1$ for some known constant $\bar{\theta}$. Moreover, when the privacy budget $\varepsilon$ is small enough, we show an upper bound of $\tilde{O}\big(\big(\tilde{r}_{k}(\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n\varepsilon})^{\frac{k-1}{k}}\big)^{\frac{\theta}{\theta-1}}\big)$ even if the loss function is not Lipschitz. For the lower bound, we show that for any $\theta\geq 2$, the private minimax rate under $\rho$-zero Concentrated Differential Privacy is lower bounded by $\Omega\big(\big(\tilde{r}_{k}(\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n\sqrt{\rho}})^{\frac{k-1}{k}}\big)^{\frac{\theta}{\theta-1}}\big)$.
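
To see where the exponent $\theta/(\theta-1)$ in these bounds can come from, the following sketch uses a common growth-condition form of the TNC, $\lambda\,\mathrm{dist}(w,\mathcal{W}^*)^{\theta}\le F(w)-F^*$ for all feasible $w$; the constant $\lambda$ and this exact formulation are assumptions for illustration and may differ from the paper's definitions.

\[
\begin{aligned}
F(w)-F^* &\le \langle \nabla F(w),\, w-w^*\rangle \le \|\nabla F(w)\|_2\,\mathrm{dist}(w,\mathcal{W}^*)
&& \text{(convexity, $w^*$ the projection of $w$ onto $\mathcal{W}^*$)}\\
\lambda\,\mathrm{dist}(w,\mathcal{W}^*)^{\theta} &\le \|\nabla F(w)\|_2\,\mathrm{dist}(w,\mathcal{W}^*)
&& \text{(TNC-type growth condition)}\\
\Rightarrow\quad F(w)-F^* &\le \lambda^{-\frac{1}{\theta-1}}\,\|\nabla F(w)\|_2^{\frac{\theta}{\theta-1}}.
\end{aligned}
\]

Thus, if a private procedure returns a point whose population gradient norm is roughly $\tilde{r}\,(\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n\varepsilon})^{\frac{k-1}{k}}$, an excess-risk bound of the form stated above follows by raising that quantity to the power $\theta/(\theta-1)$.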

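As a generic illustration only (not the paper's algorithm), a standard building block for private optimization when gradients are heavy-tailed or the loss is not Lipschitz is to clip per-sample gradients and add Gaussian noise; the clipping threshold would in practice be tuned to the $k$-th moment bound on the gradient. The function name and all parameter choices below are hypothetical.

import numpy as np

def private_clipped_gradient(per_sample_grads, clip_c, epsilon, delta, rng):
    """Clip each per-sample gradient to norm clip_c, average, and add Gaussian
    noise calibrated to (epsilon, delta)-DP for a single release.
    Illustrative sketch only; clip_c stands in for a moment-based threshold."""
    n, d = per_sample_grads.shape
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_c / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * scale              # each row now has norm <= clip_c
    avg = clipped.mean(axis=0)                      # replacing one sample moves the mean by <= 2*clip_c/n
    sigma = (2 * clip_c / n) * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return avg + rng.normal(0.0, sigma, size=d)     # Gaussian mechanism

# Usage sketch on synthetic heavy-tailed gradients.
rng = np.random.default_rng(0)
grads = rng.standard_t(df=3, size=(1000, 5))
g_priv = private_clipped_gradient(grads, clip_c=5.0, epsilon=1.0, delta=1e-5, rng=rng)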