
We study Stochastic Convex Optimization in the Differential Privacy model (DP-SCO). Unlike previous studies, here we assume the population risk function satisfies the Tsybakov Noise Condition (TNC) with some parameter θ, a setting where the Lipschitz constant of the loss could be extremely large or even unbounded, but the norm of the gradient of the loss has a bounded k-th moment. For the Lipschitz case (with θ in a certain range), we first propose a differentially private algorithm whose utility bound holds with high probability and is expressed in terms of the sample size n, the model dimension d, and a term that depends only on the k-th moment of the gradient. Notably, this upper bound is independent of the Lipschitz constant. We then extend the result to the case where θ is only assumed to be at least some known constant. Moreover, when the privacy budget ε is small enough, we establish an upper bound even when the loss function is not Lipschitz. For the lower bound, we show that, for any θ, the private minimax rate under ρ-zero Concentrated Differential Privacy (ρ-zCDP) is bounded from below by a corresponding term.
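For context, the assumptions named above are commonly formalized as follows. These are standard formulations from the optimization and privacy literature, not quotations from the paper, and the symbols θ, k, λ, M, and ρ, as well as the exact forms shown, are illustrative assumptions; the paper's own definitions may differ in details. Here F denotes the population risk over a domain W, W* its set of minimizers, f(·; x) the per-sample loss on example x drawn from the data distribution D, and M a randomized mechanism.

TNC (growth condition) with parameter θ: there exists λ > 0 such that
\[
F(w) - \min_{w' \in \mathcal{W}} F(w') \;\ge\; \lambda \,\min_{w^* \in \mathcal{W}^*} \|w - w^*\|_2^{\theta} \qquad \text{for all } w \in \mathcal{W}.
\]

Bounded k-th moment of the gradient: there exists M < \infty such that
\[
\mathbb{E}_{x \sim \mathcal{D}}\!\left[\|\nabla f(w; x)\|^{k}\right] \;\le\; M^{k} \qquad \text{for all } w \in \mathcal{W}.
\]

ρ-zCDP (Bun and Steinke, 2016): a mechanism \(\mathcal{M}\) satisfies ρ-zero Concentrated Differential Privacy if, for all neighboring datasets D, D' and all α > 1,
\[
D_{\alpha}\!\left(\mathcal{M}(D)\,\middle\|\,\mathcal{M}(D')\right) \;\le\; \rho\,\alpha,
\]
where \(D_{\alpha}\) is the Rényi divergence of order α.

Under a growth condition of the first form, small excess risk forces the solution to be close to the minimizer set, which is what typically enables rates faster than those for general convex losses.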