Beyond Ordinary Lipschitz Constraints: Differentially Private Stochastic Optimization with Tsybakov Noise Condition

4 September 2025
Difei Xu, Meng Ding, Zihang Xiang, Jinhui Xu, Haiyan Zhao
arXiv: 2509.04668 (abs / PDF / HTML)
Main: 12 pages, 7 figures, 1 table; Bibliography: 4 pages; Appendix: 15 pages
Abstract

We study Stochastic Convex Optimization in the Differential Privacy model (DP-SCO). Unlike previous studies, here we assume that the population risk function satisfies the Tsybakov Noise Condition (TNC) with some parameter $\theta > 1$, where the Lipschitz constant of the loss could be extremely large or even unbounded, but the $\ell_2$-norm of the gradient of the loss has a bounded $k$-th moment with $k \geq 2$. For the Lipschitz case with $\theta \geq 2$, we first propose an $(\varepsilon, \delta)$-DP algorithm whose utility bound is $\tilde{O}\big(\big(\tilde{r}_{2k}\big(\tfrac{1}{\sqrt{n}} + \tfrac{\sqrt{d}}{n\varepsilon}\big)^{\frac{k-1}{k}}\big)^{\frac{\theta}{\theta-1}}\big)$ with high probability, where $n$ is the sample size, $d$ is the model dimension, and $\tilde{r}_{2k}$ is a term that depends only on the $2k$-th moment of the gradient. Notably, this upper bound is independent of the Lipschitz constant. We then extend the result to the case where $\theta \geq \bar{\theta} > 1$ for some known constant $\bar{\theta}$. Moreover, when the privacy budget $\varepsilon$ is small enough, we show an upper bound of $\tilde{O}\big(\big(\tilde{r}_{k}\big(\tfrac{1}{\sqrt{n}} + \tfrac{\sqrt{d}}{n\varepsilon}\big)^{\frac{k-1}{k}}\big)^{\frac{\theta}{\theta-1}}\big)$ even if the loss function is not Lipschitz. For the lower bound, we show that for any $\theta \geq 2$, the private minimax rate under $\rho$-zero Concentrated Differential Privacy (zCDP) is lower bounded by $\Omega\big(\big(\tilde{r}_{k}\big(\tfrac{1}{\sqrt{n}} + \tfrac{\sqrt{d}}{n\sqrt{\rho}}\big)^{\frac{k-1}{k}}\big)^{\frac{\theta}{\theta-1}}\big)$.
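
To make the scaling of the stated upper bound concrete, the following minimal Python sketch evaluates the rate $\big(\tilde{r}_{2k}(1/\sqrt{n} + \sqrt{d}/(n\varepsilon))^{(k-1)/k}\big)^{\theta/(\theta-1)}$ for given problem parameters. The function name, the default constants, and the omission of polylogarithmic factors and universal constants are illustrative assumptions, not taken from the paper.

import math

def tnc_dp_sco_upper_bound(n, d, eps, k=2, theta=2.0, r_2k=1.0):
    """Illustrative (hypothetical) evaluation of the claimed utility bound,
    ignoring polylog factors and universal constants:
        (r_2k * (1/sqrt(n) + sqrt(d)/(n*eps))**((k-1)/k))**(theta/(theta-1))
    Assumes k >= 2 and theta > 1, as in the paper's setting."""
    stat_term = 1.0 / math.sqrt(n)        # non-private statistical term
    priv_term = math.sqrt(d) / (n * eps)  # cost of (eps, delta)-differential privacy
    base = r_2k * (stat_term + priv_term) ** ((k - 1) / k)
    return base ** (theta / (theta - 1))

# Example: with d and eps fixed, the privacy term shrinks faster than the
# statistical term as n grows.
for n in (10**3, 10**5, 10**7):
    print(n, tnc_dp_sco_upper_bound(n=n, d=100, eps=1.0, k=2, theta=2.0))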
