ResearchTrend.AI

DIFF2: Differential Private Optimization via Gradient Differences for Nonconvex Distributed Learning

8 February 2023
Tomoya Murata
Taiji Suzuki
Abstract

Differentially private optimization for nonconvex smooth objectives is considered. In previous work, the best known utility bound is $\widetilde O(\sqrt{d}/(n\varepsilon_\mathrm{DP}))$ in terms of the squared full gradient norm, achieved for instance by Differentially Private Gradient Descent (DP-GD), where $n$ is the sample size, $d$ is the problem dimensionality and $\varepsilon_\mathrm{DP}$ is the differential privacy parameter. To improve on this bound, we propose a new differentially private optimization framework called \emph{DIFF2 (DIFFerential private optimization via gradient DIFFerences)}, which constructs a differentially private global gradient estimator with possibly quite small variance based on communicated \emph{gradient differences} rather than the gradients themselves. It is shown that DIFF2 with a gradient descent subroutine achieves a utility of $\widetilde O(d^{2/3}/(n\varepsilon_\mathrm{DP})^{4/3})$, which can be significantly better than the previous bound in terms of the dependence on the sample size $n$. To the best of our knowledge, this is the first fundamental result to improve the standard utility $\widetilde O(\sqrt{d}/(n\varepsilon_\mathrm{DP}))$ for nonconvex objectives. Additionally, a more computation- and communication-efficient subroutine is combined with DIFF2, and its theoretical analysis is also given. Numerical experiments are conducted to validate the superiority of the DIFF2 framework.
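
For intuition on the improvement, dividing the new bound by the old one gives $d^{1/6}/(n\varepsilon_\mathrm{DP})^{1/3}$, so for fixed dimension $d$ the advantage grows as the sample size $n$ increases. The sketch below is a minimal, hypothetical illustration of a DIFF2-style round, not the paper's exact algorithm or noise calibration: each worker communicates a clipped gradient difference, the server averages the differences, adds Gaussian noise, and accumulates the result into a private global gradient estimator that drives a gradient descent step. The names `clip_c` (L2 clipping threshold) and `noise_sigma` (noise multiplier, which a real implementation would set from the privacy budget) are illustrative assumptions.

```python
import numpy as np

def clip(v, c):
    """Scale v so its L2 norm is at most c (standard DP clipping; illustrative)."""
    norm = np.linalg.norm(v)
    return v * min(1.0, c / (norm + 1e-12))

def diff2_round(grads_now, grads_prev, v_prev, clip_c, noise_sigma, rng):
    """One hypothetical DIFF2-style round (sketch, not the paper's calibration).

    grads_now:  list of current local gradients, one per worker
    grads_prev: list of local gradients at the previous iterate
    v_prev:     previous private global gradient estimator
    Returns the updated private estimator.
    """
    # Each worker communicates a clipped *gradient difference*, whose norm is
    # typically much smaller than the gradient itself as the iterates stabilize.
    diffs = [clip(g - g_old, clip_c) for g, g_old in zip(grads_now, grads_prev)]
    avg_diff = np.mean(diffs, axis=0)
    # Gaussian noise scaled to the clipping threshold (constants are illustrative).
    noise = rng.normal(0.0, noise_sigma * clip_c, size=avg_diff.shape)
    # Variance-reduced, differentially private estimator of the full gradient.
    return v_prev + avg_diff + noise

# Minimal usage sketch on a toy quadratic objective f_i(x) = 0.5 * ||x - a_i||^2.
rng = np.random.default_rng(0)
dim, n_workers = 10, 4
anchors = [rng.normal(size=dim) for _ in range(n_workers)]
grad = lambda x, a: x - a

x_prev = np.zeros(dim)
x = rng.normal(size=dim)
# Initial estimator from the full gradient (the paper would privatize this step too).
v = np.mean([grad(x_prev, a) for a in anchors], axis=0)
for _ in range(50):
    g_now = [grad(x, a) for a in anchors]
    g_old = [grad(x_prev, a) for a in anchors]
    v = diff2_round(g_now, g_old, v, clip_c=1.0, noise_sigma=0.01, rng=rng)
    x_prev, x = x, x - 0.5 * v  # gradient descent subroutine with step size 0.5
```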

View on arXiv: 2302.03884