Second-Order Convergence in Private Stochastic Non-Convex Optimization

21 May 2025
Youming Tao
Zuyuan Zhang
Dongxiao Yu
Xiuzhen Cheng
Falko Dressler
Di Wang
Abstract

We investigate the problem of finding second-order stationary points (SOSP) in differentially private (DP) stochastic non-convex optimization. Existing methods suffer from two key limitations: (i) inaccurate convergence error rate due to overlooking gradient variance in the saddle point escape analysis, and (ii) dependence on auxiliary private model selection procedures for identifying DP-SOSP, which can significantly impair utility, particularly in distributed settings. To address these issues, we propose a generic perturbed stochastic gradient descent (PSGD) framework built upon Gaussian noise injection and general gradient oracles. A core innovation of our framework is using model drift distance to determine whether PSGD escapes saddle points, ensuring convergence to approximate local minima without relying on second-order information or additional DP-SOSP identification. By leveraging the adaptive DP-SPIDER estimator as a specific gradient oracle, we develop a new DP algorithm that rectifies the convergence error rates reported in prior work. We further extend this algorithm to distributed learning with arbitrarily heterogeneous data, providing the first formal guarantees for finding DP-SOSP in such settings. Our analysis also highlights the detrimental impacts of private selection procedures in distributed learning under high-dimensional models, underscoring the practical benefits of our design. Numerical experiments on real-world datasets validate the efficacy of our approach.
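
The abstract describes the core mechanism: Gaussian noise is injected into the updates of a generic gradient oracle, and escape from a saddle point is decided by the model drift distance rather than by second-order information or a separate private selection step. The Python sketch below is only an illustrative rendering of that idea under assumed names (perturbed_sgd, drift_threshold, escape_window are hypothetical); it is not the paper's algorithm and omits the adaptive DP-SPIDER gradient oracle, the privacy accounting, and the distributed setting.

import numpy as np

def perturbed_sgd(grad_oracle, x0, step_size=0.05, noise_std=0.01,
                  drift_threshold=0.5, escape_window=50, n_iters=2000,
                  seed=0):
    """Minimal perturbed SGD sketch with a drift-based stopping rule."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    anchor = x.copy()            # reference point for measuring model drift
    steps_since_anchor = 0

    for _ in range(n_iters):
        g = grad_oracle(x)                                # gradient estimate (stochastic in general)
        noise = noise_std * rng.standard_normal(x.shape)  # Gaussian perturbation
        x = x - step_size * (g + noise)
        steps_since_anchor += 1

        if steps_since_anchor >= escape_window:
            drift = np.linalg.norm(x - anchor)            # model drift distance
            if drift < drift_threshold:
                # Small drift over the whole window: treat the anchor as an
                # approximate local minimum and stop.
                return anchor
            # Large drift: the iterate escaped the region around the anchor;
            # reset the anchor and keep optimizing.
            anchor = x.copy()
            steps_since_anchor = 0
    return x

# Toy usage (hypothetical): f(x) = x0^4/4 - x0^2/2 + x1^2 has a saddle at the
# origin and local minima at (+/-1, 0); the noise pushes the iterate off the saddle.
grad_f = lambda x: np.array([x[0]**3 - x[0], 2.0 * x[1]])
x_hat = perturbed_sgd(grad_f, x0=[0.0, 1.0])

The drift test stands in for the paper's escape criterion: if the iterate barely moves over a full window despite the injected noise, the anchor point is accepted as an approximate second-order stationary point; otherwise the window is restarted from the new position.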

@article{tao2025_2505.15647,
  title={Second-Order Convergence in Private Stochastic Non-Convex Optimization},
  author={Youming Tao and Zuyuan Zhang and Dongxiao Yu and Xiuzhen Cheng and Falko Dressler and Di Wang},
  journal={arXiv preprint arXiv:2505.15647},
  year={2025}
}