Decentralized Gradient-Free Methods for Stochastic Non-Smooth Non-Convex Optimization

18 October 2023 · arXiv:2310.11973
Zhenwei Lin
Jingfan Xia
Qi Deng
Luo Luo
Abstract

We consider decentralized gradient-free optimization for minimizing Lipschitz continuous functions that are neither smooth nor convex. We propose two novel gradient-free algorithms: the Decentralized Gradient-Free Method (DGFM) and its variant, the Decentralized Gradient-Free Method$^+$ (DGFM$^+$). Built on randomized smoothing and gradient tracking, DGFM requires only a single-sample zeroth-order oracle evaluation per iteration, making it less demanding on the computational resources of individual nodes. Theoretically, DGFM achieves a complexity of $\mathcal{O}(d^{3/2}\delta^{-1}\varepsilon^{-4})$ for obtaining a $(\delta,\varepsilon)$-Goldstein stationary point. DGFM$^+$, an advanced version of DGFM, incorporates variance reduction to further improve the convergence behavior. It samples a mini-batch at each iteration and periodically draws a larger batch of data, which improves the complexity to $\mathcal{O}(d^{3/2}\delta^{-1}\varepsilon^{-3})$. Moreover, experimental results underscore the empirical advantages of the proposed algorithms on real-world datasets.

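The core primitive named in the abstract, estimating a gradient of a randomized-smoothing surrogate using only function values (a zeroth-order oracle), can be illustrated with a short sketch. The snippet below shows a standard two-point randomized-smoothing estimator, not the authors' full DGFM (which additionally uses gradient tracking and consensus across nodes); the function names, the step size, and the test function are illustrative assumptions.

```python
import numpy as np

def zeroth_order_gradient(f, x, delta, rng):
    """Standard two-point randomized-smoothing gradient estimator (illustrative).

    Draws a uniform direction u on the unit sphere and uses two function
    evaluations to estimate a gradient of the delta-smoothed surrogate
    f_delta(x) = E_u[ f(x + delta * u) ], which is well defined even when
    f is non-smooth and non-convex.
    """
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)            # normalize: uniform direction on the sphere
    d = x.size
    return d * (f(x + delta * u) - f(x - delta * u)) / (2.0 * delta) * u

# Example: estimate a descent direction for the non-smooth function f(x) = ||x||_1.
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 0.5])
g = zeroth_order_gradient(lambda z: np.abs(z).sum(), x, delta=0.1, rng=rng)
print(g)
```

In a decentralized method of this kind, each node would form such a single-sample estimate locally and combine it with neighbors' information (e.g. via a gradient-tracking/consensus step) rather than computing a full gradient, which is what keeps the per-node cost low.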