Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization

12 September 2022
Tianyi Lin
Zeyu Zheng
Michael I. Jordan
Abstract

Nonsmooth nonconvex optimization problems broadly emerge in machine learning and business decision making, whereas two core challenges impede the development of efficient solution methods with finite-time convergence guarantees: the lack of a computationally tractable optimality criterion and the lack of computationally powerful oracles. The contributions of this paper are two-fold. First, we establish the relationship between the celebrated Goldstein subdifferential (Goldstein, 1977) and uniform smoothing, thereby providing the basis and intuition for the design of gradient-free methods that guarantee finite-time convergence to a set of Goldstein stationary points. Second, we propose the gradient-free method (GFM) and stochastic GFM (SGFM) for solving a class of nonsmooth nonconvex optimization problems and prove that both of them can return a $(\delta,\epsilon)$-Goldstein stationary point of a Lipschitz function $f$ at an expected convergence rate of $O(d^{3/2}\delta^{-1}\epsilon^{-4})$, where $d$ is the problem dimension. Two-phase versions of GFM and SGFM are also proposed and proven to achieve improved large-deviation results. Finally, we demonstrate the effectiveness of 2-SGFM on training ReLU neural networks with the MNIST dataset.
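To make the uniform-smoothing idea concrete, the sketch below shows a generic gradient-free loop of the kind the abstract describes: a randomized two-point finite-difference estimate of the gradient of the delta-smoothed surrogate, followed by a plain descent step. This is an illustrative Python sketch only; the function names (two_point_gradient_estimate, gradient_free_method), the step size, and the sampling constants are assumptions for illustration and are not taken from the paper, which should be consulted for the exact estimator and parameter choices.

import numpy as np

def two_point_gradient_estimate(f, x, delta, rng):
    # Randomized two-point estimate of the gradient of the smoothed surrogate
    # f_delta(x) = E_{u ~ Unif(unit ball)}[ f(x + delta * u) ].
    # Only function values of f are queried (gradient-free oracle).
    d = x.shape[0]
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)  # uniform direction on the unit sphere
    return (d / (2.0 * delta)) * (f(x + delta * w) - f(x - delta * w)) * w

def gradient_free_method(f, x0, delta=0.1, step_size=1e-2, num_iters=1000, seed=0):
    # GFM-style loop (sketch): repeatedly step against a randomized
    # finite-difference gradient estimate of the delta-smoothed objective.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(num_iters):
        g = two_point_gradient_estimate(f, x, delta, rng)
        x -= step_size * g
    return x

# Usage on a toy nonsmooth, nonconvex objective (hypothetical test function).
f = lambda x: np.abs(x[0]) + 0.5 * np.minimum(x[1] ** 2, np.abs(x[1]))
x_hat = gradient_free_method(f, x0=np.array([2.0, -3.0]))

The key design point mirrored here is that smoothing is done implicitly through random sampling, so the method never needs subgradients of the nonsmooth objective; convergence is measured against Goldstein stationarity of the original function rather than stationarity of the smoothed surrogate.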
