One-Point Gradient-Free Methods for Composite Optimization with Applications to Distributed Optimization

13 July 2021
I. Stepanov
Artyom Y. Voronov
Aleksandr Beznosikov
Alexander Gasnikov
    FedML
arXiv:2107.05951
Abstract

This work is devoted to solving the composite optimization problem with a mixture oracle: for the smooth part of the problem we have access to the gradient, while for the non-smooth part only a one-point zero-order oracle is available. For this setup, we present a new method based on the sliding algorithm. Our method separates the oracle complexities and computes the gradient of one of the functions as rarely as possible. The paper also demonstrates the applicability of the new method to problems of distributed optimization and federated learning. Experimental results confirm the theory.
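The abstract describes the setting rather than the algorithm, so the sketch below only illustrates what a one-point zero-order oracle looks like in practice: a gradient estimate built from a single function evaluation at a randomly perturbed point. This is the standard spherical-smoothing estimator, not necessarily the paper's exact construction; the names (g, tau) and the quadratic test function are illustrative assumptions.

```python
import numpy as np

def one_point_grad(g, x, tau, rng):
    """One-point zero-order gradient estimate of g at x.

    Uses a single evaluation of g per call, matching the one-point
    oracle model from the abstract: the only information revealed is
    the value g(x + tau * e) at one randomly perturbed point.
    """
    d = x.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)               # uniform direction on the unit sphere
    return (d / tau) * g(x + tau * e) * e

# Sanity check: average many estimates for g(x) = ||x||^2 and compare
# with the true gradient 2x (for a quadratic this estimator is unbiased).
rng = np.random.default_rng(0)
g = lambda z: float(z @ z)
x = np.array([1.0, 2.0, 3.0])
est = np.mean([one_point_grad(g, x, tau=1.0, rng=rng) for _ in range(100_000)],
              axis=0)
print(est, 2 * x)  # the averaged estimate should be close to [2, 4, 6]
```

In the mixture-oracle setting of the paper, an estimate like this would stand in for the gradient of the non-smooth part, while the exact gradient of the smooth part is queried only occasionally via the sliding scheme.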
