Boosting One-Point Derivative-Free Online Optimization via Residual Feedback

14 October 2020
Yan Zhang, Yi Zhou, Kaiyi Ji, Michael M. Zavlanos

Papers citing "Boosting One-Point Derivative-Free Online Optimization via Residual Feedback"

A Zeroth-Order Momentum Method for Risk-Averse Online Convex Games
Zifan Wang, Yi Shen, Zachary I. Bell, Scott A. Nivison, Michael M. Zavlanos, Karl H. Johansson
06 Sep 2022

Recent Theoretical Advances in Non-Convex Optimization
Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard A. Gorbunov, Sergey Guminov, Dmitry Kamzolov, Innokentiy Shibaev
11 Dec 2020