ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits

14 June 2020
A. Akhavan, Massimiliano Pontil, Alexandre B. Tsybakov

Papers citing "Exploiting Higher Order Smoothness in Derivative-free Optimization and Continuous Bandits"

13 papers shown
Non-stationary Bandit Convex Optimization: A Comprehensive Study
Xiaoqi Liu, Dorian Baudry, Julian Zimmert, Patrick Rebeschini, Arya Akhavan
03 Jun 2025

Gradient-free stochastic optimization for additive models
A. Akhavan, Alexandre B. Tsybakov
03 Mar 2025

Optimal estimators of cross-partial derivatives and surrogates of functions
Matieyendou Lamboni
05 Jul 2024

How to Boost Any Loss Function
Richard Nock, Yishay Mansour
02 Jul 2024

Stochastic Zeroth-Order Optimization under Strongly Convexity and Lipschitz Hessian: Minimax Sample Complexity
Qian-long Yu, Yining Wang, Baihe Huang, Qi Lei, Jason D. Lee
28 Jun 2024

Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm
A. Akhavan, Evgenii Chzhen, Massimiliano Pontil, Alexandre B. Tsybakov
03 Jun 2023

Estimating the minimizer and the minimum value of a regression function under passive design
A. Akhavan, Davit Gogolashvili, Alexandre B. Tsybakov
29 Nov 2022

A gradient estimator via L1-randomization for online zero-order optimization with two point feedback
A. Akhavan, Evgenii Chzhen, Massimiliano Pontil, Alexandre B. Tsybakov
27 May 2022

Black-Box Generalization: Stability of Zeroth-Order Learning
Konstantinos E. Nikolakakis, Farzin Haddadpour, Dionysios S. Kalogerias, Amin Karbasi
14 Feb 2022

Distributed Zero-Order Optimization under Adversarial Noise
A. Akhavan, Massimiliano Pontil, Alexandre B. Tsybakov
01 Feb 2021

Smooth Bandit Optimization: Generalization to Hölder Space
Yusha Liu, Yining Wang, Aarti Singh
11 Dec 2020

Continuum-Armed Bandits: A Function Space Perspective
Shashank Singh
15 Oct 2020

A New One-Point Residual-Feedback Oracle For Black-Box Learning and Control
Yan Zhang, Yi Zhou, Kaiyi Ji, Michael M. Zavlanos
18 Jun 2020