A simple parameter-free and adaptive approach to optimization under a minimal local smoothness assumption

1 October 2018
Peter L. Bartlett
Victor Gabillon
Michal Valko
arXiv:1810.00997
Abstract

We study the problem of optimizing a function under a \emph{budgeted number of evaluations}. We only assume that the function is \emph{locally} smooth around one of its global optima. The difficulty of optimization is measured in terms of 1) the amount of \emph{noise} $b$ in the function evaluations and 2) the local smoothness, $d$, of the function; a smaller $d$ results in a smaller optimization error. We propose a new, simple, and parameter-free approach. First, for all values of $b$ and $d$, this approach recovers at least the state-of-the-art regret guarantees. Second, it obtains these results while being \textit{agnostic} to the values of both $b$ and $d$. This leads to the first algorithm that naturally adapts to an \textit{unknown} range of noise $b$, and yields significant improvements in the moderate- and low-noise regimes. Third, our approach obtains a remarkable improvement over the state-of-the-art SOO algorithm when the noise is very low, which includes the case of optimization under deterministic feedback ($b=0$). There, under our minimal local smoothness assumption, the improvement is of exponential magnitude and holds for a class of functions that covers the vast majority of functions that practitioners optimize ($d=0$). This algorithmic improvement is borne out in experiments, where we empirically observe faster convergence on common benchmarks.
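
For context, the abstract contrasts the proposed method with the SOO baseline under deterministic feedback ($b=0$). Below is a minimal, illustrative sketch of that baseline (hierarchical optimistic optimization with ternary partitioning on [0, 1]); it is not the paper's new parameter-free algorithm, and details such as the `h_max` depth schedule, the test function, and the cell-splitting rule are assumptions made for illustration only.

```python
import math

def soo_maximize(f, budget, h_max=lambda n: int(math.sqrt(n)) + 1):
    """Sketch of the SOO baseline (deterministic, noise-free evaluations)
    on the domain [0, 1] with ternary partitioning. Not the paper's
    parameter-free algorithm; the depth schedule h_max is an assumption."""
    # Leaves of the partition tree, grouped by depth: (left, right, value at center).
    leaves = {0: [(0.0, 1.0, f(0.5))]}
    evals = 1
    best_x, best_val = 0.5, leaves[0][0][2]

    while evals < budget:
        v_max = -math.inf
        for h in sorted(leaves.keys()):
            if h > h_max(evals) or not leaves[h]:
                continue
            # Best leaf at this depth (largest center value).
            i_best = max(range(len(leaves[h])), key=lambda i: leaves[h][i][2])
            left, right, val = leaves[h].pop(i_best)
            if val < v_max:
                leaves[h].append((left, right, val))
                continue  # only expand cells that are optimistic across depths
            v_max = val
            # Expand: split the cell into three children and evaluate their centers.
            third = (right - left) / 3.0
            for k in range(3):
                a, b = left + k * third, left + (k + 1) * third
                x = (a + b) / 2.0
                y = val if k == 1 else f(x)  # center child reuses the parent's value
                if k != 1:
                    evals += 1
                if y > best_val:
                    best_x, best_val = x, y
                leaves.setdefault(h + 1, []).append((a, b, y))
                if evals >= budget:
                    return best_x, best_val
    return best_x, best_val


# Example usage on a smooth multimodal test function, budget of 200 evaluations.
if __name__ == "__main__":
    f = lambda x: math.sin(13 * x) * math.sin(27 * x) / 2 + 0.5
    x_hat, f_hat = soo_maximize(f, budget=200)
    print(f"best point ~ {x_hat:.4f}, value ~ {f_hat:.4f}")
```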
