arXiv:1905.05917

Adaptivity and Optimality: A Universal Algorithm for Online Convex Optimization

15 May 2019
Guanghui Wang
Shiyin Lu
Lijun Zhang
Abstract

In this paper, we study adaptive online convex optimization, and aim to design a universal algorithm that achieves optimal regret bounds for multiple common types of loss functions. Existing universal methods are limited in the sense that they are optimal for only a subclass of loss functions. To address this limitation, we propose a novel online method, namely Maler, which enjoys the optimal O(\sqrt{T}), O(d\log T) and O(\log T) regret bounds for general convex, exponentially concave, and strongly convex functions, respectively. The essential idea is to run multiple types of learning algorithms with different learning rates in parallel, and utilize a meta algorithm to track the best one on the fly. Empirical results demonstrate the effectiveness of our method.
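
The "experts in parallel plus a meta algorithm" recipe can be illustrated with a toy sketch. The Python snippet below is a minimal illustration under simplifying assumptions, not the Maler algorithm itself: it runs several online gradient descent experts with different fixed step sizes and combines their iterates with a Hedge-style exponentially weighted meta-learner, whereas Maler uses expert algorithms tailored to each function class and a different surrogate loss for the meta step. All names (OGDExpert, universal_oco, meta_lr, etas) are illustrative.

import numpy as np

class OGDExpert:
    """Online gradient descent over the Euclidean ball of a given radius,
    run with one fixed step size eta (illustrative expert, not Maler's)."""
    def __init__(self, dim, eta, radius=1.0):
        self.x = np.zeros(dim)
        self.eta = eta
        self.radius = radius

    def predict(self):
        return self.x

    def update(self, grad):
        self.x = self.x - self.eta * grad
        norm = np.linalg.norm(self.x)
        if norm > self.radius:                      # project back onto the ball
            self.x *= self.radius / norm

def universal_oco(loss_and_grad, dim, T, etas=(0.01, 0.1, 1.0), meta_lr=0.5):
    """Run one OGD expert per step size in `etas` in parallel and combine
    their iterates with exponentially weighted (Hedge-style) meta weights.
    Each expert's own instantaneous loss drives its weight update."""
    experts = [OGDExpert(dim, eta) for eta in etas]
    log_w = np.zeros(len(experts))                  # meta-learner log-weights
    decisions = []
    for t in range(T):
        preds = np.array([e.predict() for e in experts])
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        x_t = w @ preds                             # meta decision for round t
        decisions.append(x_t)
        for i, e in enumerate(experts):
            loss_i, grad_i = loss_and_grad(t, e.predict())
            log_w[i] -= meta_lr * loss_i            # downweight poorly performing experts
            e.update(grad_i)
    return decisions

if __name__ == "__main__":
    # Toy losses f_t(x) = ||x - c_t||^2 with random targets c_t.
    rng = np.random.default_rng(0)
    targets = 0.3 * rng.standard_normal((200, 2))
    def loss_and_grad(t, x):
        diff = x - targets[t]
        return float(diff @ diff), 2.0 * diff
    xs = universal_oco(loss_and_grad, dim=2, T=200)
    print("final decision:", xs[-1])

In this toy run, the meta weights drift toward whichever step size happens to suit the observed loss sequence, which is the "track the best one on the fly" behaviour the abstract describes; Maler achieves the stated optimal bounds by a more careful choice of experts and meta update.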
