ResearchTrend.AI
PDE-Based Optimal Strategy for Unconstrained Online Learning

19 January 2022
Zhiyu Zhang
Ashok Cutkosky
I. Paschalidis
Abstract

Unconstrained Online Linear Optimization (OLO) is a practical problem setting for studying the training of machine learning models. Existing works have proposed a number of potential-based algorithms, but in general the design of these potential functions relies heavily on guessing. To streamline this workflow, we present a framework that generates new potential functions by solving a Partial Differential Equation (PDE). Specifically, when losses are 1-Lipschitz, our framework produces a novel algorithm with anytime regret bound $C\sqrt{T} + \|u\|\sqrt{2T}\left[\sqrt{\log(1+\|u\|/C)} + 2\right]$, where $C$ is a user-specified constant and $u$ is any comparator that is unknown and unbounded a priori. Such a bound attains an optimal loss-regret trade-off without the impractical doubling trick. Moreover, a matching lower bound shows that the leading order term, including the constant multiplier $\sqrt{2}$, is tight. To our knowledge, the proposed algorithm is the first to achieve such optimalities.
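The potential-based workflow the abstract refers to can be sketched in a few lines: the learner tracks the sum of negative past gradients $S_{t-1}$ and plays the $S$-derivative of a potential $V(t, S)$. The sketch below is an illustrative assumption, not the paper's algorithm: it uses a simple exponential-type potential $V(t,S) = \epsilon\sqrt{t}\,e^{S^2/(2t)}$ in the style of earlier parameter-free methods, whereas the paper's PDE-derived potential has a different closed form. The function names and the constant `eps` are hypothetical.

```python
import math

def potential_grad(t, S, eps=1.0):
    """S-derivative of the illustrative potential
    V(t, S) = eps * sqrt(t) * exp(S^2 / (2t)),
    i.e. dV/dS = eps * (S / sqrt(t)) * exp(S^2 / (2t))."""
    return eps * (S / math.sqrt(t)) * math.exp(S ** 2 / (2 * t))

def olo_run(grads, eps=1.0):
    """Generic potential-based unconstrained OLO loop.

    At round t the learner plays w_t = dV/dS evaluated at (t, S_{t-1}),
    where S_{t-1} is the negative sum of the gradients seen so far;
    grads are the (1-Lipschitz) loss gradients revealed after each play.
    """
    S = 0.0
    predictions = []
    for t, g in enumerate(grads, start=1):
        w = potential_grad(t, S, eps)
        predictions.append(w)
        S -= g  # accumulate negative gradients
    return predictions
```

Note the characteristic unconstrained behavior: starting from $S=0$ the first prediction is $0$, and a run of gradients pointing in one direction makes the iterates grow (roughly exponentially in $S^2/t$), which is how these methods compete with comparators $u$ of unknown magnitude.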
