
arXiv:1603.04190

Online Isotonic Regression

14 March 2016
W. Kotłowski
Wouter M. Koolen
Alan Malek
Abstract

We consider the online version of the isotonic regression problem. Given a set of linearly ordered points (e.g., on the real line), the learner must predict labels sequentially at adversarially chosen positions and is evaluated by her total squared loss compared against the best isotonic (non-decreasing) function in hindsight. We survey several standard online learning algorithms and show that none of them achieve the optimal regret exponent; in fact, most of them (including Online Gradient Descent, Follow the Leader and Exponential Weights) incur linear regret. We then prove that the Exponential Weights algorithm played over a covering net of isotonic functions has regret bounded by $O(T^{1/3} \log^{2/3}(T))$ and present a matching $\Omega(T^{1/3})$ lower bound on regret. We provide a computationally efficient version of this algorithm. We also analyze the noise-free case, in which the revealed labels are isotonic, and show that the bound can be improved to $O(\log T)$ or even to $O(1)$ (when the labels are revealed in isotonic order). Finally, we extend the analysis beyond squared loss and give bounds for log-loss and absolute loss.
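The paper's covering-net construction and its efficient variant are more refined than what fits here, but the core idea — running Exponential Weights with a finite net of isotonic functions as experts and predicting with their weighted mean — can be sketched in a toy form. Everything below (function names, the uniform grid discretization, the learning rate) is illustrative and not taken from the paper:

```python
import itertools
import math

def covering_net(n, K):
    """All non-decreasing length-n sequences with values in {0, 1/K, ..., 1}.
    Exponentially many in general; this brute-force net is for illustration only."""
    levels = [k / K for k in range(K + 1)]
    return [f for f in itertools.product(levels, repeat=n)
            if all(f[i] <= f[i + 1] for i in range(n - 1))]

def exp_weights_isotonic(stream, n, K=4, eta=0.5):
    """Exponential Weights over a covering net of isotonic functions.

    stream: iterable of (position, label) pairs with position in range(n) and
            label in [0, 1], revealed one round at a time by the adversary.
    eta:    learning rate; squared loss on [0,1] is eta-exp-concave for
            eta <= 1/2, which yields an O(log N) regret bound vs. the best
            expert for the weighted-mean prediction.
    Returns the algorithm's total squared loss.
    """
    experts = covering_net(n, K)
    logw = [0.0] * len(experts)          # log-weights, for numerical stability
    total_loss = 0.0
    for i, y in stream:
        m = max(logw)
        ws = [math.exp(lw - m) for lw in logw]
        Z = sum(ws)
        pred = sum(w * f[i] for w, f in zip(ws, experts)) / Z  # weighted mean
        total_loss += (pred - y) ** 2
        # multiplicative update: penalize each expert by its own squared loss
        logw = [lw - eta * (f[i] - y) ** 2 for lw, f in zip(logw, experts)]
    return total_loss
```

On a stream generated by an isotonic labeling, the mixture quickly concentrates on net functions close to the truth; the paper's contribution is choosing the net resolution and restarting schedule so that the regret against *all* isotonic functions is $O(T^{1/3}\log^{2/3}(T))$, which this fixed-grid sketch does not attempt.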
