
arXiv:1811.09955v3 (latest)
Online Newton Step Algorithm with Estimated Gradient

25 November 2018
Binbin Liu, Jundong Li, Yunquan Song, Xijun Liang, Ling Jian, Huan Liu
Abstract

Online learning with limited feedback (the bandit setting) addresses the problem where an online learner receives only partial feedback from the environment during learning. Under this setting, Flaxman et al. [8] extended Zinkevich's classical Online Gradient Descent (OGD) algorithm [29] by proposing the Online Gradient Descent with Expected Gradient (OGDEG) algorithm. Specifically, OGDEG uses a simple trick to approximate the gradient of the loss function f_t by evaluating it at a single point, and its expected regret is bounded by O(T^{5/6}) [8], where T is the number of rounds. Meanwhile, prior work has shown that, compared with first-order algorithms, second-order online learning algorithms such as Online Newton Step (ONS) [11] can significantly accelerate convergence. Motivated by this, this paper exploits second-order information to speed up the convergence of OGDEG. In particular, we extend the ONS algorithm with the expected-gradient trick and develop a novel second-order online learning algorithm, i.e., Online Newton Step with Expected Gradient (ONSEG). Theoretically, we show that the proposed ONSEG algorithm significantly reduces the expected regret of the OGDEG algorithm from O(T^{5/6}) to O(T^{2/3}) in the bandit feedback scenario. Empirically, we further demonstrate the advantages of the proposed algorithm on multiple real-world datasets.
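To make the idea concrete, below is a minimal, hedged sketch of the two ingredients the abstract describes: the Flaxman-style one-point gradient estimate (query the loss at a single perturbed point and rescale) plugged into a generic Online Newton Step update. It is not the paper's exact ONSEG construction; the parameters delta, gamma, eps, the quadratic toy loss, and the clipping used in place of a proper projection are all illustrative assumptions.

import numpy as np

def one_point_gradient(loss_value, u, delta, d):
    # Flaxman-style one-point estimator: (d / delta) * f(x + delta * u) * u
    # is an unbiased estimate of the gradient of a smoothed version of f.
    return (d / delta) * loss_value * u

def onseg_sketch(loss_oracle, d, T, delta=0.1, gamma=1.0, eps=1.0):
    # Illustrative sketch only; step sizes and projection are assumptions,
    # not the paper's exact ONSEG algorithm.
    x = np.zeros(d)
    A = eps * np.eye(d)            # second-order information matrix (Hessian proxy)
    for t in range(T):
        u = np.random.randn(d)
        u /= np.linalg.norm(u)     # uniform random direction on the unit sphere
        y = x + delta * u          # point actually played (bandit query)
        fval = loss_oracle(t, y)   # only the loss value is observed, not the gradient
        g = one_point_gradient(fval, u, delta, d)
        A += np.outer(g, g)        # rank-one update, as in Online Newton Step
        x = x - (1.0 / gamma) * np.linalg.solve(A, g)
        x = np.clip(x, -1.0, 1.0)  # crude stand-in for projection onto the feasible set
    return x

if __name__ == "__main__":
    # Toy usage: track a fixed target under a quadratic loss (assumed example).
    target = 0.5 * np.ones(5)
    final = onseg_sketch(lambda t, y: np.sum((y - target) ** 2), d=5, T=2000)
    print(final)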

View on arXiv