
arXiv:1802.06357
Convergence of Online Mirror Descent

18 February 2018
Yunwen Lei
Ding-Xuan Zhou
Abstract

In this paper we consider online mirror descent (OMD) algorithms, a class of scalable online learning algorithms that exploit the geometric structure of the data through mirror maps. Necessary and sufficient conditions are presented, in terms of the step size sequence $\{\eta_t\}_t$, for the convergence of an OMD algorithm with respect to the expected Bregman distance induced by the mirror map. In the case of positive variances, the condition is $\lim_{t\to\infty}\eta_t = 0$ together with $\sum_{t=1}^{\infty}\eta_t = \infty$. In the case of zero variances it reduces to $\sum_{t=1}^{\infty}\eta_t = \infty$, and linear convergence may then be achieved by taking a constant step size sequence. A sufficient condition for almost sure convergence is also given. We establish tight error bounds under mild conditions on the mirror map, the loss function, and the regularizer. Our results are achieved by a novel analysis of the one-step progress of the OMD algorithm, using the smoothness and strong convexity of the mirror map and the loss function.
