
Convergence and Complexity of Stochastic Block Majorization-Minimization

5 January 2022
Hanbaek Lyu
arXiv:2201.01652 (abs | PDF | HTML)
Abstract

Stochastic majorization-minimization (SMM) is an online extension of the classical principle of majorization-minimization, which consists of sampling i.i.d. data points from a fixed data distribution and minimizing a recursively defined majorizing surrogate of an objective function. In this paper, we introduce stochastic block majorization-minimization, in which the surrogates need only be block multi-convex and a single block is optimized at a time within a diminishing radius. By relaxing the standard strong convexity requirement on surrogates in SMM, our framework applies more widely, including to online CANDECOMP/PARAFAC (CP) dictionary learning, and yields greater computational efficiency, especially when the problem dimension is large. We provide an extensive convergence analysis of the proposed algorithm under possibly dependent data streams, relaxing the standard i.i.d. assumption on data samples. We show that the proposed algorithm converges almost surely to the set of stationary points of a nonconvex objective under constraints, at a rate $O((\log n)^{1+\epsilon}/n^{1/2})$ for the empirical loss function and $O((\log n)^{1+\epsilon}/n^{1/4})$ for the expected loss function, where $n$ denotes the number of data samples processed. Under an additional assumption, the latter convergence rate can be improved to $O((\log n)^{1+\epsilon}/n^{1/2})$. Our results provide the first convergence rate bounds for various online matrix and tensor decomposition algorithms under a general Markovian data setting.
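To make the update rule concrete, here is a minimal Python sketch of the stochastic block MM loop on a classical special case, online matrix factorization in the spirit of online dictionary learning. It is an illustration under stated assumptions, not the paper's exact algorithm: the quadratic surrogate statistics A and B, the ridge penalty lam, the weight schedule w_n = 1/n, and the radius schedule c/sqrt(n) are all illustrative choices. The loss ||y - Dh||^2 is block multi-convex (convex in the code h for fixed D, and in each column of D for fixed h), and each column of D is minimized one at a time within a diminishing radius, as described in the abstract.

import numpy as np

# Sketch of stochastic block majorization-minimization (SBMM) for online
# matrix factorization: a stream of vectors y ~ D h, with the dictionary D
# updated one column (block) at a time within a shrinking trust region.
def sbmm_matrix_factorization(stream, d, r, n_iter=500, lam=0.1, c_radius=1.0, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((d, r))
    D /= np.maximum(np.linalg.norm(D, axis=0), 1.0)  # columns in the unit ball
    A = np.zeros((r, r))  # averaged h h^T  (surrogate curvature)
    B = np.zeros((d, r))  # averaged y h^T  (surrogate linear term)
    for n in range(1, n_iter + 1):
        y = next(stream)
        # Code block: minimize the surrogate in h (ridge regression, closed form).
        h = np.linalg.solve(D.T @ D + lam * np.eye(r), D.T @ y)
        # Recursive surrogate update with weight w_n = 1/n:
        #   g_n(D) = (1 - w_n) g_{n-1}(D) + w_n (||y_n - D h_n||^2 + lam ||h_n||^2),
        # a quadratic in D, stored through the sufficient statistics A and B.
        w = 1.0 / n
        A = (1 - w) * A + w * np.outer(h, h)
        B = (1 - w) * B + w * np.outer(y, h)
        # Dictionary blocks: minimize the surrogate in each column of D,
        # with the step clipped to a diminishing radius around the old column.
        radius = c_radius / np.sqrt(n)
        for j in range(r):
            if A[j, j] < 1e-12:
                continue  # no curvature in this block yet; skip
            d_old = D[:, j].copy()
            d_new = d_old + (B[:, j] - D @ A[:, j]) / A[j, j]  # exact block minimizer
            step = d_new - d_old
            norm = np.linalg.norm(step)
            if norm > radius:  # diminishing-radius constraint
                step *= radius / norm
            D[:, j] = d_old + step
            D[:, j] /= max(np.linalg.norm(D[:, j]), 1.0)  # keep column in unit ball
    return D

# Toy usage: recover a hidden 20 x 5 dictionary from noisy random mixtures.
def toy_stream(d=20, r=5, seed=1):
    rng = np.random.default_rng(seed)
    D_true = rng.standard_normal((d, r))
    while True:
        yield D_true @ rng.standard_normal(r) + 0.01 * rng.standard_normal(d)

D_hat = sbmm_matrix_factorization(toy_stream(), d=20, r=5)

In this sketch, the diminishing-radius clip plays the role that strong convexity of the surrogate usually plays in SMM: it keeps consecutive iterates close even when a block's quadratic is nearly degenerate (small A[j, j]).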
