  3. 2003.13332
Stochastic Proximal Gradient Algorithm with Minibatches. Application to Large Scale Learning Models

30 March 2020
A. Pătraşcu
C. Paduraru
Paul Irofti
Abstract

Stochastic optimization lies at the core of most statistical learning models. Recent development of stochastic algorithmic tools has focused significantly on proximal gradient iterations as an efficient approach for nonsmooth (composite) population risk functions. The complexity of finding optimal predictors by minimizing regularized risk is largely understood for simple regularizations such as $\ell_1/\ell_2$ norms. However, more complex properties desired for the predictor necessitate more difficult regularizers, as used in grouped lasso or graph trend filtering. In this chapter we develop and analyze minibatch variants of the stochastic proximal gradient algorithm for general composite objective functions with stochastic nonsmooth components. We provide iteration complexity bounds for constant and variable stepsize policies, showing that, for minibatch size $N$, $\epsilon$-suboptimality in expected quadratic distance to the optimal solution is attained after $\mathcal{O}\!\left(\frac{1}{N\epsilon}\right)$ iterations. Numerical tests on $\ell_2$-regularized SVMs and parametric sparse representation problems confirm the theoretical behaviour and surpass minibatch SGD performance.
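To make the algorithmic idea concrete, the following is a minimal sketch of a minibatch stochastic proximal gradient iteration, applied to a simple $\ell_1$-regularized least-squares (lasso-type) problem rather than the paper's more general setting with stochastic nonsmooth components; the function names, stepsize schedule, and problem sizes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def minibatch_prox_sgd(grad_fi, prox, x0, n_samples, batch_size=32,
                       step0=0.1, n_iters=1000, decay=True, rng=None):
    """Illustrative minibatch stochastic proximal gradient loop.

    grad_fi(x, idx) -> average gradient of the smooth losses indexed by idx
    prox(v, step)   -> proximal operator of the nonsmooth regularizer
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    for k in range(n_iters):
        # constant vs. variable stepsize policy (hypothetical 1/sqrt(k) decay)
        step = step0 / np.sqrt(k + 1) if decay else step0
        idx = rng.choice(n_samples, size=batch_size, replace=False)
        g = grad_fi(x, idx)              # minibatch gradient of the smooth part
        x = prox(x - step * g, step)     # proximal step on the nonsmooth part
    return x

if __name__ == "__main__":
    # Toy problem: min_x (1/2n)||A x - b||^2 + lam * ||x||_1
    rng = np.random.default_rng(0)
    n, d, lam = 500, 50, 0.1
    A = rng.standard_normal((n, d))
    x_true = np.zeros(d); x_true[:5] = 1.0
    b = A @ x_true + 0.1 * rng.standard_normal(n)

    def grad_fi(x, idx):
        Ai = A[idx]
        return Ai.T @ (Ai @ x - b[idx]) / len(idx)

    x_hat = minibatch_prox_sgd(grad_fi, lambda v, t: soft_threshold(v, lam * t),
                               np.zeros(d), n_samples=n, batch_size=64, n_iters=2000)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```

Larger minibatch sizes average out gradient noise within each iteration, which is the mechanism behind the $\mathcal{O}(\frac{1}{N\epsilon})$ iteration complexity stated in the abstract; this sketch only shows the simple composite case where the regularizer is deterministic.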
