
Stochastic Multi-level Composition Optimization Algorithms with Level-Independent Convergence Rates

24 August 2020
Krishnakumar Balasubramanian
Saeed Ghadimi
A. Nguyen
Abstract

In this paper, we study smooth stochastic multi-level composition optimization problems, where the objective function is a nested composition of T functions. We assume access to noisy evaluations of the functions and their gradients through a stochastic first-order oracle. For solving this class of problems, we propose two algorithms using moving-average stochastic estimates, and analyze their convergence to an ε-stationary point of the problem. We show that the first algorithm, which is a generalization of [GhaRuswan20] to the T-level case, can achieve a sample complexity of O(1/ε^6) by using mini-batches of samples in each iteration. By modifying this algorithm using linearized stochastic estimates of the function values, we improve the sample complexity to O(1/ε^4). This modification not only removes the requirement of a mini-batch of samples in each iteration, but also makes the algorithm parameter-free and easy to implement. To the best of our knowledge, this is the first time that such an online algorithm, designed for the (un)constrained multi-level setting, attains the same sample complexity as the smooth single-level setting under standard assumptions (unbiasedness and bounded second moments) on the stochastic first-order oracle.
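To give a rough sense of the moving-average idea described above, here is a minimal sketch (not the paper's algorithm) on a hypothetical two-level toy problem of minimizing f1(f2(x)): the inner function value is tracked by an exponential moving average built from noisy oracle calls, and the stochastic gradient is formed by the chain rule through that averaged estimate. The quadratic test functions, noise levels, and step-size/averaging parameters below are illustrative assumptions only.

import numpy as np

# Toy two-level composition: f2(x) = A x, f1(u) = 0.5 * ||u||^2,
# both observed only through noisy first-order oracles.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))

def oracle_f2(x, noise=0.1):
    # Noisy value and (here exact) Jacobian of the inner map.
    return A @ x + noise * rng.standard_normal(3), A

def oracle_grad_f1(u, noise=0.1):
    # Noisy gradient of the outer function.
    return u + noise * rng.standard_normal(3)

x = rng.standard_normal(5)
u_avg = oracle_f2(x)[0]      # initial estimate of the inner value f2(x)
beta, alpha = 0.5, 0.05      # averaging weight and step size (illustrative)

for k in range(200):
    u_k, J_k = oracle_f2(x)
    u_avg = (1 - beta) * u_avg + beta * u_k    # moving-average inner estimate
    g = J_k.T @ oracle_grad_f1(u_avg)          # chain-rule stochastic gradient
    x = x - alpha * g                          # gradient step

print("final objective estimate:", 0.5 * np.linalg.norm(A @ x) ** 2)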
