Achieving Linear Speedup in Decentralized Stochastic Compositional Minimax Optimization

25 July 2023
Hongchang Gao

Papers citing "Achieving Linear Speedup in Decentralized Stochastic Compositional Minimax Optimization"

5 / 5 papers shown
Decentralized Multi-Level Compositional Optimization Algorithms with Level-Independent Convergence Rate
Hongchang Gao
06 Jun 2023

GT-STORM: Taming Sample, Communication, and Memory Complexities in Decentralized Non-Convex Learning
Xin Zhang, Jia Liu, Zhengyuan Zhu, Elizabeth S. Bentley
04 May 2021

DecentLaM: Decentralized Momentum SGD for Large-batch Deep Training
Kun Yuan, Yiming Chen, Xinmeng Huang, Yingya Zhang, Pan Pan, Yinghui Xu, W. Yin
24 Apr 2021

Solving Stochastic Compositional Optimization is Nearly as Easy as Solving Stochastic Optimization
Tianyi Chen, Yuejiao Sun, W. Yin
25 Aug 2020

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
Chelsea Finn, Pieter Abbeel, Sergey Levine
09 Mar 2017