Stagewise Training Accelerates Convergence of Testing Error Over SGD

10 December 2018 · arXiv:1812.03934
Zhuoning Yuan, Yan Yan, R. L. Jin, Tianbao Yang

Papers citing "Stagewise Training Accelerates Convergence of Testing Error Over SGD"

3 papers shown
Effective Federated Adaptive Gradient Methods with Non-IID Decentralized Data
Qianqian Tong, Guannan Liang, J. Bi
FedML · 14 Sep 2020

The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure For Least Squares
Rong Ge, Sham Kakade, Rahul Kidambi, Praneeth Netrapalli
29 Apr 2019

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt
16 Aug 2016