arXiv:2003.04339 · Cited By
Revisiting SGD with Increasingly Weighted Averaging: Optimization and Generalization Perspectives

9 March 2020
Zhishuai Guo, Yan Yan, Tianbao Yang
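The averaging scheme named in the title can be illustrated with a short sketch. This is not the paper's exact algorithm or hyperparameters: the polynomial weight w_t = (t+1)^alpha, the step size, and the toy quadratic objective below are all illustrative assumptions.

```python
import numpy as np

def sgd_increasing_average(grad, x0, T, eta=0.05, alpha=1.0):
    """SGD returning a polynomially weighted average of the iterates.

    Weight w_t = (t+1)**alpha grows with t, so later (typically more
    accurate) iterates dominate the average, in contrast to uniform
    (Polyak-Ruppert) averaging. The schedule is an assumption for
    illustration, not the paper's exact choice.
    """
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    wsum = 0.0
    for t in range(T):
        x = x - eta * grad(x)           # plain SGD step
        w = (t + 1) ** alpha            # increasing weight
        wsum += w
        avg += (w / wsum) * (x - avg)   # running weighted average
    return avg

# Toy usage: minimize f(x) = 0.5 * ||x||^2 from noisy gradients.
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
x_bar = sgd_increasing_average(noisy_grad, np.ones(5), T=2000)
print(np.linalg.norm(x_bar))  # near zero
```

The incremental update `avg += (w / wsum) * (x - avg)` keeps a running weighted mean without storing past iterates, since the new mean equals the old mean plus a w/W-weighted correction toward the latest iterate.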

Papers citing "Revisiting SGD with Increasingly Weighted Averaging: Optimization and Generalization Perspectives"

4 of 4 citing papers shown
Symmetric Mean-field Langevin Dynamics for Distributional Minimax Problems
International Conference on Learning Representations (ICLR), 2023
Juno Kim, Kakei Yamamoto, Kazusato Oko, Zhuoran Yang, Taiji Suzuki
02 Dec 2023
Convergence of Adam for Non-convex Objectives: Relaxed Hyperparameters and Non-ergodic Case
Machine-mediated learning (ML), 2023
Meixuan He, Yuqing Liang, Jinlan Liu, Dongpo Xu
20 Jul 2023
On the Convergence of Step Decay Step-Size for Stochastic Optimization
Neural Information Processing Systems (NeurIPS), 2021
Xiaoyu Wang, Sindri Magnússon, M. Johansson
18 Feb 2021
Gradient Descent Averaging and Primal-dual Averaging for Strongly Convex Optimization
AAAI Conference on Artificial Intelligence (AAAI), 2020
Wei Tao, Wei Li, Zhisong Pan, Qing Tao
29 Dec 2020