Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints
Shaojie Li, Yong Liu
arXiv:2107.08686 (19 July 2021)
Papers citing "Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints" (9 of 9 shown):
- Stability and Sharper Risk Bounds with Convergence Rate O(1/n^2). Bowei Zhu, Shaojie Li, Yong Liu. 13 Oct 2024.
- Towards Sharper Risk Bounds for Minimax Problems. Bowei Zhu, Shaojie Li, Yong Liu. 11 Oct 2024.
- Convex SGD: Generalization Without Early Stopping. Julien Hendrickx, A. Olshevsky. 08 Jan 2024. [MLT, LRM]
- Towards Understanding the Generalization of Graph Neural Networks. Huayi Tang, Y. Liu. 14 May 2023. [GNN, AI4CE]
- Sharper Utility Bounds for Differentially Private Models. Yilin Kang, Yong Liu, Jian Li, Weiping Wang. 22 Apr 2022. [FedML]
- Learning Rates for Nonconvex Pairwise Learning. Shaojie Li, Yong Liu. 09 Nov 2021.
- Data Heterogeneity Differential Privacy: From Theory to Algorithm. Yilin Kang, Jian Li, Yong Liu, Weiping Wang. 20 Feb 2020.
- Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition. Hamed Karimi, J. Nutini, Mark W. Schmidt. 16 Aug 2016.
- Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes. Ohad Shamir, Tong Zhang. 08 Dec 2012.