A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization
Zhize Li, Jian Li
arXiv:1802.04477, 13 February 2018

Papers citing "A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization" (20 papers shown)
Probabilistic Guarantees of Stochastic Recursive Gradient in Non-Convex Finite Sum Problems
Yanjie Zhong, Jiaqi Li, Soumendra Lahiri
29 Jan 2024

Convergence of Nonconvex PnP-ADMM with MMSE Denoisers
Chicago Park, S. Shoushtari, Weijie Gan, Ulugbek S. Kamilov
30 Nov 2023

A Coefficient Makes SVRG Effective
Yida Yin, Zhiqiu Xu, Zhiyuan Li, Trevor Darrell, Zhuang Liu
09 Nov 2023

Demystifying the Myths and Legends of Nonconvex Convergence of SGD
Aritra Dutta, El Houcine Bergou, Soumia Boucherouite, Nicklas Werge, M. Kandemir, Xin Li
19 Oct 2023

Stochastic Variable Metric Proximal Gradient with variance reduction for non-convex composite optimization
G. Fort, Eric Moulines
02 Jan 2023

Distributed Policy Gradient with Variance Reduction in Multi-Agent Reinforcement Learning
Xiaoxiao Zhao, Jinlong Lei, Li Li, Jie-bin Chen
25 Nov 2021 (OffRL)

Distributed stochastic proximal algorithm with random reshuffling for non-smooth finite-sum optimization
Xia Jiang, Xianlin Zeng, Jian Sun, Jie Chen, Lihua Xie
06 Nov 2021

Provably Faster Algorithms for Bilevel Optimization
Junjie Yang, Kaiyi Ji, Yingbin Liang
08 Jun 2021

Greedy-GQ with Variance Reduction: Finite-time Analysis and Improved Complexity
Shaocong Ma, Ziyi Chen, Yi Zhou, Shaofeng Zou
30 Mar 2021

ANITA: An Optimal Loopless Accelerated Variance-Reduced Gradient Method
Zhize Li
21 Mar 2021

Variance Reduced Training with Stratified Sampling for Forecasting Models
Yucheng Lu, Youngsuk Park, Lifan Chen, Bernie Wang, Christopher De Sa, Dean Phillips Foster
02 Mar 2021 (AI4TS)

PAGE: A Simple and Optimal Probabilistic Gradient Estimator for Nonconvex Optimization
Zhize Li, Hongyan Bao, Xiangliang Zhang, Peter Richtárik
25 Aug 2020 (ODL)

Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization
Samuel Horváth, Lihua Lei, Peter Richtárik, Michael I. Jordan
13 Feb 2020

History-Gradient Aided Batch Size Adaptation for Variance Reduced Algorithms
Kaiyi Ji, Zhe Wang, Bowen Weng, Yi Zhou, Wei Zhang, Yingbin Liang
21 Oct 2019 (ODL)

Sample Efficient Policy Gradient Methods with Recursive Variance Reduction
Pan Xu, F. Gao, Quanquan Gu
18 Sep 2019

Stabilized SVRG: Simple Variance Reduction for Nonconvex Optimization
Rong Ge, Zhize Li, Weiyao Wang, Xiang Wang
01 May 2019

ProxSARAH: An Efficient Algorithmic Framework for Stochastic Composite Nonconvex Optimization
Nhan H. Pham, Lam M. Nguyen, Dzung Phan, Quoc Tran-Dinh
15 Feb 2019

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016 (ODL)

Linear Convergence of Gradient and Proximal-Gradient Methods Under the Polyak-Łojasiewicz Condition
Hamed Karimi, J. Nutini, Mark W. Schmidt
16 Aug 2016

A Proximal Stochastic Gradient Method with Progressive Variance Reduction
Lin Xiao, Tong Zhang
19 Mar 2014 (ODL)