arXiv:1906.11985 (v3, latest)
Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond
Oliver Hinder, Aaron Sidford, N. Sohoni
27 June 2019
Papers citing "Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond" (39 papers):
- Minimisation of Quasar-Convex Functions Using Random Zeroth-Order Oracles. Amir Ali Farzin, Yuen-Man Pun, Iman Shames. 04 May 2025.
- Effect-driven interpretation: Functors for natural language composition. Dylan Bumford, Simon Charlow. 01 Apr 2025.
- Expected Variational Inequalities. B. Zhang, Ioannis Anagnostides, Emanuel Tewolde, Ratip Emin Berker, Gabriele Farina, Vincent Conitzer, Tuomas Sandholm. 25 Feb 2025.
- Deep Loss Convexification for Learning Iterative Models. Ziming Zhang, Yuping Shao, Yiqing Zhang, Fangzhou Lin, Haichong K. Zhang, Elke Rundensteiner. 16 Nov 2024.
- Nesterov acceleration in benignly non-convex landscapes. Kanan Gupta, Stephan Wojtowytsch. 10 Oct 2024.
- Online Non-Stationary Stochastic Quasar-Convex Optimization. Yuen-Man Pun, Iman Shames. 04 Jul 2024.
- Demystifying SGD with Doubly Stochastic Gradients. Kyurae Kim, Joohwan Ko, Yian Ma, Jacob R. Gardner. 03 Jun 2024.
- How to Make the Gradients Small Privately: Improved Rates for Differentially Private Non-Convex Optimization. Andrew Lowy, Jonathan R. Ullman, Stephen J. Wright. 17 Feb 2024.
- Mean-field underdamped Langevin dynamics and its spacetime discretization. Qiang Fu, Ashia Wilson. 26 Dec 2023.
- Differentially Private Non-Convex Optimization under the KL Condition with Optimal Rates. Michael Menart, Enayat Ullah, Raman Arora, Raef Bassily, Cristóbal Guzmán. 22 Nov 2023.
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods. Constantine Caramanis, Dimitris Fotakis, Alkis Kalavasis, Vasilis Kontonis, Christos Tzamos. 08 Oct 2023.
- Invex Programs: First Order Algorithms and Their Convergence. Adarsh Barik, S. Sra, Jean Honorio. 10 Jul 2023.
- Communication-Efficient Gradient Descent-Ascent Methods for Distributed Variational Inequalities: Unified Analysis and Local Updates. Siqi Zhang, S. Choudhury, Sebastian U. Stich, Nicolas Loizou. 08 Jun 2023.
- Aiming towards the minimizers: fast convergence of SGD for overparametrized problems. Chaoyue Liu, Dmitriy Drusvyatskiy, M. Belkin, Damek Davis, Yi-An Ma. 05 Jun 2023.
- PRISE: Demystifying Deep Lucas-Kanade with Strongly Star-Convex Constraints for Multimodel Image Alignment. Yiqing Zhang, Xinming Huang, Ziming Zhang. 21 Mar 2023.
- Practical and Matching Gradient Variance Bounds for Black-Box Variational Bayesian Inference. Kyurae Kim, Kaiwen Wu, Jisu Oh, Jacob R. Gardner. 18 Mar 2023.
- Continuized Acceleration for Quasar Convex Functions in Non-Convex Optimization. Jun-Kun Wang, Andre Wibisono. 15 Feb 2023.
- DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule. Maor Ivgi, Oliver Hinder, Y. Carmon. 08 Feb 2023.
- Accelerated Riemannian Optimization: Handling Constraints with a Prox to Bound Geometric Penalties. David Martínez-Rubio, Sebastian Pokutta. 26 Nov 2022.
- Spectral Regularization Allows Data-frugal Learning over Combinatorial Spaces. Amirali Aghazadeh, Nived Rajaraman, Tony Tu, Kannan Ramchandran. 05 Oct 2022.
- On the Convergence of AdaGrad(Norm) on R^d: Beyond Convexity, Non-Asymptotic Rate and Acceleration. Zijian Liu, Ta Duy Nguyen, Alina Ene, Huy Le Nguyen. 29 Sep 2022.
- SP2: A Second Order Stochastic Polyak Method. Shuang Li, W. Swartworth, Martin Takáč, Deanna Needell, Robert Mansel Gower. 17 Jul 2022.
- On the Convergence to a Global Solution of Shuffling-Type Gradient Algorithms. Lam M. Nguyen, Trang H. Tran. 13 Jun 2022.
- Special Properties of Gradient Descent with Large Learning Rates. Amirkeivan Mohtashami, Martin Jaggi, Sebastian U. Stich. 30 May 2022.
- Sharper Utility Bounds for Differentially Private Models. Yilin Kang, Yong Liu, Jian Li, Weiping Wang. 22 Apr 2022.
- A Local Convergence Theory for the Stochastic Gradient Descent Method in Non-Convex Optimization With Non-isolated Local Minima. Tae-Eon Ko, Xiantao Li. 21 Mar 2022.
- Federated Minimax Optimization: Improved Convergence Analyses and Algorithms. Pranay Sharma, Rohan Panda, Gauri Joshi, P. Varshney. 09 Mar 2022.
- Tackling benign nonconvexity with smoothing and stochastic gradients. Harsh Vardhan, Sebastian U. Stich. 18 Feb 2022.
- Towards Noise-adaptive, Problem-adaptive (Accelerated) Stochastic Gradient Descent. Sharan Vaswani, Benjamin Dubois-Taine, Reza Babanezhad. 21 Oct 2021.
- Improved Learning Rates for Stochastic Optimization: Two Theoretical Viewpoints. Shaojie Li, Yong Liu. 19 Jul 2021.
- Stochastic Polyak Stepsize with a Moving Target. Robert Mansel Gower, Aaron Defazio, Michael G. Rabbat. 22 Jun 2021.
- Recent Theoretical Advances in Non-Convex Optimization. Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard A. Gorbunov, Sergey Guminov, Dmitry Kamzolov, Innokentiy Shibaev. 11 Dec 2020.
- Global Riemannian Acceleration in Hyperbolic and Spherical Spaces. David Martínez-Rubio. 07 Dec 2020.
- Persistent Reductions in Regularized Loss Minimization for Variable Selection. Amin Jalali. 30 Nov 2020.
- Towards Optimal Problem Dependent Generalization Error Bounds in Statistical Learning Theory. Yunbei Xu, A. Zeevi. 12 Nov 2020.
- On The Convergence of First Order Methods for Quasar-Convex Optimization. Jikai Jin. 10 Oct 2020.
- Quickly Finding a Benign Region via Heavy Ball Momentum in Non-Convex Optimization. Jun-Kun Wang, Jacob D. Abernethy. 04 Oct 2020.
- SGD for Structured Nonconvex Functions: Learning Rates, Minibatching and Interpolation. Robert Mansel Gower, Othmane Sebbouh, Nicolas Loizou. 18 Jun 2020.
- The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication. Sebastian U. Stich, Sai Praneeth Karimireddy. 11 Sep 2019.