Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate

1 November 2016
Aryan Mokhtari, Mert Gurbuzbalaban, Alejandro Ribeiro

Papers citing "Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate"

19 papers
Shuffling Heuristic in Variational Inequalities: Establishing New Convergence Guarantees
Daniil Medyakov, Gleb Molodtsov, Grigoriy Evseev, Egor Petrov, Aleksandr Beznosikov
04 Sep 2025
Adjusted Shuffling SARAH: Advancing Complexity Analysis via Dynamic Gradient Weighting
Duc Toan Nguyen, Trang H. Tran, Lam M. Nguyen
14 Jun 2025
Variance Reduction Methods Do Not Need to Compute Full Gradients: Improved Efficiency through Shuffling
Daniil Medyakov, Gleb Molodtsov, S. Chezhegov, Alexey Rebrikov, Aleksandr Beznosikov
20 Feb 2025
Beyond adaptive gradient: Fast-Controlled Minibatch Algorithm for large-scale optimization
Corrado Coppola, Lorenzo Papa, Irene Amerini, L. Palagi
24 Nov 2024
High Probability Guarantees for Random Reshuffling
Hengxu Yu, Xiao Li
20 Nov 2023
SPIRAL: A superlinearly convergent incremental proximal algorithm for nonconvex finite sum minimization. Computational Optimization and Applications (Comput. Optim. Appl.), 2022.
Pourya Behmandpoor, P. Latafat, Andreas Themelis, Marc Moonen, Panagiotis Patrinos
17 Jul 2022
Federated Random Reshuffling with Compression and Variance Reduction
Grigory Malinovsky, Peter Richtárik
08 May 2022
Random-reshuffled SARAH does not need a full gradient computations. Optimization Letters (Optim. Lett.), 2021.
Aleksandr Beznosikov, Martin Takáč
26 Nov 2021
L-DQN: An Asynchronous Limited-Memory Distributed Quasi-Newton Method
Bugra Can, Saeed Soori, M. Dehnavi, Mert Gurbuzbalaban
20 Aug 2021
Improved Analysis and Rates for Variance Reduction under Without-replacement Sampling Orders
Xinmeng Huang, Kun Yuan, Xianghui Mao, W. Yin
25 Apr 2021
Random Reshuffling with Variance Reduction: New Analysis and Better Rates. Conference on Uncertainty in Artificial Intelligence (UAI), 2021.
Grigory Malinovsky, Alibek Sailanbayev, Peter Richtárik
19 Apr 2021
Cyclic Coordinate Dual Averaging with Extrapolation. SIAM Journal on Optimization (SIAM J. Optim.), 2021.
Chaobing Song, Jelena Diakonikolas
26 Feb 2021
A fast randomized incremental gradient method for decentralized non-convex optimization
Ran Xin, U. Khan, S. Kar
07 Nov 2020
Advances in Asynchronous Parallel and Distributed Optimization. Proceedings of the IEEE (Proc. IEEE), 2020.
Mahmoud Assran, Arda Aytekin, Hamid Reza Feyzmahdavian, M. Johansson, Michael G. Rabbat
24 Jun 2020
Stochastic Newton and Cubic Newton Methods with Simple Local Linear-Quadratic Rates
D. Kovalev, Konstantin Mishchenko, Peter Richtárik
03 Dec 2019
A Distributed Flexible Delay-tolerant Proximal Gradient Algorithm
Konstantin Mishchenko, F. Iutzeler, J. Malick
25 Jun 2018
Curvature-aided Incremental Aggregated Gradient Method. Allerton Conference on Communication, Control, and Computing (Allerton), 2017.
Hoi-To Wai, Wei Shi, A. Nedić, Anna Scaglione
24 Oct 2017
IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate. SIAM Journal on Optimization (SIAM J. Optim.), 2017.
Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro
02 Feb 2017
A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning
Aryan Mokhtari, Alec Koppel, Alejandro Ribeiro
15 Jun 2016