ResearchTrend.AI
Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory
Peter Richtárik, Martin Takáč
arXiv:1706.01108 · 4 June 2017

Papers citing "Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory"

17 papers shown

  • Acceleration of stochastic gradient descent with momentum by averaging: finite-sample rates and asymptotic normality
    Kejie Tang, Weidong Liu, Yichen Zhang, Xi Chen (28 May 2023)
  • Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions
    S. Choudhury, Eduard A. Gorbunov, Nicolas Loizou (27 Feb 2023)
  • Distributed Stochastic Optimization under a General Variance Condition
    Kun-Yen Huang, Xiao Li, Shin-Yi Pu (30 Jan 2023)
  • ALS: Augmented Lagrangian Sketching Methods for Linear Systems
    M. Morshed (12 Aug 2022)
  • Towards Practical Large-scale Randomized Iterative Least Squares Solvers through Uncertainty Quantification
    Nathaniel Pritchard, V. Patel (09 Aug 2022)
  • Distributed Newton-Type Methods with Communication Compression and Bernoulli Aggregation
    Rustem Islamov, Xun Qian, Slavomír Hanzely, M. Safaryan, Peter Richtárik (07 Jun 2022)
  • Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods
    Aleksandr Beznosikov, Eduard A. Gorbunov, Hugo Berard, Nicolas Loizou (15 Feb 2022)
  • Solving, Tracking and Stopping Streaming Linear Inverse Problems
    Nathaniel Pritchard, V. Patel (15 Jan 2022)
  • DSAG: A mixed synchronous-asynchronous iterative method for straggler-resilient learning
    A. Severinson, E. Rosnes, S. E. Rouayheb, Alexandre Graell i Amat (27 Nov 2021)
  • Stochastic Extragradient: General Analysis and Improved Rates
    Eduard A. Gorbunov, Hugo Berard, Gauthier Gidel, Nicolas Loizou (16 Nov 2021)
  • Optimization for Supervised Machine Learning: Randomized Algorithms for Data and Parameters
    Filip Hanzely (26 Aug 2020)
  • SGD for Structured Nonconvex Functions: Learning Rates, Minibatching and Interpolation
    Robert Mansel Gower, Othmane Sebbouh, Nicolas Loizou (18 Jun 2020)
  • Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence
    Nicolas Loizou, Sharan Vaswani, I. Laradji, Simon Lacoste-Julien (24 Feb 2020)
  • Better Theory for SGD in the Nonconvex World
    Ahmed Khaled, Peter Richtárik (09 Feb 2020)
  • Stochastic Proximal Langevin Algorithm: Potential Splitting and Nonasymptotic Rates
    Adil Salim, D. Kovalev, Peter Richtárik (28 May 2019)
  • SEGA: Variance Reduction via Gradient Sketching
    Filip Hanzely, Konstantin Mishchenko, Peter Richtárik (09 Sep 2018)
  • Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods
    Nicolas Loizou, Peter Richtárik (27 Dec 2017)