On Convergence of Distributed Approximate Newton Methods: Globalization, Sharper Bounds and Beyond

6 August 2019
Xiao-Tong Yuan, Ping Li
arXiv: 1908.02246

Papers citing "On Convergence of Distributed Approximate Newton Methods: Globalization, Sharper Bounds and Beyond"

13 of 13 papers shown
A Unified Theory of Stochastic Proximal Point Methods without Smoothness
Peter Richtárik, Abdurakhmon Sadiev, Yury Demidovich
24 May 2024

Stochastic Distributed Optimization under Average Second-order Similarity: Algorithms and Analysis
Dachao Lin, Yuze Han, Haishan Ye, Zhihua Zhang
15 Apr 2023

Similarity, Compression and Local Steps: Three Pillars of Efficient Communications for Distributed Variational Inequalities
Aleksandr Beznosikov, Martin Takáč, Alexander Gasnikov
15 Feb 2023

Sharper Analysis for Minibatch Stochastic Proximal Point Methods: Stability, Smoothness, and Deviation
Xiao-Tong Yuan, Ping Li
09 Jan 2023

Faster federated optimization under second-order similarity
Ahmed Khaled, Chi Jin
FedML
06 Sep 2022

Scalable K-FAC Training for Deep Neural Networks with Distributed Preconditioning
Lin Zhang, Shaoshuai Shi, Wei Wang, Yue Liu
30 Jun 2022

Compression and Data Similarity: Combination of Two Techniques for Communication-Efficient Solving of Distributed Variational Inequalities
Aleksandr Beznosikov, Alexander Gasnikov
19 Jun 2022

Optimal Gradient Sliding and its Application to Distributed Optimization Under Similarity
D. Kovalev, Aleksandr Beznosikov, Ekaterina Borodich, Alexander Gasnikov, G. Scutari
30 May 2022

Acceleration in Distributed Optimization under Similarity
Helena Lofstrom, G. Scutari, Tianyue Cao, Alexander Gasnikov
24 Oct 2021

Robust Distributed Optimization With Randomly Corrupted Gradients
Berkay Turan, César A. Uribe, Hoi-To Wai, M. Alizadeh
28 Jun 2021

Data-Free Knowledge Distillation for Heterogeneous Federated Learning
Zhuangdi Zhu, Junyuan Hong, Jiayu Zhou
FedML
20 May 2021

Newton Method over Networks is Fast up to the Statistical Precision
Amir Daneshmand, G. Scutari, Pavel Dvurechensky, Alexander Gasnikov
12 Feb 2021

Statistically Preconditioned Accelerated Gradient Method for Distributed Optimization
Hadrien Hendrikx, Lin Xiao, Sébastien Bubeck, Francis R. Bach, Laurent Massoulie
25 Feb 2020