ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

arXiv:1710.05080
DSCOVR: Randomized Primal-Dual Block Coordinate Algorithms for Asynchronous Distributed Optimization
13 October 2017
Lin Xiao, Adams Wei Yu, Qihang Lin, Weizhu Chen

Papers citing "DSCOVR: Randomized Primal-Dual Block Coordinate Algorithms for Asynchronous Distributed Optimization"

19 papers shown:
1. Contractivity and linear convergence in bilinear saddle-point problems: An operator-theoretic approach
   Colin Dirren, Mattia Bianchi, Panagiotis D. Grontas, John Lygeros, Florian Dorfler (18 Oct 2024)
2. Pick your Neighbor: Local Gauss-Southwell Rule for Fast Asynchronous Decentralized Optimization
   Marina Costantini, N. Liakopoulos, P. Mertikopoulos, T. Spyropoulos (15 Jul 2022)
3. Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling
   D. Kovalev, Alexander Gasnikov, Peter Richtárik (30 Dec 2021)
4. Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization
   Kaiwen Zhou, Anthony Man-Cho So, James Cheng (30 Sep 2021)
5. L-DQN: An Asynchronous Limited-Memory Distributed Quasi-Newton Method
   Bugra Can, Saeed Soori, M. Dehnavi, Mert Gurbuzbalaban (20 Aug 2021)
6. Stability and Generalization for Randomized Coordinate Descent
   Puyu Wang, Liang Wu, Yunwen Lei (17 Aug 2021)
7. Dual-Free Stochastic Decentralized Optimization with Variance Reduction
   Hadrien Hendrikx, Francis R. Bach, Laurent Massoulié (25 Jun 2020)
8. An Optimal Algorithm for Decentralized Finite Sum Optimization
   Hadrien Hendrikx, Francis R. Bach, Laurent Massoulie (20 May 2020)
9. Differential Network Analysis: A Statistical Perspective
   Ali Shojaie (09 Mar 2020)
10. Statistically Preconditioned Accelerated Gradient Method for Distributed Optimization
    Hadrien Hendrikx, Lin Xiao, Sébastien Bubeck, Francis R. Bach, Laurent Massoulie (25 Feb 2020)
11. Estimating Normalizing Constants for Log-Concave Distributions: Algorithms and Lower Bounds
    Rong Ge, Holden Lee, Jianfeng Lu (08 Nov 2019)
12. A Stochastic Proximal Point Algorithm for Saddle-Point Problems
    Luo Luo, Cheng Chen, Yujun Li, Guangzeng Xie, Zhihua Zhang (13 Sep 2019)
13. Minimum $L^q$-distance estimators for non-normalized parametric models
    Steffen Betsch, B. Ebner, B. Klar (30 Aug 2019)
14. On the Variance of the Adaptive Learning Rate and Beyond
    Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, Jiawei Han [ODL] (08 Aug 2019)
15. On Convergence of Distributed Approximate Newton Methods: Globalization, Sharper Bounds and Beyond
    Xiao-Tong Yuan, Ping Li (06 Aug 2019)
16. Neural ODEs as the Deep Limit of ResNets with constant weights
    B. Avelin, K. Nystrom [ODL] (28 Jun 2019)
17. Asynchronous Accelerated Proximal Stochastic Gradient for Strongly Convex Distributed Finite Sums
    Hadrien Hendrikx, Francis R. Bach, Laurent Massoulié [FedML] (28 Jan 2019)
18. Harnessing the Power of Serverless Runtimes for Large-Scale Optimization
    Arda Aytekin, M. Johansson (10 Jan 2019)
19. Linear Convergence of the Primal-Dual Gradient Method for Convex-Concave Saddle Point Problems without Strong Convexity
    S. Du, Wei Hu (05 Feb 2018)