DADAM: A Consensus-based Distributed Adaptive Gradient Method for Online Optimization

arXiv:1901.09109 · 25 January 2019
Parvin Nazari, Davoud Ataee Tarzanagh, George Michailidis
ODL

Papers citing "DADAM: A Consensus-based Distributed Adaptive Gradient Method for Online Optimization"

12 papers shown
Combinatorial Optimization with Automated Graph Neural Networks
Yang Liu, Peng Zhang, Yang Gao, Chuan Zhou, Zhao Li, Hongyang Chen
AI4CE, GNN · 05 Jun 2024
DIMAT: Decentralized Iterative Merging-And-Training for Deep Learning Models
Nastaran Saadati, Minh Pham, Nasla Saleem, Joshua R. Waite, Aditya Balu, Zhanhong Jiang, Chinmay Hegde, Soumik Sarkar
MoMe · 11 Apr 2024
Federated Multi-Sequence Stochastic Approximation with Local Hypergradient Estimation
Davoud Ataee Tarzanagh, Mingchen Li, Pranay Sharma, Samet Oymak
02 Jun 2023
A Penalty-Based Method for Communication-Efficient Decentralized Bilevel Programming
Parvin Nazari, Ahmad Mousavi, Davoud Ataee Tarzanagh, George Michailidis
08 Nov 2022
Distributed Online Non-convex Optimization with Composite Regret
Zhanhong Jiang, Aditya Balu, Xian Yeow Lee, Young M. Lee, Chinmay Hegde, Soumik Sarkar
21 Sep 2022
Online Bilevel Optimization: Regret Analysis of Online Alternating Gradient Methods
Davoud Ataee Tarzanagh, Parvin Nazari, Bojian Hou, Li Shen, Laura Balzano
06 Jul 2022
Efficient-Adam: Communication-Efficient Distributed Adam
Congliang Chen, Li Shen, Wei Liu, Zhi-Quan Luo
28 May 2022
FedNest: Federated Bilevel, Minimax, and Compositional Optimization
Davoud Ataee Tarzanagh, Mingchen Li, Christos Thrampoulidis, Samet Oymak
FedML · 04 May 2022
On the Convergence of Decentralized Adaptive Gradient Methods
Xiangyi Chen, Belhal Karimi, Weijie Zhao, Ping Li
07 Sep 2021
Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers
Robin M. Schmidt, Frank Schneider, Philipp Hennig
ODL · 03 Jul 2020
Optimal Complexity in Decentralized Training
Yucheng Lu, Christopher De Sa
15 Jun 2020
MixML: A Unified Analysis of Weakly Consistent Parallel Learning
Yucheng Lu, J. Nash, Christopher De Sa
FedML · 14 May 2020