Accelerating Distributed ML Training via Selective Synchronization
S. Tyagi, Martin Swany
arXiv:2307.07950 · 16 July 2023

Papers citing "Accelerating Distributed ML Training via Selective Synchronization"

3 papers shown

OmniLearn: A Framework for Distributed Deep Learning over Heterogeneous Clusters
S. Tyagi, Prateek Sharma
21 Mar 2025

Flexible Communication for Optimal Distributed Learning over Unpredictable Networks
S. Tyagi, Martin Swany
05 Dec 2023

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima
N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang
15 Sep 2016