
Accelerating Distributed ML Training via Selective Synchronization

IEEE International Conference on Cluster Computing (CLUSTER), 2023
16 July 2023
Sahil Tyagi
Martin Swany
FedML

Papers citing "Accelerating Distributed ML Training via Selective Synchronization"

2 papers shown.

  1. "On Using Large-Batches in Federated Learning" by Sahil Tyagi (FedML), 05 Sep 2025.
  2. "OmniLearn: A Framework for Distributed Deep Learning over Heterogeneous Clusters" by S. Tyagi and Prateek Sharma, IEEE Transactions on Parallel and Distributed Systems (TPDS), 2025; 21 Mar 2025.