Poplar: Efficient Scaling of Distributed DNN Training on Heterogeneous GPU Clusters

AAAI Conference on Artificial Intelligence (AAAI), 2024
22 August 2024
WenZheng Zhang
Yang Hu
Jing Shi
Xiaoying Bai
arXiv: 2408.12596

Papers citing "Poplar: Efficient Scaling of Distributed DNN Training on Heterogeneous GPU Clusters"

Distributed Low-Communication Training with Decoupled Momentum Optimization
S. Nedelkoski
Alexander Acker
O. Kao
Soeren Becker
Dominik Scheinert
03 Oct 2025