
Load Balancing for AI Training Workloads

Mark Silberstein
Sylvia Ratnasamy
Scott Shenker
Isaac Keslassy
Main: 12 pages · 20 figures · 4 tables · Bibliography: 3 pages · Appendix: 8 pages
Abstract

The extreme bandwidth demands of AI training have made load balancing a critical component of AI fabrics, and a variety of load-balancing designs have emerged in recent work from both industry and research. However, there is currently little consensus on which design approach dominates, or on the conditions under which one does. We also lack an understanding of how far these approaches are from optimal. We provide a technical foundation for answering these questions by systematically evaluating leading load-balancing designs while decoupling them from specific congestion control and loss recovery stacks. We find that load balancing based on packet spraying dominates traditional approaches that balance traffic at flow, flowlet, or subflow granularities. Comparing host- and switch-based approaches to packet spraying, we find that they perform similarly in failure-free scenarios, but that a host-based approach dominates under link failure because of its rapid visibility into end-to-end path conditions. We also show that no leading approach achieves optimal O(1) queue scaling at maximum utilization. We demonstrate why a destination-based rotation (DR) discipline can reach this optimum and introduce Ofan, a switch-based implementation of DR that offers valuable performance gains over other packet-spraying approaches.
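To illustrate the granularity distinction the abstract draws, here is a minimal simulation sketch (not from the paper) contrasting per-flow hashing, as in ECMP, with per-packet spraying. The flow sizes, path count, and the use of CRC32 as the flow hash are all illustrative assumptions; the point is only that hashing pins every packet of a large flow to one path, while spraying distributes packets across all paths.

```python
import random
import zlib

def flow_hash_lb(flows, num_paths):
    """Per-flow load balancing (ECMP-style): every packet of a flow
    follows the path selected by a hash of the flow's identifier.
    CRC32 is used here only as a deterministic stand-in hash."""
    load = [0] * num_paths
    for flow_id, num_packets in flows:
        path = zlib.crc32(flow_id.encode()) % num_paths
        load[path] += num_packets
    return load

def packet_spray_lb(flows, num_paths):
    """Per-packet spraying: each packet independently takes the next
    path in round-robin order, regardless of which flow it belongs to."""
    load = [0] * num_paths
    next_path = 0
    for _, num_packets in flows:
        for _ in range(num_packets):
            load[next_path] += 1
            next_path = (next_path + 1) % num_paths
    return load

# A skewed, hypothetical workload: a few large "elephant" flows
# mixed with many single-packet flows.
random.seed(0)
flows = [(f"flow{i}", random.choice([1, 1, 1, 100])) for i in range(64)]

for name, lb in [("flow hashing", flow_hash_lb),
                 ("packet spraying", packet_spray_lb)]:
    load = lb(flows, num_paths=8)
    print(f"{name:16s} per-path load: min={min(load)}, max={max(load)}")
```

Under spraying, per-path loads differ by at most one packet; under flow hashing, whichever paths the elephant flows hash to carry far more traffic, which is the imbalance that motivates spraying-based designs.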
