LoRDO: Distributed Low-Rank Optimization with Infrequent Communication
Distributed training of foundation models is limited by interconnect bandwidth. While infrequent communication strategies reduce synchronization frequency, they remain bottlenecked by the memory and communication requirements of optimizer states. Low-rank optimizers can alleviate these constraints; however, in the local-update regime, workers lack access to the full-batch gradients required to compute low-rank projections, which degrades performance. We propose LoRDO, a principled framework unifying low-rank optimization with infrequent synchronization. We first demonstrate that, while global projections based on pseudo-gradients are theoretically superior, they permanently restrict the optimization trajectory to a low-rank subspace. To restore subspace exploration, we introduce a full-rank quasi-hyperbolic update. LoRDO achieves near-parity with low-rank baselines in language modeling and downstream tasks at model scales of M--M, while reducing communication by . Finally, we show that LoRDO improves performance even further in very low-memory settings with small rank/batch size.
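The combination of ideas in the abstract can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the paper's actual algorithm: the function names (`low_rank_project`, `outer_step`) and all hyperparameter values are assumptions. It shows an infrequent-communication outer step in which the averaged pseudo-gradient drives a momentum buffer, the momentum direction is projected to a low-rank subspace via SVD, and a quasi-hyperbolic mix adds back a full-rank pseudo-gradient term so the trajectory is not permanently confined to that subspace.

```python
import numpy as np

def low_rank_project(matrix, rank):
    # Keep only the top-`rank` singular directions of the update matrix.
    U, s, Vt = np.linalg.svd(matrix, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def outer_step(theta, local_thetas, rank, lr=0.7, nu=0.9, beta=0.9, momentum=None):
    # Pseudo-gradient: displacement of the averaged local parameters
    # from the current global parameters (as in local-update methods).
    pg = theta - np.mean(local_thetas, axis=0)
    momentum = pg if momentum is None else beta * momentum + pg
    # Quasi-hyperbolic mix (illustrative): a low-rank momentum direction
    # plus a full-rank pseudo-gradient term that restores subspace exploration.
    direction = nu * low_rank_project(momentum, rank) + (1.0 - nu) * pg
    return theta - lr * direction, momentum

# Toy usage: one outer step over four simulated workers.
rng = np.random.default_rng(0)
theta = rng.standard_normal((16, 16))
local_thetas = [theta - 0.05 * rng.standard_normal((16, 16)) for _ in range(4)]
new_theta, m = outer_step(theta, local_thetas, rank=4)
```

Under these assumptions, only the averaged local parameters (and, depending on the variant, a rank-`r` factorization of the momentum) would need to be communicated at each outer step, which is the memory/communication saving the abstract refers to.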