Low-Bandwidth Matrix Multiplication: Faster Algorithms and More General Forms of Sparsity

In prior work, Gupta et al. (SPAA 2022) presented a distributed algorithm for multiplying sparse matrices using n computers. They assumed that the input matrices are uniformly sparse -- there are at most d non-zeros in each row and column -- and the task is to compute a uniformly sparse part of the product matrix. Initially each computer knows one row of each input matrix, and eventually each computer needs to know one row of the product matrix. In each communication round, each computer can send and receive one O(log n)-bit message. Their algorithm solves this task in O(d^1.907) rounds, while the trivial bound is O(d^2).

We improve on the prior work in two dimensions. First, we show that we can solve the same task faster. Second, we explore what happens when the matrices are not uniformly sparse. We consider the following alternative notions of sparsity: row-sparse matrices (at most d non-zeros per row), column-sparse matrices (at most d non-zeros per column), matrices with bounded degeneracy (we can recursively delete a row or column with at most d non-zeros), average-sparse matrices (at most dn non-zeros in total), and general matrices. We show that we can still compute the product AB within the same round complexity even if one of the three matrices (A, B, or AB) is average-sparse instead of uniformly sparse. We present algorithms that handle a much broader range of sparsity, and we present conditional hardness results that put limits on further improvements and generalizations.
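The bounded-degeneracy notion above can be made concrete with a short sketch: a matrix (viewed here only through its sparsity pattern) is d-degenerate if we can repeatedly delete a row or a column that currently contains at most d non-zeros, until nothing remains. The following check is illustrative only and is not from the paper; representing the matrix as a set of (row, column) positions of non-zeros is an assumption made for the sketch.

```python
def is_degenerate(nz, d):
    """Check d-degeneracy of a sparsity pattern.

    nz: set of (row, col) positions of non-zeros.
    d:  sparsity parameter.

    Greedy deletion is safe here: removing a row or column never
    increases the non-zero count of any other row or column.
    """
    nz = set(nz)
    while nz:
        row_count, col_count = {}, {}
        for i, j in nz:
            row_count[i] = row_count.get(i, 0) + 1
            col_count[j] = col_count.get(j, 0) + 1
        # Find any row or column with at most d non-zeros and delete it.
        victim_row = next((i for i, c in row_count.items() if c <= d), None)
        victim_col = next((j for j, c in col_count.items() if c <= d), None)
        if victim_row is not None:
            nz = {(i, j) for (i, j) in nz if i != victim_row}
        elif victim_col is not None:
            nz = {(i, j) for (i, j) in nz if j != victim_col}
        else:
            return False  # stuck: every row and column has > d non-zeros
    return True
```

For example, a uniformly d-sparse pattern is trivially d-degenerate (every row already qualifies for deletion), while a fully dense 2x2 pattern is 2-degenerate but not 1-degenerate.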