In-depth Analysis of Low-rank Matrix Factorisation in a Federated Setting

We analyze a distributed algorithm to compute a low-rank matrix factorization on $N$ clients, each holding a local dataset $\mathbf{S}^i \in \mathbb{R}^{n_i \times d}$; mathematically, we seek to solve $\min_{\mathbf{U}^i \in \mathbb{R}^{n_i \times r},\, \mathbf{V} \in \mathbb{R}^{d \times r}} \frac{1}{2} \sum_{i=1}^N \| \mathbf{S}^i - \mathbf{U}^i \mathbf{V}^\top \|_{\mathrm{F}}^2$. Considering a power initialization of $\mathbf{V}$, we rewrite the preceding smooth non-convex problem as a smooth strongly-convex problem that we solve using a parallel Nesterov gradient descent, potentially requiring a single communication step at initialization. For any client $i$ in $\{1, \dots, N\}$, we obtain a global $\mathbf{V}$ in $\mathbb{R}^{d \times r}$ common to all clients and a local variable $\mathbf{U}^i$ in $\mathbb{R}^{n_i \times r}$. We provide a linear rate of convergence of the excess loss which depends on $\sigma_{\max}/\sigma_r$, where $\sigma_r$ is the $r$-th singular value of the concatenation $\mathbf{S}$ of the matrices $(\mathbf{S}^i)_{i=1}^N$. This result improves on the rates of convergence given in the literature, which depend on $\sigma_{\max}^2/\sigma_{\min}^2$. We provide an upper bound on the Frobenius-norm error of reconstruction under the power initialization strategy, and we complete our analysis with experiments on both synthetic and real data.
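
The abstract gives no code, so the following is a minimal NumPy sketch of the pipeline it describes, under stated assumptions: a power-method initialization of the shared factor $\mathbf{V}$, with the aggregation of local Gram matrices standing in for the single communication step, followed by purely local solves for each $\mathbf{U}^i$. The function names (`power_init_V`, `solve_local_U`), the use of plain gradient descent instead of the paper's Nesterov acceleration, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def power_init_V(local_datasets, r, n_power_iters=3, seed=None):
    """Estimate a shared right factor V (d x r) with a power-method-style
    initialization.  Aggregating the Gram matrices S_i^T S_i is taken here
    as the single communication step; later iterations need no communication."""
    rng = np.random.default_rng(seed)
    d = local_datasets[0].shape[1]
    gram = sum(S_i.T @ S_i for S_i in local_datasets)    # d x d, aggregated once
    V = rng.standard_normal((d, r))
    for _ in range(n_power_iters):
        V, _ = np.linalg.qr(gram @ V)                    # orthonormalize columns
    return V

def solve_local_U(S_i, V, n_steps=200):
    """With V frozen, (1/2)||S_i - U V^T||_F^2 is smooth and strongly convex
    in U, so gradient descent converges linearly (the paper accelerates this
    with Nesterov momentum; plain gradient steps are used here for brevity)."""
    lr = 1.0 / np.linalg.norm(V.T @ V, 2)                # 1 / smoothness constant
    U = np.zeros((S_i.shape[0], V.shape[1]))
    for _ in range(n_steps):
        U -= lr * (U @ V.T - S_i) @ V                    # gradient w.r.t. U
    return U

# Toy run: N = 3 clients whose local data share a common rank-5 right factor.
rng = np.random.default_rng(0)
V_star = rng.standard_normal((20, 5))
clients = [rng.standard_normal((50, 5)) @ V_star.T for _ in range(3)]
V = power_init_V(clients, r=5, seed=1)                   # global, common to all clients
rel_errors = [np.linalg.norm(S_i - solve_local_U(S_i, V) @ V.T)
              / np.linalg.norm(S_i) for S_i in clients]  # local reconstruction errors
print(rel_errors)
```

In this synthetic setup the clients share an exact rank-5 structure, so the reconstruction errors are near machine precision; with real data they would instead be bounded by the best rank-$r$ approximation error, in line with the Frobenius-norm bound mentioned above.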