
A Distributed Cubic-Regularized Newton Method for Smooth Convex Optimization over Networks

Abstract

We propose a distributed, cubic-regularized Newton method for large-scale convex optimization over networks. The proposed method requires only local computations and communications and is suitable for federated learning applications over arbitrary network topologies. We establish an O(k^{-3}) convergence rate when the cost function is convex with Lipschitz gradient and Hessian, with k being the number of iterations. We further provide network-dependent bounds on the communication required in each step of the algorithm. We provide numerical experiments that validate our theoretical results.
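As context for the abstract, the centralized cubic-regularized Newton step (in the style of Nesterov and Polyak) chooses the update s minimizing the cubic model g·s + (1/2) sᵀHs + (M/6)‖s‖³, where g and H are the local gradient and Hessian and M bounds the Hessian's Lipschitz constant. The sketch below is an illustrative, non-distributed implementation under the assumption of a positive-definite Hessian; the regularization weight M, the bisection-based subproblem solver, and the quadratic test function are all choices made here for illustration, not details from the paper.

```python
import numpy as np

def cubic_newton_step(g, H, M, bisect_iters=100):
    """Minimize the cubic model  g.s + 0.5 s^T H s + (M/6)||s||^3.

    For convex problems the minimizer satisfies
        (H + (M/2) r I) s = -g   with   r = ||s||,
    so we bisect on r, using s(r) = -(H + (M/2) r I)^{-1} g.
    Assumes H is symmetric positive definite (an assumption of this sketch).
    """
    n = g.shape[0]
    I = np.eye(n)

    def s_of(r):
        return -np.linalg.solve(H + 0.5 * M * r * I, g)

    # ||s(r)|| is decreasing in r, so the fixed point lies in [0, ||s(0)||].
    lo, hi = 0.0, np.linalg.norm(s_of(0.0))
    if hi == 0.0:
        return np.zeros(n)  # already stationary
    for _ in range(bisect_iters):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(s_of(mid)) > mid:
            lo = mid
        else:
            hi = mid
    return s_of(hi)

# Illustrative use on a strongly convex quadratic f(x) = 0.5 x^T A x - b^T x
# (the Hessian of a quadratic is constant, so any M > 0 is valid).
A = np.diag([1.0, 2.0, 3.0])
b = np.ones(3)
x = np.zeros(3)
for _ in range(15):
    grad = A @ x - b
    x = x + cubic_newton_step(grad, A, M=1.0)
```

In the distributed setting studied by the paper, each agent holds only local data, so the gradient, Hessian, and subproblem solution above would instead be formed through local computation and neighbor-to-neighbor communication over the network.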
