
Communication-Efficient Distributed Optimization using an Approximate Newton-type Method

International Conference on Machine Learning (ICML), 2014
Abstract

We present a novel Newton-type method for distributed optimization, which converges to the empirical optimum of distributed stochastic optimization problems using a small number of simple communication rounds. For quadratic objectives, the number of communication rounds provably scales down with the data size, and is constant under reasonable assumptions. We also present a looser analysis for the non-quadratic case, and discuss the advantages of our approach compared to recently proposed single-round communication algorithms.
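To make the "simple communication rounds" concrete, here is a minimal NumPy sketch of one round of a DANE-style approximate Newton iteration (the method this paper introduces) on a distributed least-squares problem. The closed-form local solve, the parameter names (eta, mu), and the data layout are illustrative assumptions based on the method's description, not code from the paper.

```python
import numpy as np

def dane_round(parts, w, eta=1.0, mu=0.0):
    """One DANE-style communication round for distributed least squares.

    parts: list of (A_i, b_i) shards held by each machine (illustrative
           names); the local objective is phi_i(w) = ||A_i w - b_i||^2 / (2 n_i).
    """
    d = w.shape[0]
    # Round 1: each machine sends its local gradient; they are averaged.
    grads = [A.T @ (A @ w - b) / A.shape[0] for A, b in parts]
    g = np.mean(grads, axis=0)
    # Each machine then solves its local subproblem
    #   min_v phi_i(v) - (grad phi_i(w) - eta * g)^T v + (mu/2) ||v - w||^2,
    # which for a quadratic phi_i reduces to a linear solve:
    #   v_i = w - eta * (H_i + mu * I)^{-1} g,  with H_i = A_i^T A_i / n_i.
    sols = []
    for A, b in parts:
        H = A.T @ A / A.shape[0] + mu * np.eye(d)
        sols.append(w - eta * np.linalg.solve(H, g))
    # Round 2: the local solutions are averaged to form the next iterate.
    return np.mean(sols, axis=0)

# Toy usage: 4 "machines", each holding a shard of one least-squares problem.
rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 5))
b = A @ rng.normal(size=5) + 0.01 * rng.normal(size=1000)
parts = [(A[i::4], b[i::4]) for i in range(4)]
w = np.zeros(5)
for _ in range(5):
    w = dane_round(parts, w)
```

Each round costs only two all-reduce-style exchanges of d-dimensional vectors, which is why, when the local Hessians concentrate around the global Hessian as data size grows, few rounds suffice.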
