Training Multi-Layer Over-Parametrized Neural Network in Subquadratic Time

Abstract

We consider the problem of training a multi-layer over-parametrized neural network to minimize the empirical risk induced by a loss function. In the typical setting of over-parametrization, the network width $m$ is much larger than the data dimension $d$ and the number of training samples $n$ ($m = \mathrm{poly}(n, d)$), which induces a prohibitively large weight matrix $W \in \mathbb{R}^{m \times m}$ per layer. Naively, one has to pay $O(m^2)$ time to read the weight matrix and evaluate the neural network function in both forward and backward computation. In this work, we show how to reduce the training cost per iteration. Specifically, we propose a framework that uses $m^2$ cost only in the initialization phase and achieves a truly subquadratic cost per iteration in terms of $m$, i.e., $m^{2-\Omega(1)}$ per iteration. To obtain this result, we make use of various techniques, including a shifted ReLU-based sparsifier, a lazy low-rank maintenance data structure, fast rectangular matrix multiplication, tensor-based sketching techniques, and preconditioning.
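To give a rough sense of the shifted-ReLU sparsifier idea, the sketch below (not the paper's actual algorithm) shows how a positive shift $b$ in the activation $\sigma_b(z) = \max(0, z - b)$ leaves only a small active set $S$ of neurons firing, so a subsequent $m \times m$ layer only needs to read the $|S|$ columns of its weight matrix indexed by $S$, costing $O(m|S|)$ instead of $O(m^2)$. All names, shapes, and the particular threshold are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's algorithm): a shifted ReLU
# sparsifies activations, so the next m x m layer can skip inactive columns.
import numpy as np

def shifted_relu(z, b):
    """Shifted ReLU sigma_b(z) = max(0, z - b); a positive b sparsifies z."""
    return np.maximum(z - b, 0.0)

rng = np.random.default_rng(0)
m, d = 8_000, 64                        # width m >> data dimension d (illustrative)
W1 = rng.normal(size=(m, d)) / np.sqrt(d)
W2 = rng.normal(size=(m, m)) / np.sqrt(m)
x = rng.normal(size=d)

# A shift of order sqrt(log m) leaves only m^{1 - Theta(1)} neurons active
# for roughly Gaussian pre-activations (the constant 0.4 is an assumption).
b = np.sqrt(0.4 * np.log(m))
h = shifted_relu(W1 @ x, b)             # first-layer activations, mostly exact zeros
S = np.flatnonzero(h)                   # active set, |S| << m

# Dense second layer: O(m^2) work -- reads every entry of W2.
y_dense = shifted_relu(W2 @ h, b)
# Sparsity-aware second layer: O(m * |S|) work -- reads only columns in S.
y_sparse = shifted_relu(W2[:, S] @ h[S], b)

assert np.allclose(y_dense, y_sparse)
print(f"{S.size} of {m} neurons active; read {S.size}/{m} columns of W2")
```

In this toy setting the two computations agree exactly up to floating-point summation order, while the sparse path touches only an $m \times |S|$ slice of the weight matrix; the paper's framework additionally maintains the weights lazily in low-rank form so that this sparse access pattern persists across iterations.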
