Distributed Learning of Deep Neural Networks using Independent Subnet Training
Distributed machine learning (ML) can bring more computational resources to bear than single-machine learning, reducing training time. Further, distribution allows models to be partitioned over many machines, enabling the training of very large models -- models that may be much larger than the available memory of any individual machine. However, in practice, distributed ML remains challenging, primarily due to high communication costs. We propose a new approach to distributed neural network learning, called independent subnet training (IST). In IST, a neural network is decomposed into a set of subnetworks of the same depth as the original network, each of which is trained locally, before the various subnets are exchanged and the process is repeated. IST training has many advantages over standard data parallel approaches. Because the subnets are trained independently, communication frequency is reduced. Because the original network is decomposed into disjoint parts, communication volume is reduced. Further, the decomposition makes IST naturally model parallel, and so IST scales to very large models that cannot fit on any single machine. We show experimentally that IST results in training times that are much lower than those of data parallel approaches to distributed learning, and that it scales to large models that cannot be learned using standard approaches.
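The decomposition described above can be illustrated with a minimal sketch: a one-hidden-layer network whose hidden units are partitioned among workers, so each worker receives a narrower subnet of the same depth, trains it locally, and the slices are then reassembled. All sizes, function names, and the plain SGD step below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical full network: one hidden layer, linear activations for brevity.
d_in, d_hidden, d_out = 8, 12, 4
W1 = rng.standard_normal((d_in, d_hidden))
W2 = rng.standard_normal((d_hidden, d_out))


def decompose(W1, W2, n_workers, rng):
    """Partition hidden units among workers: each worker gets a subnet of
    the same depth built from a disjoint slice of the hidden layer."""
    perm = rng.permutation(W1.shape[1])
    parts = np.array_split(perm, n_workers)
    return [(idx, W1[:, idx].copy(), W2[idx, :].copy()) for idx in parts]


def reassemble(subnets, d_in, d_hidden, d_out):
    """Write each locally trained slice back into the full weight matrices."""
    W1 = np.empty((d_in, d_hidden))
    W2 = np.empty((d_hidden, d_out))
    for idx, w1, w2 in subnets:
        W1[:, idx] = w1
        W2[idx, :] = w2
    return W1, W2


def local_sgd_step(w1, w2, x, y, lr=0.01):
    """One SGD step on squared loss for a subnet (stands in for the
    local-training phase; nonlinearities are omitted to keep it short)."""
    h = x @ w1                      # hidden representation
    err = h @ w2 - y                # prediction error
    g2 = h.T @ err                  # gradient w.r.t. w2
    g1 = x.T @ (err @ w2.T)         # gradient w.r.t. w1
    return w1 - lr * g1, w2 - lr * g2


# One IST round: decompose, train each subnet independently, reassemble.
x = rng.standard_normal((16, d_in))
y = rng.standard_normal((16, d_out))
subnets = decompose(W1, W2, n_workers=3, rng=rng)
trained = [(idx, *local_sgd_step(w1, w2, x, y)) for idx, w1, w2 in subnets]
W1_new, W2_new = reassemble(trained, d_in, d_hidden, d_out)
```

Note that during a round, no gradients or parameters are exchanged between workers; communication happens only at the decompose/reassemble boundary, which is the source of the reduced communication frequency and volume claimed above.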