How Does Topology of Neural Architectures Impact Gradient Propagation
and Model Performance?
In this paper, we address two fundamental questions in neural architecture design research: (i) How does an architecture's topology affect gradient flow during training? (ii) Can certain topological characteristics of deep networks indicate a priori (i.e., without training) which models, despite having different numbers of parameters/FLOPs/layers, achieve similar accuracy? To this end, we formulate the deep learning architecture design problem from a network science perspective and introduce a new metric called NN-Mass to quantify how effectively information flows through a given architecture. We demonstrate that NN-Mass is more effective than the parameter count for characterizing gradient flow properties and for identifying models that achieve similar accuracy despite significantly different size/compute requirements. Detailed experiments on both synthetic and real datasets (e.g., MNIST and CIFAR-10/100) provide extensive empirical evidence for our insights. Finally, we exploit our new metric to directly design efficient architectures, achieving up to 3x fewer parameters and FLOPs than large CNNs on CIFAR-10 with minimal loss of accuracy (96.82% vs. 97%).
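The key idea above is that a purely topological quantity, computed from the architecture's wiring alone, can be evaluated before any training. The following is a minimal illustrative sketch of that idea, not the paper's actual NN-Mass definition: it assumes a network made of cells, each a chain of layers where every layer can receive skip connections from at most `tc` earlier layers, and aggregates a skip-connection density over cells. All names (`cell_density`, `nn_mass_sketch`) and the exact aggregation are hypothetical.

```python
# Hypothetical sketch only: the precise NN-Mass formula is defined in the
# paper; this toy metric merely illustrates computing a topological score
# from an architecture's connectivity, with no training involved.

def cell_density(depth: int, tc: int) -> float:
    """Fraction of possible long-range (skip) links present in a cell.

    A chain of `depth` layers admits depth * (depth - 1) / 2 possible
    skip-connection endpoints; each layer keeps at most `tc` of them.
    """
    actual = sum(min(tc, i) for i in range(depth))  # links entering layer i
    possible = depth * (depth - 1) / 2              # all earlier layers
    return actual / possible if possible else 0.0

def nn_mass_sketch(cells: list[tuple[int, int, int]]) -> float:
    """Toy aggregate over cells given as (width, depth, tc) tuples."""
    return sum(w * d * cell_density(d, tc) for (w, d, tc) in cells)

# Two hypothetical CNNs of different depth but comparable connectivity:
net_a = [(16, 8, 4), (32, 8, 4)]
net_b = [(16, 12, 6), (32, 12, 6)]
print(nn_mass_sketch(net_a), nn_mass_sketch(net_b))
```

Under this toy definition, models with very different layer counts can still land close in score when their skip-connection density is matched, which is the intuition behind comparing models of different size/compute via a single topological number.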