Pruning Deep Neural Networks using Partial Least Squares
To reduce the high computational cost of deep convolutional networks, recent approaches find and remove unimportant filters from these networks. Despite achieving remarkable results, these approaches remain computationally expensive, mostly because pruning is performed layer by layer, which requires many fine-tuning iterations. In this work, we propose a novel approach to efficiently remove filters from deep convolutional neural networks, based on Partial Least Squares (PLS) and Variable Importance in Projection (VIP) to measure the importance of each filter and remove the least important ones. These techniques estimate a filter's importance from its contribution to predicting the class label, which we show to be an adequate criterion for removing filters. We validate the proposed method on the ImageNet, CIFAR-10 and Food-101 datasets, where it eliminates up to 65% of the filters and reduces floating point operations (FLOPs) by 88% without penalizing network accuracy; in some cases, it even improves accuracy over the unpruned network. We show that employing PLS+VIP as the criterion for selecting the filters to remove outperforms recent feature selection techniques employed by state-of-the-art pruning methods. Finally, we show that the proposed method is more efficient and achieves a larger reduction in FLOPs than existing methods.