Weight-Sharing Regularization
Weight-sharing is ubiquitous in deep learning. Motivated by this, we propose a "weight-sharing regularization" penalty on the weights $w \in \mathbb{R}^d$ of a neural network, defined as $\mathcal{R}(w) = \frac{1}{d-1} \sum_{i > j} |w_i - w_j|$. We study the proximal mapping of $\mathcal{R}$ and provide an intuitive interpretation of it in terms of a physical system of interacting particles. We also parallelize existing algorithms for $\operatorname{prox}_{\mathcal{R}}$ (to run on GPU) and find that one of them is fast in practice but slow ($O(d)$) for worst-case inputs. Using the physical interpretation, we design a novel parallel algorithm which runs in $O(\log^3 d)$ when sufficient processors are available, thus guaranteeing fast training. Our experiments reveal that weight-sharing regularization enables fully connected networks to learn convolution-like filters even when pixels have been shuffled, while convolutional neural networks fail in this setting. Our code is available on GitHub.
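As a minimal sketch of the penalty itself (not the paper's proximal algorithms), the sum of absolute pairwise differences can be computed either naively in $O(d^2)$ or, after sorting, in $O(d \log d)$, since the $k$-th smallest weight appears with coefficient $2k - d + 1$ in the pairwise sum. The function names below are illustrative, not from the authors' codebase:

```python
import numpy as np

def weight_sharing_penalty_naive(w):
    """O(d^2): sum of |w_i - w_j| over all pairs i > j, scaled by 1/(d-1)."""
    d = len(w)
    total = sum(abs(w[i] - w[j]) for i in range(d) for j in range(i))
    return total / (d - 1)

def weight_sharing_penalty_sorted(w):
    """Equivalent O(d log d) form: after sorting ascending, the k-th
    smallest weight (0-indexed) contributes with coefficient (2k - d + 1)."""
    w = np.sort(np.asarray(w, dtype=float))
    d = w.size
    coeffs = 2 * np.arange(d) - d + 1
    return float(coeffs @ w) / (d - 1)

w = [1.0, 2.0, 4.0]
print(weight_sharing_penalty_naive(w))   # 3.0  (= (1 + 3 + 2) / 2)
print(weight_sharing_penalty_sorted(w))  # 3.0
```

The sorted form is what makes the penalty practical as a regularizer at scale; the naive form is included only to check the identity.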
View on arXiv