DrMAD: Distilling Reverse-Mode Automatic Differentiation for Optimizing
Hyperparameters of Deep Neural Networks
The performance of deep neural networks is well known to be sensitive to the setting of their hyperparameters. Recent advances in reverse-mode automatic differentiation allow hyperparameters to be optimized with gradients. The standard way of computing these gradients involves a forward and a backward pass of computations. However, the backward pass typically requires a prohibitive amount of memory to store all the intermediate variables needed to exactly reverse the training procedure. In this work we propose a new method, DrMAD, which distills the knowledge of the forward pass into a shortcut path, through which we approximately reverse the training trajectory. Experiments on the MNIST dataset show that DrMAD reduces memory consumption by four orders of magnitude when optimizing hyperparameters, without sacrificing effectiveness. To the best of our knowledge, DrMAD is the first practical attempt to automatically tune hundreds of thousands of hyperparameters of deep neural networks.
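To make the idea concrete, below is a minimal toy sketch (not the authors' code) of hypergradient computation where the reverse pass replaces the stored weight trajectory with a shortcut: intermediate weights are approximated by linearly interpolating between the initial weights w_0 and the final weights w_T, so only the two endpoints need to be kept in memory. The quadratic losses, the single L2-penalty hyperparameter lambda, and all constants are illustrative assumptions, not taken from the paper.

```python
# Toy sketch: hypergradient of a validation loss w.r.t. an L2 penalty (lambda),
# with a DrMAD-style reverse pass that interpolates between w_0 and w_T
# instead of storing every intermediate weight vector.
import numpy as np

alpha, T, lam = 0.1, 50, 0.5                 # learning rate, SGD steps, hyperparameter
c_train, c_val = np.array([2.0, -1.0]), np.array([1.5, -0.5])

def grad_train(w, lam):
    # d/dw of 0.5*||w - c_train||^2 + 0.5*lam*||w||^2
    return (w - c_train) + lam * w

# Forward pass: plain SGD; only the endpoints w_0 and w_T are retained.
w0 = np.zeros(2)
w = w0.copy()
for _ in range(T):
    w = w - alpha * grad_train(w, lam)
w_T = w

# Reverse pass: accumulate dL_val/dlam along the approximate trajectory
# w_t ~ (1 - t/T) * w_0 + (t/T) * w_T  (the "shortcut path").
p = w_T - c_val                              # grad of L_val = 0.5*||w - c_val||^2 at w_T
d_lam = 0.0
for t in reversed(range(T)):
    w_t = (1.0 - t / T) * w0 + (t / T) * w_T  # interpolated weights, no stored history
    d_lam += -alpha * p.dot(w_t)              # d(grad_train)/dlam equals w for this loss
    p *= 1.0 - alpha * (1.0 + lam)            # apply (I - alpha * Hessian); Hessian is (1+lam)I here
print("approximate hypergradient dL_val/dlam:", d_lam)
```

The resulting hypergradient can be fed to any outer-loop optimizer over the hyperparameters; the exact-reversal approach would instead have to store (or exactly reconstruct) every w_t, which is what drives the memory cost the abstract refers to.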