μLO: Compute-Efficient Meta-Generalization of Learned Optimizers
- AI4CE
Learned optimizers (LOs) can substantially reduce the wall-clock training time of neural networks, and with it training costs. However, they can struggle to optimize unseen tasks (to meta-generalize), especially when training networks much larger than those seen during meta-training. To address this, we derive the Maximal Update Parametrization (μP) for two popular learned optimizer architectures and propose a simple meta-training recipe for μ-parameterized LOs (μLOs). Our empirical evaluation demonstrates that μLOs meta-trained with our recipe substantially improve meta-generalization to wider unseen tasks compared to LOs trained under the standard parametrization, as in existing work. When applying our μLOs, each meta-trained for less than 250 GPU-hours, to large-width models, we are often able to match or exceed the performance of pre-trained VeLO, the most performant publicly available learned optimizer, which was meta-trained with 4000 TPU-months of compute. We also observe that μLOs trained with our recipe exhibit substantially improved meta-generalization to deeper networks (5× meta-training) and remarkable generalization to much longer training horizons (25× meta-training).
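The μP parametrization mentioned above rescales per-layer initialization and update magnitudes with network width so that training dynamics transfer across model sizes. A minimal illustrative sketch of the width-scaling intuition, assuming Adam-style updates; the function name and base constants are hypothetical, not the paper's implementation:

```python
import math

def mup_scaling(fan_in, base_lr, base_std=1.0, output_layer=False):
    """Illustrative muP-style width scaling (not the paper's exact code).

    Hidden layers: init std shrinks like 1/sqrt(fan_in);
    output layer:  init std shrinks like 1/fan_in.
    Per-layer learning rate (Adam-style) shrinks like 1/fan_in.
    """
    std = base_std / (fan_in if output_layer else math.sqrt(fan_in))
    lr = base_lr / fan_in
    return std, lr

# Doubling the width halves the hidden-layer init std (1/sqrt rule)
# and halves the per-layer learning rate (1/fan_in rule).
std_small, lr_small = mup_scaling(512, 1e-3)
std_large, lr_large = mup_scaling(2048, 1e-3)
```

Under this kind of scaling, a learning rate tuned at small width remains near-optimal at large width, which is the property the μLO recipe exploits for meta-generalization.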
View on arXiv