
SurCo: Learning Linear Surrogates For Combinatorial Nonlinear Optimization Problems

International Conference on Machine Learning (ICML), 2023
Abstract

Optimization problems with nonlinear cost functions and combinatorial constraints appear in many real-world applications but remain challenging to solve efficiently compared to their linear counterparts. To bridge this gap, we propose SurCo, which learns linear Surrogate costs that can be used in existing Combinatorial solvers to output good solutions to the original nonlinear combinatorial optimization problem. The surrogate costs are learned end-to-end with the nonlinear loss by differentiating through the linear surrogate solver, combining the flexibility of gradient-based methods with the structure of linear combinatorial optimization. We propose three SurCo variants: SurCo-zero for individual nonlinear problems, SurCo-prior for problem distributions, and SurCo-hybrid to combine both distributional and problem-specific information. We give theoretical intuition motivating SurCo, and evaluate it empirically. Experiments show that SurCo finds better solutions faster than state-of-the-art and domain-expert approaches in real-world optimization problems such as embedding table sharding, inverse photonic design, and nonlinear route planning.
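The core idea (learn a linear surrogate cost whose solver output scores well under the nonlinear objective) can be illustrated with a small sketch. This is not the paper's implementation: the solver here is a hypothetical toy (choose exactly k of n items), the objective is an assumed quadratic, and the gradient through the argmin uses a blackbox-differentiation-style finite difference (re-solve with a perturbed cost and difference the two solutions) rather than the paper's exact machinery.

```python
import numpy as np

def linear_solver(c, k):
    """Toy combinatorial solver: choose exactly k of n items,
    minimizing the linear cost c^T x (i.e., pick the k cheapest items)."""
    x = np.zeros_like(c)
    x[np.argsort(c)[:k]] = 1.0
    return x

def nonlinear_cost(x, Q):
    """Hypothetical nonlinear objective: a quadratic x^T Q x."""
    return float(x @ Q @ x)

def surco_zero(Q, k, steps=200, lr=0.1, lam=0.1, seed=0):
    """Sketch of a SurCo-zero-style loop: learn a linear surrogate cost c
    so the linear solver's solution is good for the nonlinear objective.
    Differentiation through the solver is approximated by re-solving with
    a cost perturbed along df/dx and differencing the two solutions."""
    rng = np.random.default_rng(seed)
    c = rng.normal(size=Q.shape[0])        # surrogate cost, learned below
    best_x, best_cost, history = None, np.inf, []
    for _ in range(steps):
        x = linear_solver(c, k)
        f = nonlinear_cost(x, Q)
        history.append(f)
        if f < best_cost:                  # keep the best feasible solution seen
            best_x, best_cost = x, f
        g = (Q + Q.T) @ x                  # df/dx at the current solution
        x_pert = linear_solver(c + lam * g, k)
        c -= lr * (x_pert - x) / lam       # surrogate gradient step on c
    return best_x, history
```

Because every iterate comes from the combinatorial solver, every candidate is feasible by construction; the loop only searches over linear costs, which is what makes gradient-based training compatible with the hard constraints.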
