
Dual Interior Point Optimization Learning

Michael Klamkin
Mathieu Tanneau
Pascal Van Hentenryck
Abstract

In many practical applications of constrained optimization, problem scale and solving-time limits make traditional optimization solvers prohibitively slow. Thus, the research question of how to design optimization proxies -- machine learning models that produce high-quality solutions -- has recently received significant attention. Orthogonal to this research thread, which focuses on learning primal solutions, this paper studies how to learn dual-feasible solutions that complement primal approaches and provide quality guarantees. The paper makes two distinct contributions. First, to train dual linear optimization proxies, the paper proposes a smoothed self-supervised loss function that augments the objective function with a dual penalty term. Second, the paper proposes a novel dual completion strategy that guarantees dual feasibility by solving a convex optimization problem. Moreover, the paper derives closed-form solutions to this completion optimization for several classes of dual penalties, eliminating the need for computationally heavy implicit layers. Numerical results are presented on large linear optimization problems and demonstrate the effectiveness of the proposed approach. The proposed dual completion outperforms methods for learning optimization proxies that do not exploit the structure of the dual problem. Compared to commercial optimization solvers, the learned dual proxies achieve optimality gaps below 1% and speedups of several orders of magnitude.
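The abstract does not spell out the formulations, but the two ingredients are easy to illustrate for linear programs. Below is a minimal NumPy sketch, assuming an LP of the form min c'x s.t. Ax = b, l <= x <= u with finite bounds for the completion step, and the inequality-form dual max b'y s.t. A'y <= c for the loss. The function names, the quadratic penalty, and the weight lam are illustrative choices, not the paper's exact formulation.

import numpy as np

def dual_completion(A, b, c, l, u, y):
    # For the LP  min c'x  s.t.  Ax = b,  l <= x <= u,  the dual is
    #   max b'y + l'z_l - u'z_u   s.t.  A'y + z_l - z_u = c,  z_l, z_u >= 0.
    # Given any prediction y, splitting the reduced costs r = c - A'y into
    # positive and negative parts yields a dual-feasible point in closed form
    # (finite bounds make this feasible for every y), and hence a valid
    # lower bound on the LP optimum by weak duality.
    r = c - A.T @ y
    z_l = np.maximum(r, 0.0)   # multipliers of the lower bounds x >= l
    z_u = np.maximum(-r, 0.0)  # multipliers of the upper bounds x <= u
    dual_bound = b @ y + l @ z_l - u @ z_u
    return z_l, z_u, dual_bound

def smoothed_dual_loss(A, b, c, y, lam=10.0):
    # Self-supervised loss for a dual proxy on  max b'y  s.t.  A'y <= c:
    # the negated dual objective plus a smooth penalty on dual infeasibility.
    # The quadratic penalty is one common smooth choice of dual penalty.
    violation = np.maximum(A.T @ y - c, 0.0)
    return -(b @ y) + lam * np.sum(violation ** 2)

Because the completion is a closed-form expression (a clip and a few matrix-vector products), it can sit directly on top of a neural network's output at inference time, which is consistent with the abstract's point about avoiding computationally heavy implicit layers.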

@article{klamkin2025_2402.02596,
  title={Dual Interior Point Optimization Learning},
  author={Michael Klamkin and Mathieu Tanneau and Pascal Van Hentenryck},
  journal={arXiv preprint arXiv:2402.02596},
  year={2025}
}