arXiv:2309.00203

Generalization Bound and Learning Methods for Data-Driven Projections in Linear Programming

1 September 2023
Shinsaku Sakaue
Taihei Oki
Abstract

How to solve high-dimensional linear programs (LPs) efficiently is a fundamental question. Recently, there has been a surge of interest in reducing LP sizes using random projections, which can accelerate solving LPs independently of improving LP solvers. This paper explores a new direction of data-driven projections, which use projection matrices learned from data instead of random projection matrices. Given training data of $n$-dimensional LPs, we learn an $n \times k$ projection matrix with $n > k$. When addressing a future LP instance, we reduce its dimensionality from $n$ to $k$ via the learned projection matrix, solve the resulting LP to obtain a $k$-dimensional solution, and apply the learned matrix to it to recover an $n$-dimensional solution. On the theoretical side, a natural question is: how much data is sufficient to ensure the quality of recovered solutions? We address this question based on the framework of data-driven algorithm design, which connects the amount of data sufficient for establishing generalization bounds to the pseudo-dimension of performance metrics. We obtain an $\tilde{\mathrm{O}}(nk^2)$ upper bound on the pseudo-dimension, where $\tilde{\mathrm{O}}$ compresses logarithmic factors. We also provide an $\Omega(nk)$ lower bound, implying our result is tight up to an $\tilde{\mathrm{O}}(k)$ factor. On the practical side, we explore two simple methods for learning projection matrices: PCA- and gradient-based methods. While the former is relatively efficient, the latter can sometimes achieve better solution quality. Experiments demonstrate that learning projection matrices from data is indeed beneficial: it leads to significantly higher solution quality than the existing random projection while greatly reducing the time for solving LPs.
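The pipeline the abstract describes (learn an $n \times k$ matrix $P$ from training LPs, solve the reduced LP over $y \in \mathbb{R}^k$, recover $x = Py$) can be sketched as follows. This is an illustrative toy, not the authors' implementation: the particular LP family, the feasibility-preserving constraints $Py \ge 0$ in the reduced problem, and the PCA-via-SVD learning step are all assumptions made for the demo.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k, m = 30, 4, 10  # original dim, reduced dim, number of constraints

# Hypothetical training family: max c^T x  s.t.  A x <= b, x >= 0,
# with (A, b) shared across instances and only the cost c varying.
A = rng.random((m, n)) + 0.1  # strictly positive => bounded feasible set
b = np.ones(m)

def solve_full(c):
    # scipy's linprog minimizes, so negate c to maximize c^T x.
    return linprog(-c, A_ub=A, b_ub=b, bounds=(0, None)).x

# PCA-style learning: stack optimal solutions of training instances and
# take the top-k right singular vectors as the projection matrix P.
train = np.stack([solve_full(rng.random(n)) for _ in range(50)])
P = np.linalg.svd(train, full_matrices=False)[2][:k].T  # n x k

def solve_reduced(c):
    # Reduced LP over y: the constraints A P y <= b and P y >= 0 ensure
    # that the recovered x = P y is feasible for the original LP.
    res = linprog(-(P.T @ c),
                  A_ub=np.vstack([A @ P, -P]),
                  b_ub=np.concatenate([b, np.zeros(n)]),
                  bounds=(None, None))
    return P @ res.x  # recover an n-dimensional solution

x_rec = solve_reduced(rng.random(n))
assert np.all(A @ x_rec <= b + 1e-6) and np.all(x_rec >= -1e-6)
```

Only a $k$-variable LP is solved at test time; feasibility of the recovered solution holds by construction here, while its optimality gap depends on how well the learned subspace covers optimal solutions of the instance distribution.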
