
The Limitations of Optimization from Samples

19 December 2015
Eric Balkanski, A. Rubinstein, Yaron Singer
arXiv:1512.06238
Abstract

In this paper we consider the following question: can we optimize decisions on models learned from data and be guaranteed that we achieve desirable outcomes? We formalize this question through a novel framework called optimization from samples (OPS). In the OPS framework, we are given sampled values of a function drawn from some distribution and the objective is to optimize the function under some constraint. We show that there are classes of functions which have desirable learnability and optimizability guarantees and for which no reasonable approximation for optimization from samples is achievable. In particular, our main result shows that even for maximization of coverage functions under a cardinality constraint $k$, there exists a hypothesis class of functions that cannot be approximated within a factor of $n^{-1/4 + \epsilon}$ (for any constant $\epsilon > 0$) of the optimal solution, from samples drawn from the uniform distribution over all sets of size at most $k$. In the general case of monotone submodular functions, we show an $n^{-1/3 + \epsilon}$ lower bound and an almost matching $\tilde{\Omega}(n^{-1/3})$-optimization from samples algorithm. On the positive side, if a monotone subadditive function has bounded curvature we obtain desirable guarantees. We also show that additive and unit-demand functions can be optimized from samples to within arbitrarily good precision, and that budget additive functions can be optimized from samples to a factor of 1/2.
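
To make the OPS setting concrete, here is a minimal Python sketch of coverage maximization under a cardinality constraint $k$: an algorithm sees only (set, value) samples drawn uniformly over sets of size at most $k$, and is compared against the true optimum computed with full oracle access. This is not the paper's construction; all function names and the random instance below are illustrative assumptions.

```python
# Minimal sketch (illustrative only, not the paper's hard instance) of the
# optimization-from-samples (OPS) setting for coverage maximization under a
# cardinality constraint k.
import itertools
import random


def coverage(cover, S):
    """Coverage value: size of the union of the subsets indexed by S."""
    covered = set()
    for i in S:
        covered |= cover[i]
    return len(covered)


def sample_ops_data(cover, n, k, num_samples, rng):
    """Draw (S, f(S)) pairs with S uniform over sets of size at most k,
    mirroring the sampling model described in the abstract."""
    data = []
    for _ in range(num_samples):
        size = rng.randint(0, k)
        S = frozenset(rng.sample(range(n), size))
        data.append((S, coverage(cover, S)))
    return data


def best_sampled_set(samples, k):
    """Naive sample-based baseline: return the highest-valued feasible
    sampled set (no access to the function beyond the samples)."""
    feasible = [(S, v) for S, v in samples if len(S) <= k]
    return max(feasible, key=lambda sv: sv[1]) if feasible else (frozenset(), 0)


def brute_force_opt(cover, n, k):
    """True optimum with full value-oracle access, for comparison only."""
    return max(coverage(cover, S) for S in itertools.combinations(range(n), k))


if __name__ == "__main__":
    rng = random.Random(0)
    n, k, m = 12, 3, 200          # ground set size, cardinality bound, #samples
    universe = range(30)
    # Each of the n elements covers a random subset of the universe.
    cover = [set(rng.sample(universe, rng.randint(1, 8))) for _ in range(n)]
    samples = sample_ops_data(cover, n, k, m, rng)
    S_hat, v_hat = best_sampled_set(samples, k)
    print("best sampled value:", v_hat, "| true optimum:", brute_force_opt(cover, n, k))
```

On the paper's hard instances, the lower bound applies to any algorithm that only observes such samples, not just the naive baseline above; the sketch only illustrates the information available to an OPS algorithm.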
