How much data is sufficient to learn high-performing algorithms?

Algorithms -- for example, algorithms for scientific analysis -- typically have tunable parameters that significantly influence computational efficiency and solution quality. If a parameter setting leads to strong performance on average over a set of training instances, then ideally that setting will also perform well on previously unseen instances. However, if the training set is too small, average performance on it will not generalize to future performance. This raises the question: how large should the training set be? We answer this question for any algorithm satisfying an easy-to-describe, ubiquitous property: its performance is a piecewise-structured function of its parameters. We provide the first unified sample complexity framework for algorithm parameter configuration; prior research proceeded via case-by-case analyses. We present example applications to diverse domains including biology, political science, economics, integer programming, and clustering.
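The core setting -- tuning a parameter on a training sample and hoping its average performance carries over to unseen instances -- can be made concrete with a small simulation. Below is a minimal, hypothetical sketch (not the paper's method): each toy instance's cost is a piecewise-constant function of a single parameter, a parameter is chosen to minimize average cost on a training sample, and that choice is then evaluated on held-out instances for growing training-set sizes. All names (`cost`, `random_instance`, `best_param_on`) and the toy cost model are illustrative assumptions.

```python
"""Toy illustration of train-vs-test performance for a parameterized
'algorithm' whose per-instance cost is piecewise constant in its parameter."""
import numpy as np

rng = np.random.default_rng(0)

def random_instance(n_pieces=5):
    # A toy instance: sorted breakpoints in (0, 1) and a cost value per piece.
    breakpoints = np.sort(rng.uniform(0, 1, n_pieces - 1))
    values = rng.uniform(0, 1, n_pieces)
    return breakpoints, values

def cost(param, instance):
    # Piecewise-constant cost: look up which piece the parameter falls into.
    breakpoints, values = instance
    idx = np.searchsorted(breakpoints, param, side="right")
    return values[idx]

def best_param_on(instances, grid):
    # Pick the grid point with the lowest average cost over the instances.
    avg = [np.mean([cost(p, inst) for inst in instances]) for p in grid]
    return grid[int(np.argmin(avg))]

grid = np.linspace(0.0, 1.0, 201)
test = [random_instance() for _ in range(2000)]
for n_train in (5, 50, 500):
    train = [random_instance() for _ in range(n_train)]
    p = best_param_on(train, grid)
    train_cost = np.mean([cost(p, inst) for inst in train])
    test_cost = np.mean([cost(p, inst) for inst in test])
    print(f"n_train={n_train:4d}  train avg={train_cost:.3f}  test avg={test_cost:.3f}")
```

With only a handful of training instances, the tuned parameter tends to look better on the training sample than on held-out instances; as the training set grows, the two averages converge, which is the gap the paper's sample complexity bounds quantify for piecewise-structured performance functions.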