Optimal Sampling Gaps for Adaptive Submodular Maximization
Running machine learning algorithms on large and rapidly growing volumes of data is often computationally expensive. One common trick to reduce the size of a data set, and thus the computational cost of machine learning algorithms, is \emph{probability sampling}: it creates a sampled data set by including each data point from the original data set with a known probability. Although the benefit of running machine learning algorithms on the reduced data set is obvious, one major concern is that the performance of the solution obtained from samples might be much worse than that of the optimal solution computed on the full data set. In this paper, we examine the performance loss caused by probability sampling in the context of adaptive submodular maximization. We consider a simple probability sampling method which selects each data point independently with probability $r$. If we set the sampling rate $r=1$, our problem reduces to finding a solution based on the original full data set. We define the sampling gap as the largest ratio between the optimal solution obtained from the full data set and the optimal solution obtained from the samples, over independence systems; it captures the performance loss of the optimal solution caused by probability sampling. Our main contribution is to show that if the utility function is policywise submodular, then for a given sampling rate $r$, the sampling gap is both upper bounded and lower bounded by $1/r$. One immediate implication of our result is that if we can find an $\alpha$-approximation solution based on a sampled data set (which is sampled at sampling rate $r$), then this solution achieves an approximation ratio of $\alpha r$ against the optimal solution when using the full data set.
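The probability sampling step described above can be sketched as follows; this is a minimal illustration, not code from the paper, and the function name and seeding parameter are our own choices. Each data point is kept independently with probability $r$ (Bernoulli sampling), so setting $r=1$ recovers the full data set:

```python
import random

def probability_sample(data, r, seed=None):
    """Return a sampled data set that includes each point of `data`
    independently with probability r (0 <= r <= 1).
    With r = 1 every point is kept, recovering the full data set."""
    rng = random.Random(seed)
    # rng.random() is uniform on [0, 1), so the test below keeps
    # each point with probability exactly r.
    return [x for x in data if rng.random() < r]
```

A downstream algorithm would then be run on the (smaller) returned list; the paper's result bounds how much the optimal value attainable on such a sample can fall short of the optimum on the full data set.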