Worst-Case Analysis for Randomly Collected Data
We introduce a framework for statistical estimation that leverages knowledge of how samples are collected but makes no distributional assumptions on the data values. Specifically, we consider a population of $n$ elements with corresponding data values $x_1, \ldots, x_n$. We observe the values for a "sample" set $S \subset [n]$ and wish to estimate some statistic of the values for a "target" set $T \subset [n]$, where $T$ could be the entire population. Crucially, we assume that the sets $S$ and $T$ are drawn according to some known distribution $\mathcal{D}$ over pairs of subsets of $[n]$. A given estimation algorithm is evaluated based on its "worst-case, expected error," where the expectation is with respect to the distribution $\mathcal{D}$ from which the sample set $S$ and target set $T$ are drawn, and the worst case is with respect to the data values $x_1, \ldots, x_n$. Within this framework, we give an efficient algorithm for estimating the target mean that returns a weighted combination of the sample values, where the weights are functions of the distribution $\mathcal{D}$ and the sample and target sets $S$ and $T$, and we show that the worst-case expected error achieved by this algorithm is at most a multiplicative $\pi/2$ factor worse than that of the optimal such algorithm. The algorithm and proof leverage a surprising connection to the Grothendieck problem. This framework, which makes no distributional assumptions on the data values but instead relies on knowledge of the data collection process, is a significant departure from the typical statistical estimation framework, and it introduces a uniform algorithmic analysis for the many natural settings in which membership in a sample may be correlated with the data values: when sampling probabilities vary as in "importance sampling," when individuals are recruited into a sample via a social network as in "snowball sampling," or when samples have chronological structure as in "selective prediction."
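To make the evaluation criterion concrete, the worst-case expected error of an estimation algorithm $A$ can be written as a max-expectation quantity. The display below is a sketch under assumptions the abstract does not pin down, namely squared error and data values normalized to $x \in [-1,1]^n$; $\mathcal{D}$ denotes the known distribution over pairs $(S, T)$:
\[
\mathrm{Err}(A) \;=\; \max_{x \in [-1,1]^n} \; \mathbb{E}_{(S,T) \sim \mathcal{D}}\!\left[ \left( A\bigl(S, T, (x_i)_{i \in S}\bigr) \;-\; \frac{1}{|T|} \sum_{i \in T} x_i \right)^{2} \right]
\]
For the mean-estimation algorithm described above, the estimator takes the linear form $A\bigl(S, T, (x_i)_{i \in S}\bigr) = \sum_{i \in S} w_i(\mathcal{D}, S, T)\, x_i$ for some weights $w_i$ determined by $\mathcal{D}$, $S$, and $T$.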
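As an illustration of how a weighted estimator is scored under this criterion, here is a minimal Python sketch on a toy instance. The population size, the distribution over $(S, T)$ pairs, and the uniform weighting are all hypothetical choices for demonstration, not the paper's algorithm. The sketch brute-forces the worst case over the vertices of $[-1,1]^n$, which suffices because the expected squared error is a convex quadratic in $x$ and is therefore maximized at a vertex of the box:

```python
import itertools

n = 4  # toy population size (hypothetical)

# A hypothetical known distribution D over (sample, target) pairs,
# given as (probability, S, T) triples; here every target is the
# whole population and the samples are small, overlapping subsets.
D = [
    (0.5, (0, 1), tuple(range(n))),
    (0.3, (1, 2), tuple(range(n))),
    (0.2, (2, 3), tuple(range(n))),
]

def weighted_estimate(weights, S, x):
    """Weighted combination of the observed sample values."""
    return sum(w * x[i] for w, i in zip(weights, S))

def worst_case_expected_error(weight_fn):
    """max over x in [-1,1]^n of E_{(S,T)~D}[(estimate - target mean)^2].

    The expected squared error is a convex quadratic in x, so its
    maximum over the box [-1,1]^n is attained at a vertex; enumerate
    all 2^n sign vectors.
    """
    worst = 0.0
    for x in itertools.product([-1.0, 1.0], repeat=n):
        err = sum(
            p * (weighted_estimate(weight_fn(S, T), S, x)
                 - sum(x[i] for i in T) / len(T)) ** 2
            for p, S, T in D
        )
        worst = max(worst, err)
    return worst

# Naive baseline weighting: the plain sample mean (uniform weights on S).
uniform_weights = lambda S, T: [1.0 / len(S)] * len(S)
print(worst_case_expected_error(uniform_weights))  # worst-case expected error
```

The paper's algorithm instead chooses the weights as a function of $\mathcal{D}$, $S$, and $T$; the uniform weighting above is only the naive sample-mean baseline against which such a choice can be compared.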