Sample-Adaptivity Tradeoff in On-Demand Sampling

We study the tradeoff between sample complexity and round complexity in on-demand sampling, where the learning algorithm adaptively samples from $k$ distributions over a limited number of rounds. In the realizable setting of Multi-Distribution Learning (MDL), we show that the optimal sample complexity of an $r$-round algorithm scales approximately as $k^{\Theta(1/r)}$, up to factors depending on the VC dimension $d$ and the accuracy parameter $\varepsilon$. For the general agnostic case, we present an algorithm that achieves the near-optimal sample complexity of $\widetilde{O}\big((d + k)\varepsilon^{-2}\big)$ within polynomially many rounds. Of independent interest, we introduce a new framework, Optimization via On-Demand Sampling (OODS), which abstracts the sample-adaptivity tradeoff and captures most existing MDL algorithms. We establish nearly tight bounds on the round complexity in the OODS setting. The upper bounds directly yield the polynomial-round algorithm for agnostic MDL, while the lower bounds imply that achieving sub-polynomial round complexity would require fundamentally new techniques that bypass the inherent hardness of OODS.
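To make the interaction model concrete, below is a minimal Python sketch of round-limited on-demand sampling. All names (`RoundLimitedSampler`, `query`, the toy Gaussian distributions, and the spread-based reallocation heuristic) are illustrative assumptions, not the paper's algorithm or API: the point is only that each round the learner commits to a batch of per-distribution sample requests, receives the whole batch at once, and may adapt the next batch to what it has seen.

```python
# Hypothetical sketch of the r-round on-demand sampling protocol.
# Not the paper's method: names and the allocation rule are assumed.
import random
from typing import Callable, List, Sequence


class RoundLimitedSampler:
    """On-demand access to k distributions, limited to r adaptive rounds."""

    def __init__(self, distributions: Sequence[Callable[[], float]], rounds: int):
        self.distributions = distributions  # one sampler per distribution
        self.rounds_left = rounds           # the round budget r
        self.samples_drawn = 0              # total sample complexity spent

    def query(self, counts: Sequence[int]) -> List[List[float]]:
        """Spend one round: draw counts[i] fresh samples from distribution i."""
        if self.rounds_left == 0:
            raise RuntimeError("round budget exhausted")
        self.rounds_left -= 1
        batch = [[self.distributions[i]() for _ in range(c)]
                 for i, c in enumerate(counts)]
        self.samples_drawn += sum(counts)
        return batch


# Toy usage: k Gaussian "distributions". The learner starts uniform and, in
# later rounds, reallocates its per-round budget toward the distributions
# with the largest empirical spread. This kind of adaptive reallocation is
# what the round budget limits.
k, r, budget_per_round = 8, 3, 40
dists = [lambda s=i: random.gauss(s, 1 + s) for i in range(k)]
sampler = RoundLimitedSampler(dists, rounds=r)

counts = [budget_per_round // k] * k           # round 1: uniform allocation
for _ in range(r):
    batch = sampler.query(counts)
    spreads = [max(xs) - min(xs) if xs else 0.0 for xs in batch]
    top = sorted(range(k), key=lambda i: -spreads[i])[: k // 2]
    counts = [2 * budget_per_round // k if i in top else 0 for i in range(k)]

print(f"samples used: {sampler.samples_drawn}, rounds used: {r}")
```

In this reading, one `query` call is one round of adaptivity; a fully adaptive algorithm would draw one sample per round, while a one-round algorithm must fix its entire sampling plan in advance, which is the tradeoff the abstract quantifies.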