
Sample-Adaptivity Tradeoff in On-Demand Sampling

Main: 12 pages
Bibliography: 5 pages
Tables: 2
Appendix: 33 pages
Abstract

We study the tradeoff between sample complexity and round complexity in on-demand sampling, where the learning algorithm adaptively samples from $k$ distributions over a limited number of rounds. In the realizable setting of Multi-Distribution Learning (MDL), we show that the optimal sample complexity of an $r$-round algorithm scales approximately as $dk^{\Theta(1/r)}/\epsilon$. For the general agnostic case, we present an algorithm that achieves near-optimal sample complexity of $\widetilde O((d + k)/\epsilon^2)$ within $\widetilde O(\sqrt{k})$ rounds. Of independent interest, we introduce a new framework, Optimization via On-Demand Sampling (OODS), which abstracts the sample-adaptivity tradeoff and captures most existing MDL algorithms. We establish nearly tight bounds on the round complexity in the OODS setting. The upper bounds directly yield the $\widetilde O(\sqrt{k})$-round algorithm for agnostic MDL, while the lower bounds imply that achieving sub-polynomial round complexity would require fundamentally new techniques that bypass the inherent hardness of OODS.
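To see how the realizable bound interpolates between the non-adaptive and fully adaptive regimes, consider its two extremes (a back-of-the-envelope reading of the stated formula, not an additional claim of the paper):

\[
r = 1:\quad \frac{d\,k^{\Theta(1)}}{\epsilon},
\qquad\qquad
r = \Theta(\log k):\quad \frac{d\,k^{\Theta(1/\log k)}}{\epsilon} = \frac{O(d)}{\epsilon},
\]

since $k^{1/\log_2 k} = 2$. That is, a single-round algorithm pays a polynomial-in-$k$ overhead, while logarithmically many rounds of adaptivity already suffice to reduce the $k$-dependence to a constant factor.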
