Nearly-Linear Time Private Hypothesis Selection with the Optimal Approximation Factor

Estimating the density of a distribution from its samples is a fundamental problem in statistics. Hypothesis selection addresses the setting where, in addition to a sample set, we are given $n$ candidate distributions -- referred to as hypotheses -- and the goal is to determine which one best describes the underlying data distribution. This problem is known to be solvable very efficiently, requiring roughly $\log n$ samples and running in $\tilde{O}(n)$ time. The quality of the output is measured via the total variation distance to the unknown distribution, and the approximation factor of the algorithm determines how large this distance is compared to the optimal distance achieved by the best candidate hypothesis. It is known that $\alpha = 3$ is the optimal approximation factor for this problem. We study hypothesis selection under the constraint of differential privacy. We propose a differentially private algorithm in the central model that runs in nearly-linear time with respect to the number of hypotheses, achieves the optimal approximation factor, and incurs only a modest increase in sample complexity, which remains polylogarithmic in $n$. This resolves an open question posed by [Bun, Kamath, Steinke, Wu, NeurIPS 2019]. Prior to our work, existing upper bounds required quadratic time.
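To make the guarantee concrete, here is the standard formalization of hypothesis selection; the notation below is chosen for illustration, since the abstract does not fix any. Given i.i.d. samples from an unknown distribution $P$ and hypotheses $H_1, \dots, H_n$, an algorithm with approximation factor $\alpha$ and accuracy parameter $\varepsilon$ outputs $\hat{H} \in \{H_1, \dots, H_n\}$ satisfying

$$
d_{\mathrm{TV}}\bigl(\hat{H}, P\bigr) \;\le\; \alpha \cdot \mathrm{OPT} + \varepsilon,
\qquad \text{where } \mathrm{OPT} = \min_{1 \le i \le n} d_{\mathrm{TV}}(H_i, P).
$$

The privacy constraint is the usual central-model notion of $(\varepsilon_{\mathrm{priv}}, \delta)$-differential privacy: for any two sample sets $S, S'$ differing in a single element and any event $T$,

$$
\Pr\bigl[\mathcal{A}(S) \in T\bigr] \;\le\; e^{\varepsilon_{\mathrm{priv}}} \Pr\bigl[\mathcal{A}(S') \in T\bigr] + \delta .
$$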
@article{aliakbarpour2025_2506.01162,
  title   = {Nearly-Linear Time Private Hypothesis Selection with the Optimal Approximation Factor},
  author  = {Maryam Aliakbarpour and Zhan Shi and Ria Stevens and Vincent X. Wang},
  journal = {arXiv preprint arXiv:2506.01162},
  year    = {2025}
}