Nearly-Linear Time Private Hypothesis Selection with the Optimal Approximation Factor

Main: 29 pages
Bibliography: 4 pages
1 table
Abstract

Estimating the density of a distribution from its samples is a fundamental problem in statistics. Hypothesis selection addresses the setting where, in addition to a sample set, we are given $n$ candidate distributions -- referred to as hypotheses -- and the goal is to determine which one best describes the underlying data distribution. This problem is known to be solvable very efficiently, requiring roughly $O(\log n)$ samples and running in $\tilde{O}(n)$ time. The quality of the output is measured via the total variation distance to the unknown distribution, and the approximation factor of the algorithm determines how large this distance is compared to the optimal distance achieved by the best candidate hypothesis. It is known that $\alpha = 3$ is the optimal approximation factor for this problem. We study hypothesis selection under the constraint of differential privacy. We propose a differentially private algorithm in the central model that runs in nearly-linear time with respect to the number of hypotheses, achieves the optimal approximation factor, and incurs only a modest increase in sample complexity, which remains polylogarithmic in $n$. This resolves an open question posed by [Bun, Kamath, Steinke, Wu, NeurIPS 2019]. Prior to our work, existing upper bounds required quadratic time.
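For intuition, in the usual formulation an algorithm with approximation factor $\alpha$ outputs a hypothesis $\hat{H}$ satisfying $d_{\mathrm{TV}}(\hat{H}, P) \le \alpha \cdot \min_i d_{\mathrm{TV}}(H_i, P) + \eta$, where $P$ is the unknown distribution and $\eta$ is a small accuracy slack. The sketch below is a minimal, non-private Scheffé tournament over discrete hypotheses, the classical quadratic-time baseline the abstract contrasts with; it is not the paper's private, nearly-linear-time algorithm, and all function names and the finite-domain setup are illustrative assumptions.

```python
import numpy as np

def scheffe_test(h1, h2, samples):
    """Classical Scheffé test between two discrete hypotheses h1, h2
    (probability vectors over {0, ..., k-1}), given integer-valued
    samples from the unknown distribution P. Returns 0 if h1 wins,
    1 if h2 wins."""
    S = h1 > h2                      # Scheffé set S = {x : h1(x) > h2(x)}
    emp = S[samples].mean()          # empirical estimate of P(S)
    # The hypothesis whose predicted mass on S is closer to the
    # empirical mass wins the pairwise comparison.
    return 0 if abs(h1[S].sum() - emp) <= abs(h2[S].sum() - emp) else 1

def scheffe_tournament(hypotheses, samples):
    """Naive round-robin tournament over n hypotheses: Theta(n^2)
    pairwise tests. The hypothesis with the most wins achieves the
    factor-3 guarantee in total variation distance (non-private)."""
    n = len(hypotheses)
    wins = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            winner = (i, j)[scheffe_test(hypotheses[i], hypotheses[j], samples)]
            wins[winner] += 1
    return int(np.argmax(wins))

# Toy usage: three candidates over a domain of size 4; the true
# distribution is closest to H[0], so index 0 should be selected.
rng = np.random.default_rng(0)
H = [np.array([0.7, 0.1, 0.1, 0.1]),
     np.array([0.25, 0.25, 0.25, 0.25]),
     np.array([0.1, 0.1, 0.1, 0.7])]
true_p = np.array([0.65, 0.15, 0.1, 0.1])
samples = rng.choice(4, size=500, p=true_p)
print(scheffe_tournament(H, samples))  # expected: 0
```

The round-robin structure makes the quadratic running time explicit; the results discussed here replace it with a nearly-linear-time selection rule while also satisfying differential privacy.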

@article{aliakbarpour2025_2506.01162,
  title={Nearly-Linear Time Private Hypothesis Selection with the Optimal Approximation Factor},
  author={Maryam Aliakbarpour and Zhan Shi and Ria Stevens and Vincent X. Wang},
  journal={arXiv preprint arXiv:2506.01162},
  year={2025}
}