Nearly Minimax Discrete Distribution Estimation in Kullback-Leibler Divergence with High Probability

Main: 13 Pages
Bibliography: 4 Pages
Appendix: 21 Pages
Abstract

We consider the fundamental problem of estimating a discrete distribution on a domain of size $K$ with high probability in Kullback-Leibler divergence. We provide upper and lower bounds on the minimax estimation rate, which show that the optimal rate at error probability $\delta$ and sample size $n$ is between $\big(K + \ln(K)\ln(1/\delta)\big)/n$ and $\big(K\ln\ln(K) + \ln(K)\ln(1/\delta)\big)/n$; this pins down the rate up to the doubly logarithmic factor $\ln\ln K$ that multiplies $K$. Our upper bound uses techniques from online learning to construct a novel estimator via online-to-batch conversion. Perhaps surprisingly, the tail behavior of the minimax rate is worse than for the squared total variation and squared Hellinger distance, for which it is $\big(K + \ln(1/\delta)\big)/n$, i.e. without the $\ln K$ multiplying $\ln(1/\delta)$. As a consequence, we cannot obtain a fully tight lower bound from the usual reduction to these smaller distances. Moreover, we show that this lower bound cannot be achieved by the standard lower bound approach based on a reduction to hypothesis testing, and instead we introduce a new reduction to what we call weak hypothesis testing. We investigate the source of the gap with other divergences further in refined results, which show that the total variation rate is achievable for Kullback-Leibler divergence after all (in fact, by the maximum likelihood estimator) if we rule out outcome probabilities smaller than $O(\ln(K/\delta)/n)$, which is a vanishing set as $n$ increases for fixed $K$ and $\delta$. This explains why minimax Kullback-Leibler estimation is more difficult than asymptotic estimation.
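
To make the setting concrete, here is a minimal sketch (assuming NumPy) of estimating a discrete distribution and measuring its error in Kullback-Leibler divergence. The add-constant smoothing shown is a standard baseline chosen for illustration, not the paper's online-to-batch estimator, and the smoothing constant `alpha = 0.5` is an arbitrary choice. It also illustrates why the unsmoothed maximum likelihood estimator is problematic in KL: any outcome unseen in the sample gets probability zero, which makes the divergence infinite.

```python
import numpy as np

def smoothed_estimate(samples, K, alpha=0.5):
    """Add-constant estimate of a distribution on {0, ..., K-1}.
    A generic smoothing baseline for illustration; alpha > 0 ensures
    every outcome gets positive probability."""
    counts = np.bincount(samples, minlength=K)
    return (counts + alpha) / (len(samples) + alpha * K)

def kl_divergence(p, q):
    """KL(p || q). Finite here because smoothing keeps q > 0 everywhere."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Hypothetical simulation: draw a random distribution on K outcomes,
# sample n observations from it, and evaluate the estimate in KL.
rng = np.random.default_rng(0)
K, n = 100, 500
p = rng.dirichlet(np.ones(K))
samples = rng.choice(K, size=n, p=p)
q_hat = smoothed_estimate(samples, K)
print(f"KL(p || q_hat) = {kl_divergence(p, q_hat):.4f}")
```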
