
An optimal (ε,δ)-approximation scheme for the mean of random variables with bounded relative variance

Abstract

Randomized approximation algorithms for many #P-complete problems (such as the partition function of a Gibbs distribution, the volume of a convex body, the permanent of a $\{0,1\}$-matrix, and many others) reduce to creating random variables $X_1, X_2, \ldots$ with finite mean $\mu$ and standard deviation $\sigma$ such that $\mu$ is the solution for the problem input and the relative standard deviation satisfies $|\sigma/\mu| \leq c$ for a known $c$. Under these circumstances, it is known that the number of samples from the $\{X_i\}$ needed to form an $(\epsilon,\delta)$-approximation $\hat\mu$ satisfying $\mathbb{P}(|\hat\mu - \mu| > \epsilon\mu) \leq \delta$ is at least $(2-o(1))\epsilon^{-2} c^2 \ln(1/\delta)$. We present here an easy-to-implement $(\epsilon,\delta)$-approximation $\hat\mu$ that uses $(2+o(1)) c^2 \epsilon^{-2} \ln(1/\delta)$ samples. This achieves the same optimal running time as other estimators, but without the need for extra conditions such as bounds on third or fourth moments.
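To make the interface concrete, below is a minimal Python sketch of the setting the abstract describes: a sampler for the $X_i$, a known bound $c$ on $|\sigma/\mu|$, and a sample budget of roughly $2 c^2 \epsilon^{-2} \ln(1/\delta)$ draws. The estimator shown is a standard median-of-means combination, used here purely for illustration; it is not the paper's scheme, and it does not attain the optimal leading constant. The function names (`sample_budget`, `median_of_means`) are hypothetical.

```python
import math
import random


def sample_budget(c, eps, delta):
    """Leading-order sample count from the abstract: (2 + o(1)) * c^2 * eps^-2 * ln(1/delta).
    Only the leading term is computed here; the o(1) correction is omitted."""
    return math.ceil(2.0 * c**2 * eps**-2 * math.log(1.0 / delta))


def median_of_means(sampler, c, eps, delta):
    """Illustrative estimator (NOT the paper's optimal scheme): split the sample
    budget into groups, average each group, and return the median of the group means."""
    n = sample_budget(c, eps, delta)
    k = max(1, math.ceil(math.log(1.0 / delta)))  # number of groups
    m = max(1, n // k)                            # samples per group
    means = [sum(sampler() for _ in range(m)) / m for _ in range(k)]
    means.sort()
    return means[len(means) // 2]


if __name__ == "__main__":
    # Example: exponential samples with mean 2, for which sigma/mu = 1, so c = 1.
    est = median_of_means(lambda: random.expovariate(0.5), c=1.0, eps=0.1, delta=0.05)
    print(est)
```

The point of the sketch is the dependence of the sample budget on $c$, $\epsilon$, and $\delta$; the paper's contribution is an estimator that matches the lower bound's constant of 2 using only the bound on the relative standard deviation.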
