Product distribution learning with imperfect advice

Main: 8 pages; Bibliography: 3 pages
Abstract

Given i.i.d. samples from an unknown distribution $P$, the goal of distribution learning is to recover the parameters of a distribution that is close to $P$. When $P$ belongs to the class of product distributions on the Boolean hypercube $\{0,1\}^d$, it is known that $\Omega(d/\varepsilon^2)$ samples are necessary to learn $P$ within total variation (TV) distance $\varepsilon$. We revisit this problem when the learner is also given as advice the parameters of a product distribution $Q$. We show that there is an efficient algorithm to learn $P$ within TV distance $\varepsilon$ that has sample complexity $\tilde{O}(d^{1-\eta}/\varepsilon^2)$, if $\|\mathbf{p} - \mathbf{q}\|_1 < \varepsilon d^{0.5 - \Omega(\eta)}$. Here, $\mathbf{p}$ and $\mathbf{q}$ are the mean vectors of $P$ and $Q$ respectively, and no bound on $\|\mathbf{p} - \mathbf{q}\|_1$ is known to the algorithm a priori.
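To make the baseline concrete, the sketch below shows the naive learner against which the $\Omega(d/\varepsilon^2)$ lower bound is stated: estimate each coordinate mean of a product distribution on $\{0,1\}^d$ by its empirical average. This is not the paper's advice-based algorithm; the dimension, sample size, and mean vector chosen here are arbitrary illustrative values.

```python
import random

# Naive baseline learner for a product distribution on {0,1}^d:
# estimate each coordinate's mean by its empirical average.
# (Illustrative sketch only -- NOT the advice-based algorithm
# from the paper, which improves on this when a good advice
# vector q is available.)

random.seed(0)

d = 20                                        # dimension of {0,1}^d (arbitrary)
p = [random.uniform(0.1, 0.9) for _ in range(d)]  # true (unknown) mean vector

n = 50_000                                    # number of i.i.d. samples
counts = [0] * d
for _ in range(n):
    for i in range(d):
        # coordinate i of a sample is 1 with probability p[i], independently
        if random.random() < p[i]:
            counts[i] += 1

p_hat = [c / n for c in counts]               # learned mean vector

max_err = max(abs(a - b) for a, b in zip(p_hat, p))
print(max_err)                                # max coordinate-wise error
```

Each empirical mean concentrates at rate $O(1/\sqrt{n})$, which is what drives the $\Theta(d/\varepsilon^2)$ sample complexity of this estimator for TV distance $\varepsilon$.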
