Learning transformed product distributions

Abstract

We consider the problem of learning an unknown product distribution $X$ over $\{0,1\}^n$ using samples $f(X)$ where $f$ is a \emph{known} transformation function. Each choice of a transformation function $f$ specifies a learning problem in this framework. Information-theoretic arguments show that for every transformation function $f$ the corresponding learning problem can be solved to accuracy $\epsilon$, using $\tilde{O}(n/\epsilon^2)$ examples, by a generic algorithm whose running time may be exponential in $n$. We show that this learning problem can be computationally intractable even for constant $\epsilon$ and rather simple transformation functions. Moreover, the above sample complexity bound is nearly optimal for the general problem, as we give a simple explicit linear transformation function $f(x) = w \cdot x$ with integer weights $w_i \leq n$ and prove that the corresponding learning problem requires $\Omega(n)$ samples. As our main positive result we give a highly efficient algorithm for learning a sum of independent unknown Bernoulli random variables, corresponding to the transformation function $f(x) = \sum_{i=1}^n x_i$. Our algorithm learns to $\epsilon$-accuracy in $\mathrm{poly}(n)$ time, using a surprising $\mathrm{poly}(1/\epsilon)$ number of samples that is independent of $n$. We also give an efficient algorithm that uses $\log n \cdot \mathrm{poly}(1/\epsilon)$ samples but has running time that is only $\mathrm{poly}(\log n, 1/\epsilon)$.
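To make the learning framework concrete, here is a minimal Python sketch of the sampling model for the paper's main positive result, $f(x) = \sum_{i=1}^n x_i$: the learner receives only the transformed values $f(X)$, never the underlying bits of $X$. The helper names (`sample_transformed`, `f_sum`) are illustrative, and the empirical estimator shown is a naive baseline, not the paper's algorithm; its sample complexity grows with $n$, which is exactly what the paper's $\mathrm{poly}(1/\epsilon)$ algorithm avoids.

```python
import numpy as np

def sample_transformed(p, f, m, rng):
    """Draw m samples f(X), where X is a product distribution over {0,1}^n
    with P[X_i = 1] = p[i]. The learner observes only f(X), not X."""
    X = rng.random((m, len(p))) < p        # m independent draws of X
    return np.array([f(x) for x in X])

# Transformation from the main positive result: f(x) = sum_i x_i, so f(X)
# is a sum of independent Bernoulli random variables.
f_sum = lambda x: int(x.sum())

rng = np.random.default_rng(0)
n = 100
p = rng.random(n)                          # unknown Bernoulli means

# Naive baseline (NOT the paper's algorithm): the empirical distribution of
# f(X) over its support {0, ..., n}. Learning this way to accuracy eps needs
# a number of samples growing with n, unlike the paper's poly(1/eps) bound.
samples = sample_transformed(p, f_sum, m=5000, rng=rng)
emp = np.bincount(samples, minlength=n + 1) / len(samples)

print(f"true mean {p.sum():.2f}, empirical mean {samples.mean():.2f}")
```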
