Tight bounds on the hardness of learning simple nonparametric mixtures

Abstract

We study the problem of learning nonparametric distributions in a finite mixture, and establish tight bounds on the sample complexity for learning the component distributions in such models. Namely, we are given i.i.d. samples from a pdf $f$ where $f=\sum_{i=1}^k w_i f_i$, $\sum_{i=1}^k w_i=1$, $w_i>0$, and we are interested in learning each component $f_i$. Without any assumptions on $f_i$, this problem is ill-posed. In order to identify the components $f_i$, we assume that each $f_i$ can be written as a convolution of a Gaussian and a compactly supported density $\nu_i$ with $\text{supp}(\nu_i)\cap\text{supp}(\nu_j)=\emptyset$ for $i\neq j$. Our main result shows that $(\frac{1}{\varepsilon})^{\Omega(\log\log\frac{1}{\varepsilon})}$ samples are required for estimating each $f_i$. Unlike parametric mixtures, the difficulty does not arise from the order $k$ or small weights $w_i$, and unlike nonparametric density estimation it does not arise from the curse of dimensionality, irregularity, or inhomogeneity. The proof relies on a fast rate for approximation with Gaussians, which may be of independent interest. To show that this bound is tight, we also propose an algorithm that uses $(\frac{1}{\varepsilon})^{O(\log\log\frac{1}{\varepsilon})}$ samples to estimate each $f_i$. Unlike existing approaches to learning latent variable models based on moment-matching and tensor methods, our proof instead involves a delicate analysis of an ill-conditioned linear system via orthogonal functions. Combining these bounds, we conclude that the optimal sample complexity of this problem properly lies between polynomial and exponential, which is uncommon in learning theory.
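For concreteness, the following is a minimal sketch (not taken from the paper) of how one might simulate i.i.d. samples from the model described above. It assumes, purely for illustration, that each $\nu_i$ is uniform on a short interval with pairwise-disjoint intervals and that the Gaussian has a known standard deviation; all weights, supports, and noise scales below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model parameters (illustration only):
# mixture weights w_i, disjoint compact supports for the nu_i, and Gaussian noise scale.
weights = np.array([0.5, 0.3, 0.2])                 # w_i > 0, sum to 1
supports = [(-3.0, -2.0), (0.0, 1.0), (4.0, 5.0)]   # pairwise-disjoint intervals
sigma = 1.0                                          # std. dev. of the Gaussian being convolved

def sample_mixture(n):
    """Draw n i.i.d. samples from f = sum_i w_i (N(0, sigma^2) * nu_i),
    taking each nu_i to be Uniform on its interval (an illustrative assumption)."""
    components = rng.choice(len(weights), size=n, p=weights)  # latent component labels
    lows = np.array([s[0] for s in supports])[components]
    highs = np.array([s[1] for s in supports])[components]
    x = rng.uniform(lows, highs)                 # draw from nu_i (uniform assumption)
    return x + rng.normal(0.0, sigma, size=n)    # convolving with a Gaussian = adding Gaussian noise

samples = sample_mixture(10_000)
print(samples[:5])
```

The learning problem studied in the paper is the reverse direction: given only such samples from $f$, recover each component $f_i$ to accuracy $\varepsilon$.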
