We consider the problem of learning a discrete distribution in the presence of an $\epsilon$ fraction of malicious data sources. Specifically, we consider the setting where there is some underlying distribution, $p$, and each data source provides a batch of $k$ samples, with the guarantee that at least a $(1-\epsilon)$ fraction of the sources draw their samples from a distribution with total variation distance at most $\eta$ from $p$. We make no assumptions on the data provided by the remaining $\epsilon$ fraction of sources--this data can even be chosen as an adversarial function of the $(1-\epsilon)$ fraction of "good" batches. We provide two algorithms: one with runtime exponential in the support size, $n$, but polynomial in $k$, $1/\epsilon$, and $1/\eta$, and that takes $O((n+k)/\epsilon^2)$ batches and recovers $p$ to error $O(\eta + \epsilon/\sqrt{k})$. This recovery accuracy is information-theoretically optimal, to constant factors, even given an infinite number of data sources. Our second algorithm applies to the $\epsilon \ge 1/k$ setting and also achieves an $O(\eta + \epsilon/\sqrt{k})$ recovery guarantee, though it runs in $n^{O(k)}$ time. This second algorithm, which approximates a certain tensor via a rank-1 tensor minimizing $\ell_1$ distance, is surprising in light of the hardness of many low-rank tensor approximation problems, and may be of independent interest.
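To make the tensor viewpoint concrete, here is a minimal, self-contained Python sketch of the setting, not the paper's algorithm: it forms the empirical $k$-th order tensor from the batches (which, for good batches, concentrates around the rank-1 tensor $p^{\otimes k}$) and then brute-forces a distribution $q$ on a simplex grid whose rank-1 tensor $q^{\otimes k}$ minimizes $\ell_1$ distance to it. The helper names (`empirical_tensor`, `rank1_l1_fit`), the grid resolution, and the exhaustive grid search are all assumptions of this sketch and are feasible only for tiny $n$ and $k$; the point of the paper's second algorithm is to perform this $\ell_1$ minimization within the stated time bound.

```python
# Illustrative sketch of the untrusted-batches model and an l1 rank-1 fit.
# NOT the paper's algorithm: the grid brute-force below is for tiny n, k only.
import itertools
import numpy as np

def empirical_tensor(batches, n, k):
    """Average over batches of the one-hot tensors e_{x1} (x) ... (x) e_{xk}."""
    T = np.zeros((n,) * k)
    for batch in batches:
        T[tuple(batch)] += 1.0  # outer product of one-hots has a single 1
    return T / len(batches)

def rank1_l1_fit(T, n, k, grid=11):
    """Brute-force a distribution q on a simplex grid minimizing |T - q^{(x)k}|_1."""
    best_q, best_err = None, np.inf
    for counts in itertools.product(range(grid), repeat=n):
        if sum(counts) != grid - 1:  # keep only grid points on the simplex
            continue
        q = np.array(counts, dtype=float) / (grid - 1)
        R = q.copy()
        for _ in range(k - 1):
            R = np.multiply.outer(R, q)  # build q^{(x)k} by repeated outer products
        err = np.abs(T - R).sum()
        if err < best_err:
            best_q, best_err = q, err
    return best_q, best_err

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k, m, eps = 3, 3, 2000, 0.1          # support size, batch size, batches, corruption
    p = np.array([0.5, 0.3, 0.2])            # underlying distribution (eta = 0 here)
    good = [rng.choice(n, size=k, p=p) for _ in range(int((1 - eps) * m))]
    bad = [np.zeros(k, dtype=int) for _ in range(int(eps * m))]  # adversary: always symbol 0
    T = empirical_tensor(good + bad, n, k)
    q_hat, err = rank1_l1_fit(T, n, k)
    print("estimate:", q_hat, "l1 tensor error:", err)
```

Because the $\epsilon$ fraction of bad batches can shift the empirical tensor by at most $2\epsilon$ in $\ell_1$, the best rank-1 fit stays close to $p^{\otimes k}$, which is the intuition behind the recovery guarantee above.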