The sample complexity of multi-distribution learning

Abstract
Multi-distribution learning generalizes classic PAC learning to handle data coming from multiple distributions. Given a set of k data distributions and a hypothesis class of VC dimension d, the goal is to learn a hypothesis that minimizes the maximum population loss over the k distributions, up to additive error ε. In this paper, we settle the sample complexity of multi-distribution learning by giving an algorithm of sample complexity Õ((d+k)/ε²) · (k/ε)^{o(1)}. This matches the Ω((d+k)/ε²) lower bound up to sub-polynomial factors and resolves the COLT 2023 open problem of Awasthi, Haghtalab and Zhao [AHZ23].
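For concreteness, the objective described in the abstract can be written out as a minimax problem. The notation below is a standard formalization and is assumed rather than taken from the abstract: D_1, …, D_k are the given distributions, H is the hypothesis class, and ℓ is a bounded loss function.

```latex
% Population loss of hypothesis h on distribution D_i (notation assumed):
%   L_{D_i}(h) := E_{(x,y) ~ D_i} [ loss(h(x), y) ]
L_{D_i}(h) \;:=\; \mathbb{E}_{(x,y)\sim D_i}\big[\ell(h(x),y)\big]

% Multi-distribution learning goal: output \hat{h} whose worst-case loss
% over the k distributions is within additive error \varepsilon of optimal:
\max_{i\in[k]} L_{D_i}(\hat{h}) \;\le\; \min_{h\in\mathcal{H}} \max_{i\in[k]} L_{D_i}(h) \;+\; \varepsilon
```

Sample complexity then counts the total number of labeled examples the learner draws across all k distributions to achieve this guarantee with constant probability.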