Tight Time-Space Lower Bounds for Constant-Pass Learning

In his breakthrough paper, Raz showed that any parity learning algorithm requires either quadratic memory or an exponential number of samples [FOCS'16, JACM'19]. A line of work that followed extended this result to a large class of learning problems. Until recently, all these results considered learning in the streaming model, where each sample is drawn independently, and the learner is allowed a single pass over the stream of samples. Garg, Raz, and Tal [CCC'19] considered a stronger model, allowing multiple passes over the stream. In the $2$-pass model, they showed that learning parities of size $n$ requires either a memory of size $n^{1.5}$ or at least $2^{\sqrt{n}}$ samples. (Their result also generalizes to other learning problems.)

In this work, for any constant $q$, we prove tight memory-sample lower bounds for any parity learning algorithm that makes $q$ passes over the stream of samples. We show that such a learner requires either $\Omega(n^2)$ memory size or at least $2^{\Omega(n)}$ samples. Beyond establishing a tight lower bound, this is the first non-trivial lower bound for $q$-pass learning for any $q \ge 3$. Similar to prior work, our results extend to any learning problem with many nearly-orthogonal concepts.

We complement the lower bound with an upper bound, showing that parity learning with $q$ passes can be done efficiently with $O(n^2/\log q)$ memory.
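For context (this sketch is not from the paper, and the names `sample_stream` and `learn_parity_gaussian` are illustrative), the classical single-pass parity learner stores up to $n$ linearly independent equations, roughly $n^2$ bits of memory, and recovers the secret from $O(n)$ samples by Gaussian elimination over GF(2). This is the quadratic-memory baseline that Raz's lower bound, and the constant-pass lower bound above, show cannot be beaten without using exponentially many samples.

```python
import numpy as np

def sample_stream(secret, num_samples, rng):
    """Yield (a, b) pairs: a uniform in {0,1}^n, b = <a, secret> mod 2."""
    n = len(secret)
    for _ in range(num_samples):
        a = rng.integers(0, 2, size=n, dtype=np.uint8)
        b = int(np.count_nonzero(a & secret)) % 2
        yield a, b

def learn_parity_gaussian(stream, n):
    """One-pass learner keeping up to n independent equations (~n^2 bits)
    and recovering the secret by Gaussian elimination over GF(2)."""
    rows, pivots = [], []  # stored equations (coefficients | rhs) and their pivot columns
    for a, b in stream:
        row = np.concatenate([a, [b]]).astype(np.uint8)
        for r, p in zip(rows, pivots):     # reduce against stored equations
            if row[p]:
                row ^= r
        nz = np.flatnonzero(row[:n])
        if nz.size:                        # linearly independent: keep it
            rows.append(row)
            pivots.append(int(nz[0]))
        if len(rows) == n:                 # full rank: back-substitute in reverse insertion order
            x = np.zeros(n, dtype=np.uint8)
            for r, p in reversed(list(zip(rows, pivots))):
                x[p] = (int(r[n]) + int(np.count_nonzero(r[:n] & x))) % 2
            return x
    return None                            # not enough independent samples seen

rng = np.random.default_rng(0)
secret = rng.integers(0, 2, size=32, dtype=np.uint8)
learned = learn_parity_gaussian(sample_stream(secret, 200, rng), n=32)
assert learned is not None and np.array_equal(learned, secret)
```

The only state carried between samples is the set of reduced equations, so the pass structure matches the streaming model discussed above; the memory-sample tradeoffs in the paper concern learners that try to use substantially less than this quadratic amount of memory.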