
Tight Time-Space Lower Bounds for Constant-Pass Learning

IEEE Annual Symposium on Foundations of Computer Science (FOCS), 2023
Abstract

In his breakthrough paper, Raz showed that any parity learning algorithm requires either quadratic memory or an exponential number of samples [FOCS'16, JACM'19]. A line of work that followed extended this result to a large class of learning problems. Until recently, all these results considered learning in the streaming model, where each sample is drawn independently, and the learner is allowed a single pass over the stream of samples. Garg, Raz, and Tal [CCC'19] considered a stronger model, allowing multiple passes over the stream. In the 2-pass model, they showed that learning parities of size n requires either memory of size n^{1.5} or at least 2^{\sqrt{n}} samples. (Their result also generalizes to other learning problems.) In this work, for any constant q, we prove tight memory-sample lower bounds for any parity learning algorithm that makes q passes over the stream of samples. We show that such a learner requires either memory of size \Omega(n^2) or at least 2^{\Omega(n)} samples. Beyond establishing a tight lower bound, this is the first non-trivial lower bound for q-pass learning for any q \ge 3. Similar to prior work, our results extend to any learning problem with many nearly-orthogonal concepts. We complement the lower bound with an upper bound, showing that parity learning with q passes can be done efficiently with O(n^2/\log q) memory.
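For context on the model the bounds refer to: a parity learner sees a stream of samples (a, <a, x> mod 2) with a drawn uniformly from {0,1}^n and must recover the hidden x. The sketch below is not from the paper; it is a minimal, illustrative Python example (all function names are hypothetical) of the classical one-pass strategy that stores a row-reduced system of parity equations and solves by Gaussian elimination over GF(2), which is the O(n^2)-memory baseline the lower bounds are measured against.

```python
import random

def sample_stream(x, num_samples, seed=0):
    """Yield parity samples (a, <a, x> mod 2) with a uniform in {0,1}^n (illustrative)."""
    rng = random.Random(seed)
    n = len(x)
    for _ in range(num_samples):
        a = [rng.randrange(2) for _ in range(n)]
        b = sum(ai * xi for ai, xi in zip(a, x)) % 2
        yield a, b

def one_pass_gaussian_learner(stream, n):
    """Baseline one-pass learner: maintain a row-echelon system of parity equations.
    Stores at most n rows of n+1 bits, i.e. O(n^2) bits of memory."""
    rows = {}  # pivot column -> (row over GF(2), right-hand side bit)
    for a, b in stream:
        a = a[:]
        for col in range(n):
            if a[col] == 0:
                continue
            if col in rows:
                # Eliminate this column using the stored pivot row.
                r, rb = rows[col]
                a = [ai ^ ri for ai, ri in zip(a, r)]
                b ^= rb
            else:
                rows[col] = (a, b)
                break
        if len(rows) == n:
            break  # full rank reached; remaining samples are not needed
    # Back-substitution (assumes the stream was long enough to reach full rank).
    x = [0] * n
    for col in reversed(range(n)):
        r, rb = rows[col]
        x[col] = rb ^ (sum(r[j] * x[j] for j in range(col + 1, n)) % 2)
    return x

# Example usage: recover a random 16-bit secret from ~50 samples.
secret = [random.Random(1).randrange(2) for _ in range(16)]
recovered = one_pass_gaussian_learner(sample_stream(secret, 50), 16)
assert recovered == secret
```

The point of the contrast: this baseline already succeeds with one pass, quadratic memory, and O(n) samples, and the paper's result says that even with any constant number of passes, no learner can do substantially better than quadratic memory without using 2^{\Omega(n)} samples.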
