
On learning k-parities with and without noise

Abstract

We first consider the problem of learning $k$-parities in the on-line mistake-bound model: given a hidden vector $x \in \{0,1\}^n$ with $|x| = k$ and a sequence of "questions" $a_1, a_2, \ldots \in \{0,1\}^n$, where the algorithm must reply to each question with $\langle a_i, x \rangle \pmod 2$, what is the best tradeoff between the number of mistakes made by the algorithm and its time complexity? We improve the previous best result of Buhrman et al. by an $\exp(k)$ factor in the time complexity. Second, we consider the problem of learning $k$-parities in the presence of classification noise of rate $\eta \in (0, 1/2)$. A polynomial-time algorithm for this problem (when $\eta > 0$ and $k = \omega(1)$) is a longstanding challenge in learning theory. Grigorescu et al. showed an algorithm running in time ${n \choose k/2}^{1 + 4\eta^2 + o(1)}$. Note that this algorithm inherently requires time ${n \choose k/2}$ even when the noise rate $\eta$ is polynomially small. We observe that for sufficiently small noise rates, it is possible to break the ${n \choose k/2}$ barrier. In particular, if for some function $f(n) = \omega(1)$ and $\alpha \in [1/2, 1)$, $k = n/f(n)$ and $\eta = o(f(n)^{-\alpha}/\log n)$, then there is an algorithm for the problem with running time $\mathrm{poly}(n) \cdot {n \choose k}^{1-\alpha} \cdot e^{-k/4.01}$.
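To make the two settings concrete, here is a minimal Python sketch of the on-line mistake-bound game for $k$-parities, with an optional classification-noise rate. It is not the paper's algorithm (nor Buhrman et al.'s): the learner shown is a textbook halving-style strategy, included only to illustrate one extreme of the mistakes-vs-time tradeoff. The names `Learner` and `run_game` and the `eta` parameter are illustrative assumptions.

```python
# Sketch of the on-line mistake-bound game for k-parities, under the
# assumptions stated above; not the algorithm from the paper.
from itertools import combinations
import random

def parity(a, x):
    """<a, x> mod 2 for 0/1 vectors represented as tuples."""
    return sum(ai & xi for ai, xi in zip(a, x)) % 2

class Learner:
    """Halving-style learner: keep every k-sparse candidate consistent with
    the labels seen so far and predict by majority vote. This makes at most
    log2(C(n,k)) mistakes but pays C(n,k) time/space -- the low-mistake,
    high-time end of the tradeoff the abstract asks about."""
    def __init__(self, n, k):
        self.candidates = [tuple(1 if i in s else 0 for i in range(n))
                           for s in combinations(range(n), k)]

    def predict(self, a):
        ones = sum(parity(a, c) for c in self.candidates)
        return 1 if 2 * ones > len(self.candidates) else 0

    def update(self, a, label):
        self.candidates = [c for c in self.candidates if parity(a, c) == label]

def run_game(n=10, k=3, rounds=60, eta=0.0, seed=0):
    rng = random.Random(seed)
    support = set(rng.sample(range(n), k))        # hidden k-sparse x
    x = tuple(1 if i in support else 0 for i in range(n))
    learner, mistakes = Learner(n, k), 0
    for _ in range(rounds):
        a = tuple(rng.randint(0, 1) for _ in range(n))
        truth = parity(a, x)
        mistakes += learner.predict(a) != truth
        # With eta > 0 the revealed label is flipped with probability eta,
        # so the halving learner may discard the true x -- one way to see
        # why the noisy version of the problem is qualitatively harder.
        shown = truth ^ (rng.random() < eta)
        learner.update(a, shown)
    return mistakes

if __name__ == "__main__":
    print("mistakes (noiseless):", run_game(eta=0.0))
```

With `eta = 0` the candidate set only shrinks and the mistake count stays logarithmic in ${n \choose k}$; with `eta > 0` a single flipped label can eliminate the hidden vector, matching the abstract's point that classification noise changes the problem substantially.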
