
Sharpened Error Bounds for Random Sampling Based $\ell_2$ Regression

Abstract

Given a data matrix $X \in R^{n\times d}$ and a response vector $y \in R^{n}$ with $n > d$, it costs $O(n d^2)$ time and $O(n d)$ space to solve the least squares regression (LSR) problem exactly. When $n$ and $d$ are both large, exactly solving the LSR problem is very expensive. When $n \gg d$, one feasible approach to speeding up LSR is to randomly embed $y$ and all columns of $X$ into a smaller subspace $R^c$; the induced LSR problem has the same number of columns but far fewer rows, and it can be solved in $O(c d^2)$ time and $O(c d)$ space. In this paper we discuss two random sampling based methods for solving LSR more efficiently. Previous work showed that leverage score sampling based LSR achieves $1+\epsilon$ accuracy when $c \geq O(d \epsilon^{-2} \log d)$. In this paper we sharpen this error bound, showing that $c = O(d \log d + d \epsilon^{-1})$ suffices for $1+\epsilon$ accuracy. We also show that when $c \geq O(\mu d \epsilon^{-2} \log d)$, uniform sampling based LSR attains a $2+\epsilon$ bound with positive probability.
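The sampling scheme the abstract refers to can be sketched as follows. This is a minimal illustration of leverage-score sampling for LSR, not the paper's exact algorithm: leverage scores are computed here via a thin QR factorization (an assumption for simplicity; in practice they are usually approximated, since exact computation already costs $O(nd^2)$), then $c$ rows of $(X, y)$ are sampled with probability proportional to their leverage scores, rescaled, and the small regression is solved.

```python
import numpy as np

def leverage_score_sampling_lsr(X, y, c, seed=None):
    """Approximate least squares via leverage-score row sampling.

    Illustrative sketch: sample c rows of (X, y) with probabilities
    proportional to the leverage scores of X, rescale the sampled
    rows, and solve the induced c x d regression problem.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Leverage scores: squared row norms of an orthonormal basis
    # for the column space of X (they sum to d).
    Q, _ = np.linalg.qr(X)            # thin QR, Q is n x d
    lev = np.sum(Q**2, axis=1)
    p = lev / lev.sum()               # sampling probabilities
    idx = rng.choice(n, size=c, replace=True, p=p)
    # Rescale each sampled row by 1/sqrt(c * p_i) so that the
    # sketched normal equations are unbiased.
    scale = 1.0 / np.sqrt(c * p[idx])
    Xs = X[idx] * scale[:, None]
    ys = y[idx] * scale
    w, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return w

# Usage: on a tall synthetic problem, the sampled solution's residual
# is close to the optimal residual, at O(c d^2) solve cost.
rng = np.random.default_rng(0)
n, d, c = 20000, 10, 2000
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
w_opt, *_ = np.linalg.lstsq(X, y, rcond=None)
w_hat = leverage_score_sampling_lsr(X, y, c, seed=1)
```

Replacing the leverage-score probabilities `p` with the uniform distribution gives the second method discussed in the abstract; its quality then depends on the row coherence $\mu$ of $X$.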
