Sharpened Error Bounds for Random Sampling Based Regression

Given a data matrix X in R^{n x d} and a response vector y in R^n, suppose n > d; it costs O(n d^2) time and O(n d) space to solve the least squares regression (LSR) problem. When n and d are both large, exactly solving the LSR problem is very expensive. When n >> d, one feasible approach to speeding up LSR is to randomly embed y and all columns of X into a smaller subspace R^c; the induced LSR problem has the same number of columns but far fewer rows, and it can be solved in O(c d^2) time and O(c d) space. We discuss in this paper two random sampling based methods for solving LSR more efficiently. Previous work showed that the leverage score based sampling LSR achieves 1+eps accuracy when c = O(eps^{-2} d log d). In this paper we sharpen this error bound, showing that c = O(d log d + eps^{-1} d) is enough for achieving 1+eps accuracy. We also show that when c = O(eps^{-1} d log d), the uniform sampling based LSR attains a 2+eps bound with positive probability.
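The leverage score sampling scheme the abstract refers to can be illustrated with a short NumPy sketch. This is our own minimal rendering, not the paper's code: the function name and interface are assumed, leverage scores are computed exactly via a thin QR decomposition (the paper's setting also covers approximate scores), and sampled rows are rescaled so that the sketched problem is an unbiased compression of the original one.

```python
import numpy as np

def leverage_score_lsr(X, y, c, seed=None):
    """Approximately solve min_w ||X w - y||_2 by sampling c rows of X
    with probabilities proportional to their leverage scores.
    Illustrative sketch only; interface is hypothetical."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # The leverage score of row i is the squared Euclidean norm of the
    # i-th row of an orthonormal basis Q for the column space of X.
    Q, _ = np.linalg.qr(X)                  # thin QR: Q is n x d
    lev = np.einsum("ij,ij->i", Q, Q)       # row norms squared; they sum to d
    p = lev / lev.sum()                     # sampling probabilities
    idx = rng.choice(n, size=c, replace=True, p=p)
    # Rescale each sampled row by 1/sqrt(c * p_i) so that the sketched
    # normal equations are unbiased estimates of the originals.
    scale = 1.0 / np.sqrt(c * p[idx])
    X_s = X[idx] * scale[:, None]
    y_s = y[idx] * scale
    # Solve the induced c x d LSR problem: O(c d^2) time, O(c d) space.
    w, *_ = np.linalg.lstsq(X_s, y_s, rcond=None)
    return w
```

With c on the order of d log d + d/eps rows, the residual of the returned solution is, per the sharpened bound, within a 1+eps factor of the optimal residual with high probability; uniform sampling corresponds to replacing p with the constant vector 1/n.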