
Online Learning Via Regularized Frequent Directions

Abstract

Online Newton step algorithms usually achieve good performance with fewer training samples than first-order methods, but they incur higher space and time costs per iteration. In this paper, we develop a new sketching strategy called regularized frequent directions (RFD) to improve the performance of online Newton algorithms. Unlike standard frequent directions (FD), which maintains only a sketching matrix, RFD additionally maintains a regularization term. The regularization provides an adaptive step size for the update, which makes the algorithm more stable. RFD also reduces the approximation error of FD at almost the same cost and makes online learning more robust to hyperparameters. Empirical studies demonstrate that our approach outperforms state-of-the-art second-order online learning algorithms.
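Since the abstract only outlines the method, the following is a minimal NumPy sketch (not the authors' code) of what a frequent-directions-style update that additionally maintains a scalar regularization term could look like, so that B^T B + alpha * I approximates A^T A. The specific shrinkage and accumulation rule, as well as the function name rfd_update, are assumptions for illustration.

import numpy as np

def rfd_update(B, alpha, a):
    """Insert one new row `a` (shape (d,)) into the sketch.

    B     : (ell, d) current sketch matrix (rows may be zero).
    alpha : current scalar regularization term.
    Returns the updated (B, alpha).
    """
    ell, d = B.shape
    # Stack the new row under the sketch and take an SVD.
    M = np.vstack([B, a[None, :]])
    _, s, Vt = np.linalg.svd(M, full_matrices=False)
    # Shrink the squared singular values by the (ell+1)-th one (delta),
    # as in standard frequent directions.
    delta = s[ell] ** 2 if len(s) > ell else 0.0
    s_shrunk = np.sqrt(np.maximum(s[:ell] ** 2 - delta, 0.0))
    B_new = s_shrunk[:, None] * Vt[:ell]
    # Assumed RFD step: accumulate part of the shrinkage into the
    # regularizer so that B^T B + alpha * I tracks A^T A more closely.
    alpha_new = alpha + delta / 2.0
    return B_new, alpha_new

# Usage: stream the rows of a data matrix A through the sketch.
rng = np.random.default_rng(0)
d, ell, n = 20, 5, 200
A = rng.standard_normal((n, d))
B, alpha = np.zeros((ell, d)), 0.0
for row in A:
    B, alpha = rfd_update(B, alpha, row)
approx = B.T @ B + alpha * np.eye(d)
print(np.linalg.norm(A.T @ A - approx, 2))  # spectral-norm approximation error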
