Upper Bounds for Learning in Reproducing Kernel Hilbert Spaces for Non IID Samples
Main: 17 pages · Bibliography: 2 pages · Appendix: 2 pages
Abstract
In this paper, we study a Markov chain-based stochastic gradient algorithm in general Hilbert spaces, aiming to approximate the optimal solution of a quadratic loss function. We establish probabilistic upper bounds on its convergence. We further extend these results to an online regularized learning algorithm in reproducing kernel Hilbert spaces, where the samples are drawn along a Markov chain trajectory and are therefore non-i.i.d.
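The setting described above can be illustrated with a minimal sketch: stochastic gradient descent on a quadratic (least-squares) loss, where the feature vectors are generated along a Markov chain trajectory rather than drawn independently. All names, dimensions, and the AR(1) chain below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 5
w_star = rng.normal(size=d)  # hypothetical target parameter


def markov_trajectory(n, rho=0.8):
    """Yield feature vectors along an AR(1) Markov chain.

    Consecutive samples are correlated (non-i.i.d.); the scaling
    keeps the stationary covariance equal to the identity.
    """
    x = rng.normal(size=d)
    for _ in range(n):
        x = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=d)
        yield x


w = np.zeros(d)
for t, x in enumerate(markov_trajectory(20000), start=1):
    y = x @ w_star + 0.1 * rng.normal()   # noisy linear response
    grad = (w @ x - y) * x                # gradient of (1/2)(w.x - y)^2
    w -= (0.1 / np.sqrt(t)) * grad        # decaying step size

print(np.linalg.norm(w - w_star))
```

Despite the temporal correlation between samples, the iterate still approaches `w_star`; quantifying how the chain's mixing rate enters such convergence bounds is exactly the kind of question the paper addresses in the RKHS setting.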
