Convergence analysis of online algorithms for vector-valued kernel regression

We consider the problem of approximating the regression function from noisy $\mu$-distributed vector-valued data by an online learning algorithm using a reproducing kernel Hilbert space $H$ (RKHS) as prior. In an online algorithm, i.i.d. samples become available one by one via a random process and are successively processed to build approximations to the regression function. Assuming that the regression function essentially belongs to $H$ (soft learning scenario), we provide estimates for the expected squared error in the RKHS norm of the approximations obtained by a standard regularized online approximation algorithm. In particular, we show an order-optimal estimate $\mathbb{E}(\|\epsilon^{(m)}\|_H^2)\le C\,(m+1)^{-s/(2+s)}$, $m=1,2,\ldots$, where $\epsilon^{(m)}$ denotes the error term after $m$ processed data, the parameter $s$ expresses an additional smoothness assumption on the regression function, and the constant $C$ depends on the variance of the input noise, the smoothness of the regression function, and other parameters of the algorithm. The proof, which is inspired by results on Schwarz iterative methods in the noiseless case, uses only elementary Hilbert space techniques and minimal assumptions on the noise, the feature map that defines $H$, and the associated covariance operator.
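The abstract does not specify the update rule or its step-size and regularization schedules. Purely as an illustration of the general setting, the following minimal sketch implements one standard regularized online (stochastic-gradient-type) kernel regression scheme for vector-valued data; the Gaussian kernel, the schedule eta_m = eta0 / sqrt(m+1), and the parameter lam are assumptions for the example and are not taken from the paper.

```python
import numpy as np

def gaussian_kernel(x, xp, sigma=1.0):
    """Scalar Gaussian kernel; a vector-valued setting can use K(x, x') * I_d."""
    return np.exp(-np.linalg.norm(x - xp) ** 2 / (2.0 * sigma ** 2))

def online_kernel_regression(samples, eta0=0.5, lam=0.1, sigma=1.0):
    """Sketch of a regularized online (stochastic gradient) update in an RKHS.

    samples: iterable of (x_m, y_m) pairs with x_m in R^p and y_m in R^d.
    Returns the expansion centers and vector coefficients of the final iterate,
    i.e. f_m = sum_i coeffs[i] * K(centers[i], .).
    The schedule eta_m = eta0 / sqrt(m+1) is a hypothetical choice.
    """
    centers, coeffs = [], []
    for m, (x, y) in enumerate(samples):
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        eta = eta0 / np.sqrt(m + 1.0)
        # Evaluate the current iterate at the new sample point x.
        fx = sum(c * gaussian_kernel(xi, x, sigma) for xi, c in zip(centers, coeffs)) \
             if centers else np.zeros_like(y)
        # Gradient step on (1/2)|f(x)-y|^2 + (lam/2)||f||_H^2:
        # shrink the old coefficients (regularization) and append a new kernel section.
        coeffs = [(1.0 - eta * lam) * c for c in coeffs]
        centers.append(x)
        coeffs.append(-eta * (fx - y))
    return centers, coeffs

def predict(centers, coeffs, x, sigma=1.0):
    """Evaluate the kernel expansion at a new point x."""
    return sum(c * gaussian_kernel(xi, x, sigma) for xi, c in zip(centers, coeffs))

if __name__ == "__main__":
    # Toy usage: 200 i.i.d. samples with 3-dimensional inputs and 2-dimensional outputs.
    rng = np.random.default_rng(0)
    data = [(rng.normal(size=3), rng.normal(size=2)) for _ in range(200)]
    centers, coeffs = online_kernel_regression(data)
    print(predict(centers, coeffs, np.zeros(3)))
```

Each new sample is processed once and then discarded except for its kernel section, which matches the one-by-one processing described above; the convergence rate stated in the abstract concerns the expected RKHS-norm error of such iterates under the paper's assumptions, not this particular parameter choice.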
@article{griebel2025_2309.07779,
  title   = {Convergence analysis of online algorithms for vector-valued kernel regression},
  author  = {Michael Griebel and Peter Oswald},
  journal = {arXiv preprint arXiv:2309.07779},
  year    = {2025}
}