
Convergence analysis of online algorithms for vector-valued kernel regression

Main: 2 pages
Appendix: 18 pages
Abstract

We consider the problem of approximating the regression function $f_\mu:\,\Omega \to Y$ from noisy $\mu$-distributed vector-valued data $(\omega_m, y_m) \in \Omega \times Y$ by an online learning algorithm using a reproducing kernel Hilbert space $H$ (RKHS) as prior. In an online algorithm, i.i.d. samples become available one by one via a random process and are successively processed to build approximations to the regression function. Assuming that the regression function essentially belongs to $H$ (soft learning scenario), we provide estimates for the expected squared error in the RKHS norm of the approximations $f^{(m)} \in H$ obtained by a standard regularized online approximation algorithm. In particular, we show an order-optimal estimate $\mathbb{E}(\|\epsilon^{(m)}\|_H^2) \le C\,(m+1)^{-s/(2+s)}$, $m=1,2,\ldots$, where $\epsilon^{(m)}$ denotes the error term after $m$ processed data points, the parameter $0 < s \le 1$ expresses an additional smoothness assumption on the regression function, and the constant $C$ depends on the variance of the input noise, the smoothness of the regression function, and other parameters of the algorithm. The proof, which is inspired by results on Schwarz iterative methods in the noiseless case, uses only elementary Hilbert space techniques and minimal assumptions on the noise, the feature map that defines $H$, and the associated covariance operator.

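For intuition, the following is a minimal Python sketch of a regularized online (stochastic-gradient) update in an RKHS of the kind the abstract refers to, processing the samples one by one. The Gaussian kernel, the polynomially decaying step-size and regularization schedules, and the helper online_kernel_regression are illustrative assumptions, not the paper's precise algorithm or parameter choices.

import numpy as np

def kernel(x, xp, gamma=1.0):
    # Scalar Gaussian kernel; stands in for the (possibly operator-valued)
    # kernel of H via K(x, x') = k(x, x') * Id_Y.
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(xp)) ** 2))

def online_kernel_regression(samples, eta0=0.5, lam0=0.1, s=1.0, gamma=1.0):
    # Process (omega_m, y_m) one at a time and keep f^(m) as a kernel expansion
    #   f^(m)(x) = sum_j c_j * k(omega_j, x),  with coefficients c_j in Y.
    # The schedules eta_m and lam_m below are hypothetical polynomially
    # decaying choices, assumed here for illustration only.
    centers, coeffs = [], []

    def f_eval(x):
        return sum(c * kernel(z, x, gamma) for z, c in zip(centers, coeffs))

    for m, (omega, y) in enumerate(samples, start=1):
        eta = eta0 * m ** (-1.0 / (2.0 + s))
        lam = lam0 * m ** (-s / (2.0 + s))
        residual = f_eval(omega) - np.asarray(y, dtype=float)  # error at the new sample
        # Regularized stochastic-gradient step in the RKHS:
        #   f^(m) = (1 - eta*lam) f^(m-1) - eta * residual * k(omega_m, .)
        coeffs = [(1.0 - eta * lam) * c for c in coeffs]
        centers.append(omega)
        coeffs.append(-eta * residual)

    return f_eval

# Toy usage: noisy scalar-valued data y = sin(omega) + noise.
rng = np.random.default_rng(0)
data = [(w, np.sin(w) + 0.1 * rng.standard_normal()) for w in rng.uniform(-3, 3, size=200)]
f_hat = online_kernel_regression(data)
print(f_hat(0.5))  # approximate value of the regression function at 0.5

Each step touches only the newest sample, so after $m$ processed data points the approximation $f^{(m)}$ is a kernel expansion with $m$ terms; this is the online setting in which the error bound above is stated.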
@article{griebel2025_2309.07779,
  title={Convergence analysis of online algorithms for vector-valued kernel regression},
  author={Michael Griebel and Peter Oswald},
  journal={arXiv preprint arXiv:2309.07779},
  year={2025}
}