Upper Bounds for Learning in Reproducing Kernel Hilbert Spaces for Orbits of an Iterated Function System

Main: 17 pages
Bibliography: 2 pages
Appendix: 2 pages
Abstract

A central problem in learning theory is to compute a function $f$ that closely approximates the relationship between an input $x$ and the corresponding output $y$, so that $y\approx f(x)$. The approximation is based on sample points $(x_t,y_t)_{t=1}^{m}$, and $f$ can be approximated within reproducing kernel Hilbert spaces using various learning algorithms. In learning theory it is customary to assume that the sample points are drawn independently and identically distributed (i.i.d.) from an unknown underlying distribution. We relax this i.i.d. assumption by considering the input sequence $(x_t)_{t\in\mathbb{N}}$ as a trajectory generated by an iterated function system, which forms a particular Markov chain, with $(y_t)_{t\in\mathbb{N}}$ the corresponding observation sequence when the model is in state $x_t$. For such a process, we approximate $f$ using the Markov chain stochastic gradient algorithm and estimate the error by deriving upper bounds within reproducing kernel Hilbert spaces.
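To make the setting concrete, the following is a minimal sketch of the two ingredients the abstract describes: an input sequence generated by an iterated function system (two randomly chosen affine contractions on $[0,1]$, so consecutive inputs form a Markov chain rather than an i.i.d. sample), and an online kernel stochastic gradient iteration in an RKHS with a Gaussian kernel. The specific contraction maps, the target function `f_star`, the kernel bandwidth, and the step-size schedule are all illustrative assumptions, not the paper's actual choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative iterated function system on [0, 1]: at each step one of two
# affine contractions is applied at random, so (x_t) is a Markov-chain
# trajectory rather than an i.i.d. sample. (Hypothetical maps for the demo.)
maps = [lambda x: 0.5 * x, lambda x: 0.5 * x + 0.5]

def ifs_trajectory(m, x0=0.3):
    xs = np.empty(m)
    x = x0
    for t in range(m):
        x = maps[rng.integers(2)](x)
        xs[t] = x
    return xs

# Assumed target relationship y = f*(x) + noise, for illustration only.
def f_star(x):
    return np.sin(2 * np.pi * x)

def gauss_kernel(u, v, sigma=0.1):
    return np.exp(-((u - v) ** 2) / (2 * sigma**2))

# Online kernel SGD: the iterate after t steps is
# f_t(x) = sum_{s<t} a_s K(x_s, x), and each new sample (x_t, y_t)
# contributes one coefficient via a least-squares gradient step
# with a decaying step size eta_t = eta0 / sqrt(t + 1).
def kernel_sgd(xs, ys, eta0=0.5):
    coeffs = np.zeros(len(xs))
    for t, (x, y) in enumerate(zip(xs, ys)):
        pred = coeffs[:t] @ gauss_kernel(xs[:t], x)  # current iterate at x_t
        coeffs[t] = -(eta0 / np.sqrt(t + 1)) * (pred - y)
    return coeffs

m = 500
xs = ifs_trajectory(m)
ys = f_star(xs) + 0.05 * rng.standard_normal(m)
coeffs = kernel_sgd(xs, ys)

# Evaluate the learned function on a grid against the noiseless target.
grid = np.linspace(0.01, 0.99, 50)
preds = np.array([coeffs @ gauss_kernel(xs, g) for g in grid])
mse = float(np.mean((preds - f_star(grid)) ** 2))
print(round(mse, 4))
```

Note that because the maps above are contractions with a common attractor, the trajectory fills $[0,1]$ densely, which is what lets the non-i.i.d. sample still cover the input space; the paper's upper bounds quantify the error of such an iteration under assumptions of this kind.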
