Upper Bounds for Learning in Reproducing Kernel Hilbert Spaces for
Orbits of an Iterated Function System
One of the key problems in learning theory is to compute a function $f$ that closely approximates the relationship between an input $x$ and the corresponding output $y$, such that $f(x) \approx y$. This approximation is based on sample points $\{(x_i, y_i)\}_{i=1}^{m}$, from which the function can be approximated within a reproducing kernel Hilbert space using various learning algorithms. In learning theory it is customary to assume that the sample points are drawn independently and identically distributed (i.i.d.) from an unknown underlying distribution. We relax this i.i.d. assumption by considering the input sequence $\{x_i\}$ as a trajectory generated by an iterated function system, which forms a particular Markov chain, with $y_i$ corresponding to the observation made when the chain is in state $x_i$. For such a process, we approximate the function using the Markov chain stochastic gradient algorithm and estimate the error by deriving upper bounds within reproducing kernel Hilbert spaces.
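To make the setting concrete, the following is a minimal illustrative sketch (not the paper's algorithm or constants): an input trajectory is generated by a simple two-map iterated function system on $[0,1]$, and a regression function is learned by stochastic gradient descent in an RKHS induced by a Gaussian kernel. The choice of maps, kernel bandwidth, step sizes, and target function are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative IFS on [0, 1]: at each step one of two contractions is
# applied at random, producing a Markov-chain input trajectory x_0, x_1, ...
maps = [lambda x: 0.5 * x, lambda x: 0.5 * x + 0.5]

def ifs_trajectory(x0, n):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(maps[rng.integers(2)](xs[-1]))
    return np.array(xs)

def gaussian_kernel(a, b, sigma=0.2):
    # Reproducing kernel of the illustrative RKHS.
    return np.exp(-(a - b) ** 2 / (2 * sigma ** 2))

def target(x):
    # Unknown regression function (an assumption for this sketch).
    return np.sin(2 * np.pi * x)

n = 2000
xs = ifs_trajectory(rng.random(), n)          # non-i.i.d. input sequence
ys = target(xs) + 0.05 * rng.standard_normal(n)  # noisy observations y_i

# Stochastic gradient descent in the RKHS: the iterate
# f_t = sum_{i<t} c_i K(x_i, .) is updated by
# f_{t+1} = f_t - eta_t (f_t(x_t) - y_t) K(x_t, .).
coef = np.zeros(n)
for t in range(n):
    eta = 1.0 / np.sqrt(t + 1)                # decaying step size
    f_xt = np.dot(coef[:t], gaussian_kernel(xs[:t], xs[t]))
    coef[t] = -eta * (f_xt - ys[t])

# Evaluate the learned function on a grid and report mean squared error.
grid = np.linspace(0, 1, 200)
preds = np.array([np.dot(coef, gaussian_kernel(xs, g)) for g in grid])
mse = np.mean((preds - target(grid)) ** 2)
print(f"MSE on grid: {mse:.4f}")
```

The two maps above form the binary-shift IFS, whose trajectory visits $[0,1]$ densely, so the kernel SGD iterate can fit the target despite the dependence between consecutive samples; the paper's upper bounds quantify the error of this kind of scheme.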