Worst-case Error Bounds for Online Learning of Smooth Functions
Online learning is a model of machine learning where the learner is trained on sequential feedback. We investigate worst-case error for the online learning of real functions that have certain smoothness constraints. Suppose that is the class of all absolutely continuous functions such that , and is the best possible upper bound on the sum of the powers of absolute prediction errors for any number of trials guaranteed by any learner. We show that for any , . Combined with the previous results of Kimber and Long (1995) and Geneson and Zhou (2023), we achieve a complete characterization of the values of that result in being finite, a problem open for nearly 30 years.We study the learning scenarios of smooth functions that also belong to certain special families of functions, such as polynomials. We prove a conjecture by Geneson and Zhou (2023) that it is not any easier to learn a polynomial in than it is to learn any general function in . We also define a noisy model for the online learning of smooth functions, where the learner may receive incorrect feedback up to times, denoting the worst-case error bound as . We prove that is finite if and only if is. Moreover, we prove for all and that .
View on arXiv@article{xie2025_2502.16388, title={ Worst-case Error Bounds for Online Learning of Smooth Functions }, author={ Weian Xie }, journal={arXiv preprint arXiv:2502.16388}, year={ 2025 } }