Accelerating Quasi-Monte Carlo in Reproducing Kernel Hilbert Spaces

Abstract

Quasi-Monte Carlo (QMC) methods are being adopted in machine learning due to the increasingly challenging nature of numerical integrals that are routinely encountered in contemporary applications. For integrands that are $\alpha$-times differentiable, an $\alpha$-optimal QMC algorithm converges at a best-possible rate $O(N^{-\alpha - 1/2 + \epsilon})$, where $\epsilon > 0$ can be arbitrarily small. However, in applications the value of $\alpha$ can be unknown and/or a rate-optimal QMC algorithm can be unavailable. Standard practice is to employ $\alpha_L$-optimal QMC, where the lower bound $\alpha_L \leq \alpha$ is known, but this does not exploit the full power of QMC when $\alpha_L < \alpha$. We present a novel solution that uses kernel methods to accelerate QMC by a factor $O(N^{-(\alpha - \alpha_L)/d})$, where $d$ is the dimension of the integral. For $d = 1$ we can therefore recover optimal convergence rates. A topical application to robotic arm data demonstrates a substantial speed-up in the computation required to evaluate predictions for mechanical torques.
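As illustrative background only (not the paper's kernel-based acceleration), the following sketch contrasts plain Monte Carlo with QMC using a Sobol' low-discrepancy sequence on a smooth test integrand over $[0,1]^d$; the integrand and all parameter choices here are our own assumptions for demonstration.

```python
import numpy as np
from scipy.stats import qmc

# Smooth test integrand on [0,1]^d with a known integral:
# each coordinate contributes ∫_0^1 sin(pi*x) dx = 2/pi.
d, N = 2, 1024  # N a power of 2, as recommended for Sobol' points
f = lambda x: np.prod(np.sin(np.pi * x), axis=1)
true_value = (2.0 / np.pi) ** d

# Plain Monte Carlo: i.i.d. uniform points, error ~ O(N^{-1/2}).
rng = np.random.default_rng(0)
mc_est = f(rng.random((N, d))).mean()

# QMC: scrambled Sobol' sequence, faster convergence for smooth f.
sobol = qmc.Sobol(d=d, seed=0)
qmc_est = f(sobol.random(N)).mean()

print("MC  error:", abs(mc_est - true_value))
print("QMC error:", abs(qmc_est - true_value))
```

For smooth integrands such as this one, the QMC estimate is typically several orders of magnitude more accurate than plain Monte Carlo at the same number of points, which is the gap the paper's acceleration seeks to exploit fully when the smoothness $\alpha$ is underestimated.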
