
Variance Reduction for QMC in Reproducing Kernel Hilbert Spaces

Abstract

Quasi-Monte Carlo (QMC) methods are gaining in popularity in the machine learning community due to the increasingly challenging nature of numerical integrals that are routinely encountered in contemporary applications. For integrands that are $\alpha$-times differentiable, an $\alpha$-optimal QMC algorithm converges at a rate $O(N^{-\alpha-\frac{1}{2}+\epsilon})$ for any $\epsilon>0$, and it is known that this rate is best possible. However, in many applications it can happen that either the value of $\alpha$ is unknown or a rate-optimal QMC algorithm is unavailable. How can we perform efficient numerical integration in such circumstances? A direct approach is to employ $\alpha_L$-optimal QMC where the lower bound $\alpha_L \leq \alpha$ is known, but when $\alpha_L < \alpha$ this does not exploit the full power of QMC. In this paper we show that if an upper bound $\alpha \leq \alpha_U$ is also available, then the direct approach can be accelerated by a factor $O(N^{-(\alpha - \alpha_L)/d})$, where $d$ is the dimension of the integral. Such variance reduction methods are likely to become practically important with the increasing adoption of QMC algorithms.
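To make the setting concrete, the following is a minimal sketch of generic QMC integration on $[0,1]^d$ using a scrambled Sobol point set, compared against plain Monte Carlo. It does not implement the variance reduction method of the paper; the integrand `f` is an arbitrary smooth stand-in chosen so that the true integral equals 1.

```python
import numpy as np
from scipy.stats import qmc

# Stand-in smooth integrand on [0,1]^d (not from the paper); its exact
# integral over the unit cube is 1, since each factor integrates to 1.
def f(x):
    return np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

d = 2
N = 1024  # number of integration nodes

# QMC estimate: scrambled Sobol sequence, N = 2^10 points.
sampler = qmc.Sobol(d=d, scramble=True, seed=0)
qmc_points = sampler.random_base2(m=10)
qmc_estimate = f(qmc_points).mean()

# Plain Monte Carlo estimate with the same number of points, for contrast.
rng = np.random.default_rng(0)
mc_estimate = f(rng.random((N, d))).mean()

print(f"QMC estimate: {qmc_estimate:.6f}")
print(f"MC  estimate: {mc_estimate:.6f}")
```

For a smooth integrand like this one, the QMC estimate typically lands much closer to the true value than plain Monte Carlo at the same budget, which is the gap in convergence rates that the abstract quantifies.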
