Kernelized Variance Reduction for Quasi-Monte Carlo
Quasi-Monte Carlo (QMC) methods are gaining popularity in the machine learning community due to the increasingly challenging numerical integrals that are routinely encountered in contemporary applications. For integrands that are $s$-times differentiable, an optimal QMC algorithm converges at a rate $O(N^{-s+\epsilon})$ for any $\epsilon > 0$, and it is known that this rate is best-possible. However, in many applications either the value of $s$ is unknown or a rate-optimal QMC algorithm is unavailable. This raises the question of how to design a low-variance estimator for the integral in such circumstances. A direct approach employs a conservative lower bound $\underline{s} \leq s$, but when $\underline{s} < s$ we are sacrificing the full power of the QMC methodology. In this paper we show that if an upper bound $\overline{s} \geq s$ is also available, then the direct approach can be accelerated by a factor $N^{(\overline{s}-\underline{s})/d}$, where $d$ is the dimension of the domain of integration. Such techniques are likely to become important as QMC algorithms are more widely adopted within the machine learning community.
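The abstract does not spell out the estimator itself, so the following is only a minimal sketch of the general idea it alludes to: combine a QMC point set with a kernel-based control variate whose finite smoothness plays the role of the conservative lower bound $\underline{s}$. Everything here is an illustrative assumption rather than the paper's method: the names `integrand` and `matern32`, the length scale, and the sample sizes are all hypothetical choices.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import qmc

def integrand(x):
    # Illustrative integrand on [0, 1]^d whose smoothness s we pretend is unknown.
    return np.exp(-np.sum(x ** 2, axis=1))

def matern32(X, Y, length_scale=0.5):
    # Matern-3/2 kernel; its finite smoothness stands in for a conservative
    # lower bound on the integrand's (unknown) differentiability.
    r = cdist(X, Y) / length_scale
    return (1.0 + np.sqrt(3.0) * r) * np.exp(-np.sqrt(3.0) * r)

d = 3
sampler = qmc.Sobol(d=d, scramble=True, seed=0)
X = sampler.random_base2(m=9)           # n = 2**9 = 512 scrambled-Sobol points
f = integrand(X)

direct = f.mean()                        # the "direct" QMC estimate

# Kernel control variate: fit a ridge-regularised kernel interpolant g to f
# at the QMC points, approximate its mean E[g] on a dense auxiliary sample,
# and correct the direct estimate by E[g] - mean(g at the QMC points).
lam = 1e-8                               # small ridge term for numerical stability
K = matern32(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), f)

Z = np.random.default_rng(1).random((8192, d))   # dense points to estimate E[g]
adjusted = direct + (matern32(Z, X) @ alpha).mean() - (K @ alpha).mean()

print(f"direct QMC estimate:      {direct:.6f}")
print(f"kernel-adjusted estimate: {adjusted:.6f}")
```

In this sketch the variance reduction comes from the interpolant absorbing the part of the integrand that the kernel can represent, so the correction term handles the smooth bulk of the integral; the abstract's result concerns how much acceleration a construction of this kind can achieve when an upper bound $\overline{s}$ on the smoothness is available in addition to $\underline{s}$.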