Quasi-Monte Carlo (QMC) methods are gaining in popularity in the machine learning community due to the increasingly challenging numerical integrals that are routinely encountered in contemporary applications. For integrands that are $\alpha$-times differentiable, an $\alpha$-optimal QMC algorithm converges at an $\alpha$-dependent rate, up to an arbitrarily small loss $\epsilon > 0$ in the exponent, and it is known that this rate is best possible. However, in many applications either the value of $\alpha$ is unknown or a rate-optimal QMC algorithm is unavailable. How can we perform efficient numerical integration in such circumstances? A direct approach is to employ $\alpha_L$-optimal QMC, where $\alpha_L \leq \alpha$ is a known lower bound on the smoothness, but when $\alpha_L < \alpha$ this does not exploit the full power of QMC. In this paper we show that if an upper bound $\alpha_U \geq \alpha$ is also available, then the direct approach can be accelerated by a factor that depends on $\alpha_L$, $\alpha_U$ and the dimension of the integral. Such variance reduction methods are likely to become practically important with the increasing adoption of QMC algorithms.
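To ground the baseline setting the abstract refers to (ordinary randomized QMC, not the acceleration scheme proposed in the paper), here is a minimal sketch comparing plain Monte Carlo with scrambled Sobol' points from SciPy's qmc module on a smooth test integrand; the integrand, dimension, and sample sizes are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: plain Monte Carlo vs. randomized QMC (scrambled Sobol')
# on a smooth test integrand over [0, 1]^d.  All concrete choices below
# (integrand, dimension, sample sizes) are illustrative assumptions.
import numpy as np
from scipy.stats import qmc

d = 4  # dimension of the integral (illustrative choice)

def f(x):
    # Smooth product integrand; each factor integrates to 1 on [0, 1],
    # so the exact value of the integral over [0, 1]^d is 1.
    return np.prod(1.0 + 0.5 * (x - 0.5), axis=-1)

def mc_estimate(n, rng):
    # Plain Monte Carlo with n i.i.d. uniform points.
    x = rng.random((n, d))
    return f(x).mean()

def qmc_estimate(m, seed):
    # Randomized QMC with 2**m scrambled Sobol' points;
    # scrambling makes the equal-weight estimator unbiased.
    sobol = qmc.Sobol(d=d, scramble=True, seed=seed)
    x = sobol.random_base2(m)
    return f(x).mean()

rng = np.random.default_rng(0)
for m in range(6, 13):  # n = 64, ..., 4096
    n = 2 ** m
    mc_err = abs(mc_estimate(n, rng) - 1.0)
    qmc_err = abs(qmc_estimate(m, seed=m) - 1.0)
    print(f"n = {n:5d}   MC error = {mc_err:.2e}   QMC error = {qmc_err:.2e}")
```

On a smooth integrand like this, the QMC error typically decays noticeably faster in n than the plain Monte Carlo error, which is the gap in convergence rates that the abstract is concerned with exploiting.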