
Kernelized Variance Reduction for Quasi-Monte Carlo

Abstract

Quasi-Monte Carlo (QMC) methods are gaining in popularity among the machine learning community due to the increasingly challenging nature of numerical integrals that are routinely encountered in contemporary applications. For integrands that are $\alpha$-times differentiable, an optimal QMC algorithm converges at a rate $O(N^{-\alpha-\frac{1}{2}+\epsilon})$ for any $\epsilon>0$, and it is known that this rate is best-possible. However, in many applications it can happen that either the value of $\alpha$ is unknown or a rate-optimal QMC algorithm is unavailable. This raises the question of how to design a low-variance estimator for the integral in such circumstances. A direct approach employs a conservative lower bound $\alpha_L \leq \alpha$, but when $\alpha_L < \alpha$ we are sacrificing the full power of the QMC methodology. In this paper we show that if an upper bound $\alpha \leq \alpha_U$ is also available, then the direct approach can be accelerated by a factor $O(N^{-(\alpha - \alpha_L)/d})$, where $d$ is the dimension of the domain of integration. Such techniques are likely to become important with the growing adoption of QMC algorithms within the machine learning community.
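As a hedged illustration of the baseline behavior the abstract describes (not the paper's proposed estimator), the sketch below compares a plain Monte Carlo estimate of a smooth integral on $[0,1]^2$ with a scrambled Sobol' QMC estimate built from `scipy.stats.qmc`; the integrand and sample sizes are illustrative choices, and the typically much smaller QMC error reflects the faster convergence rate available for differentiable integrands.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative target: integrate f(x, y) = exp(x + y) over [0,1]^2.
# This integrand is smooth (large alpha), so QMC should outperform MC.
def f(x):
    return np.exp(x.sum(axis=1))

exact = (np.e - 1.0) ** 2  # closed-form value of the integral

d, n = 2, 2**12  # dimension and number of sample points

# Plain Monte Carlo: i.i.d. uniform points, error ~ O(N^{-1/2}).
rng = np.random.default_rng(0)
mc_est = f(rng.random((n, d))).mean()

# Scrambled Sobol' QMC: low-discrepancy points, faster error decay
# for smooth integrands (rates like those quoted in the abstract).
sobol = qmc.Sobol(d=d, scramble=True, seed=0)
qmc_est = f(sobol.random_base2(m=12)).mean()  # 2**12 points

print("MC error: ", abs(mc_est - exact))
print("QMC error:", abs(qmc_est - exact))
```

With a smooth integrand like this, the scrambled Sobol' estimate is usually accurate to several more digits than the plain Monte Carlo estimate at the same sample size, which is the gap the paper's variance-reduction technique aims to preserve when the smoothness $\alpha$ is only known to lie in $[\alpha_L, \alpha_U]$.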
