
arXiv:1501.03379 (v7, latest)

Variance Reduction for QMC in Reproducing Kernel Hilbert Spaces

14 January 2015
Chris J. Oates
Mark Girolami
Abstract

Quasi-Monte Carlo (QMC) methods are gaining in popularity in the machine learning community due to the increasingly challenging nature of numerical integrals that are routinely encountered in contemporary applications. For integrands that are $\alpha$-times differentiable, an $\alpha$-optimal QMC algorithm converges at a rate $O(N^{-\alpha-\frac{1}{2}+\epsilon})$ for any $\epsilon>0$, and it is known that this rate is best possible. However, in many applications it can happen that either the value of $\alpha$ is unknown or a rate-optimal QMC algorithm is unavailable. How can we perform efficient numerical integration in such circumstances? A direct approach is to employ $\alpha_L$-optimal QMC, where the lower bound $\alpha_L \leq \alpha$ is known; but when $\alpha_L < \alpha$ this does not exploit the full power of QMC. In this paper we show that if an upper bound $\alpha \leq \alpha_U$ is also available, then the direct approach can be accelerated by a factor $O(N^{-(\alpha-\alpha_L)/d})$, where $d$ is the dimension of the integral. Such variance reduction methods are likely to become practically important with the increasing adoption of QMC algorithms.
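Reading off the rates stated in the abstract, the acceleration factor compounds multiplicatively with the direct $\alpha_L$-optimal rate, so the accelerated method attains

\[
O\!\left(N^{-\alpha_L-\frac{1}{2}+\epsilon}\right) \times O\!\left(N^{-(\alpha-\alpha_L)/d}\right) \;=\; O\!\left(N^{-\alpha_L-\frac{1}{2}-(\alpha-\alpha_L)/d+\epsilon}\right),
\]

which interpolates between the $\alpha_L$-optimal rate (when $\alpha_L = \alpha$) and a strictly faster rate whenever $\alpha_L < \alpha$, with the gain shrinking as the dimension $d$ grows.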

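As a toy illustration of the low-discrepancy point sets underlying QMC (not the paper's variance-reduction construction), the sketch below integrates a smooth one-dimensional function with a base-2 van der Corput sequence; the point count `N = 1024` and the test integrand $f(x) = x^2$ (true integral $1/3$) are illustrative choices, not from the paper.

```python
def van_der_corput(i, base=2):
    """Radical-inverse of the integer i in the given base: the i-th
    point of the van der Corput low-discrepancy sequence on [0, 1)."""
    q, bk = 0.0, 1.0 / base
    while i > 0:
        q += (i % base) * bk
        i //= base
        bk /= base
    return q

def qmc_estimate(f, N):
    """Equal-weight QMC quadrature rule over the first N sequence points."""
    return sum(f(van_der_corput(i)) for i in range(1, N + 1)) / N

# Smooth integrand with known integral 1/3 on [0, 1].
est = qmc_estimate(lambda x: x * x, 1024)
print(est)  # close to 1/3; error decays roughly like O(log N / N)
```

For higher dimensions one would use a Halton or digital-net point set instead; the abstract's rates concern such rate-optimal constructions matched to the integrand's smoothness $\alpha$.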