Langevin Quasi-Monte Carlo

22 September 2023
Sifan Liu
arXiv:2309.12664
Abstract

Langevin Monte Carlo (LMC) and its stochastic gradient versions are powerful algorithms for sampling from complex high-dimensional distributions. To sample from a distribution with density $\pi(\theta) \propto \exp(-U(\theta))$, LMC iteratively generates the next sample by taking a step along the negative gradient $-\nabla U$ with added Gaussian perturbations. Expectations with respect to the target distribution $\pi$ are estimated by averaging over the LMC samples. In ordinary Monte Carlo, it is well known that the estimation error can be substantially reduced by replacing independent random samples with quasi-random samples such as low-discrepancy sequences. In this work, we show that the estimation error of LMC can also be reduced by using quasi-random samples. Specifically, we propose to use completely uniformly distributed (CUD) sequences with a certain low-discrepancy property to generate the Gaussian perturbations. Under smoothness and convexity conditions, we prove that LMC with a low-discrepancy CUD sequence achieves a smaller error than standard LMC. The theoretical analysis is supported by numerical experiments that demonstrate the effectiveness of our approach.
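As a concrete illustration of the idea in the abstract, the sketch below runs the standard unadjusted Langevin update $\theta_{k+1} = \theta_k - h\,\nabla U(\theta_k) + \sqrt{2h}\,\xi_k$, but generates the perturbations $\xi_k$ by pushing quasi-random uniforms through the inverse Gaussian CDF rather than drawing i.i.d. normals. The function names are ours, and the driver stream is a prefix of the classic MINSTD congruential generator standing in for a true CUD sequence (a genuinely CUD sequence comes from a small generator run over its entire period); the paper's actual construction may differ.

```python
import numpy as np
from scipy.special import ndtri  # inverse standard-normal CDF, Phi^{-1}(u)

def cud_style_uniforms(n_steps, dim, seed=12345):
    """Quasi-random uniforms driving the Gaussian perturbations.

    Illustration only: a prefix of the MINSTD multiplicative congruential
    generator. A genuinely CUD sequence uses a small generator run over
    its *entire* period, matched to the total number of uniforms needed."""
    m, a = 2**31 - 1, 16807          # MINSTD (Lewis-Goodman-Miller)
    x = seed
    u = np.empty(n_steps * dim)
    for i in range(u.size):
        x = (a * x) % m              # state stays in 1..m-1, so u is in (0, 1)
        u[i] = x / m
    return u.reshape(n_steps, dim)

def langevin_qmc(grad_U, theta0, step, n_steps):
    """Unadjusted Langevin iteration
        theta_{k+1} = theta_k - step * grad_U(theta_k) + sqrt(2*step) * xi_k,
    with xi_k obtained by inverting the Gaussian CDF at quasi-random
    uniforms instead of drawing i.i.d. normals."""
    theta = np.asarray(theta0, dtype=float)
    dim = theta.size
    xi = ndtri(cud_style_uniforms(n_steps, dim))  # quasi-random N(0, I) draws
    samples = np.empty((n_steps, dim))
    for k in range(n_steps):
        theta = theta - step * grad_U(theta) + np.sqrt(2 * step) * xi[k]
        samples[k] = theta
    return samples

# Toy example: sample a standard 2-d Gaussian, U(theta) = ||theta||^2 / 2,
# and estimate E[theta] by averaging the LMC samples.
samples = langevin_qmc(grad_U=lambda t: t, theta0=np.zeros(2),
                       step=0.1, n_steps=10_000)
print(samples.mean(axis=0))  # should be close to (0, 0)
```

Swapping `cud_style_uniforms` for i.i.d. uniforms (e.g. `np.random.rand`) recovers standard LMC, which is the baseline the abstract compares against.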
